Once you figure out how to get your model to go find the context it needs (for me this usually comes down to really good error messages that read a bit like a prompt injection attack), and you figure out how to keep the tasks small and uniform-ish so that a passing test for a previous (supervised) task becomes the justification for using that output as context for the next (unsupervised) task, agents can be pretty darn reliable.
Maybe 50% of the problems we solve are repetitive enough for this to make sense, and 50% of those are uniform enough that models are overkill, and 50% of those are too small to be worth investing in the necessary scaffolding. But if you're looking at a problem that's in that magical 12.5%, a property-constrained agent is absolutely the way to go.
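To make that concrete, here's roughly the shape of the loop I'm describing, as a Python sketch. Everything in it (run_agent, run_tests, the retry budget) is a hypothetical stand-in rather than any real API; the point is just that only test-passing output gets promoted into the context for the next task, and the descriptive failure message gets fed back in.

```python
def run_pipeline(tasks, run_agent, run_tests):
    context = []  # verified outputs from previous, supervised tasks
    for task in tasks:
        for _ in range(3):  # small, bounded retry budget
            output = run_agent(task, context)
            ok, error_message = run_tests(task, output)
            if ok:
                # Only test-passing output gets promoted to context
                # for the next (unsupervised) task.
                context.append({"task": task, "output": output})
                break
            # Feed the descriptive failure back in; this is where good
            # error messages do most of the work.
            task = f"{task}\n\nPrevious attempt failed:\n{error_message}"
        else:
            raise RuntimeError(f"Could not complete task: {task}")
    return context
```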
Marketing is being done really well in 2025, with brands injecting themselves into conversations on Reddit, LinkedIn, and every other public forum. [1]
CEOs, AI "thought leaders," and VCs are advertising LLMs as magic, and tools like v0 and Lovable as the next big thing. Every response from leaders is some variation of https://www.youtube.com/watch?v=w61d-NBqafM
On the ground, we know that creating CLAUDE.md or cursorrules basically does nothing. It’s up to the LLM to follow instructions, and it does so based on RNG as far as I can tell. I have very simple, basic rules set up that are never followed. This leads me to believe everyone posting on that thread on Cursor is an amateur.
Beyond this, if you’re working on novel code, LLMs are absolutely horrible at doing anything. A lot of assumptions are made, non-existent libraries are used, and agents are just great at using tokens to generate no tangible result whatsoever.
I’m at a stage where I use LLMs the same way I would use speech-to-text (code) - telling the LLM exactly what I want, what files it should consider, and it adds _some_ value by thinking of edge cases I might’ve missed, best practices I’m unaware of, and writing better grammar than I do.
Edit:
[1] To add to this, any time you use search or Perplexity or what have you, the results come from all this marketing garbage being pumped into the internet by marketing teams.
> if you’re working on novel code, LLMs are absolutely horrible
This is spot on. Current state-of-the-art models are, in my experience, very good at writing boilerplate code or very simple architecture especially in projects or frameworks where there are extremely well-known opinionated patterns (MVC especially).
What they are genuinely impressive at is parsing through large amounts of information to find something (e.g. in a codebase, or in stack traces, or in logs). But this hype machine of 'agents creating entire codebases' is surely just smoke and mirrors - at least for now.
I know I could be eating my words, but there is basically no evidence to suggest it ever becomes as exceptional as the kingmakers are hoping.
Yes, it advanced extremely quickly, but that is not a confirmation of anything. It could just be the technology quickly meeting us at either our limit of compute or its limit of capability.
My thinking here is that we already had the technologies of the LLMs and the compute, but we hadn't yet had the reason and capital to deploy it at this scale.
So the surprising innovation of transformers did not give us the boost in capability by itself; it still needed scale. The marketing that enabled the capital, which enabled that scale, is what caused the insane growth, and capital can't grow forever, it needs returns.
Scale has been exponential, and we are hitting an insane amount of capital deployment for this one technology that has yet to prove commercially viable at the scale of a paradigm shift.
Are businesses that are not AI-based actually seeing ROI on AI spend? That is really the only question that matters, because if that is false, the money and drive for the technology vanishes and the scale that enables it disappears too.
It did, but it's kinda stagnated now, especially on the LLM front. The time when a groundbreaking model came out every week is over for now. Later revisions of existing models, like GPT5 and llama4, have been underwhelming.
GPT5 may have been underwhelming to _you_. Understand that they're heavily RLing to raise the floor on these models, so they might not be magically smarter across the board, but there are a LOT of areas where they're a lot better that you've probably missed because they're not your use case.
every time i say "the tech seems to be stagnating" or "this model seems worse" based on my observations i get this response. "well, it's better for other use cases." i have even heard people say "this is worse for the things i use it for, but i know it's better for things i don't use it for."
i have yet to hear anyone seriously explain to me a single real-world thing that GPT5 is better at with any sort of evidence (or even anecdote!). i've seen benchmarks! but i cannot point to a single person who seems to think that they are accomplishing real-world tasks with GPT5 better than they were with GPT4.
the few cases i have heard that venture near that ask may be moderately intriguing, but don't seem to justify the overall cost of building and running the model, even if there have been marginal or perhaps even impressive leaps in very narrow use cases. one of the core features of LLMs is they are allegedly general-purpose. i don't know that i really believe a company is worth billions if they take their flagship product that can write sentences, generate a plan, follow instructions and do math and they are constantly making it moderately better at writing sentences, or following instructions, or coming up with a plan and it consequently forgets how to do math, or becomes belligerent, or sycophantic, or what have you.
to me, as a user with a broad range of use cases (internet search, text manipulation, deep research, writing code) i haven't seen many meaningful increases in quality of task execution in a very, very long time. this tracks with my understanding of transformer models, as they don't work in a way that suggests to me that they COULD be good at executing tasks. this is why i'm always so skeptical of people saying "the big breakthrough is coming." transformer models seem self-limiting by merit of how they are designed. there are features of thought they simply lack, and while i accept there's probably nobody who fully understands how they work, i also think at this point we can safely say there is no superintelligence in there to eke out and we're at the margins of their performance.
the entire pitch behind GPT and OpenAI in general is that these are broadly applicable, dare-i-say near-AGI models that can be used by every human as an assistant to solve all their problems and can be prompted with simple, natural language english. if they can only be good at a few things at a time and require extensive prompt engineering to bully into consistent behavior, we've just created a non-deterministic programming language, a thing precisely nobody wants.
Claude Sonnet 4.5 is _way_ better than previous sonnets and as good as Opus for the coding and research tasks I do daily.
I rarely use Google search anymore, both because llms got that ability embedded and because the chatbots are good at looking through the swill that search results have become.
"it's better at coding" is not useful information, sorry. i'd love to hear tangible ways it's actually better. does it still succumb to coding itself in circles, taking multiple dependencies to accomplish the same task, applying inconsistent, outdated, or non-idiomatic patterns for your codebase? has compliance with claude.md files and the like actually improved? what is the round trip time like on these improvements - do you have to have a long conversation to arrive at a simple result? does it still talk itself into loops where it keeps solving and unsolving the same problems? when you ask it to work through a complex refactor, does it still just randomly give up somewhere in the middle and decide there's nothing left to do? does it still sometimes attempt to run processes that aren't self-terminating to monitor their output and hang for upwards of ten minutes?
my experience with claude and its ilk is that they are insanely impressive in greenfield projects and collapse in legacy codebases quickly. they can be a force multiplier in the hands of someone who actually knows what they're doing, i think, but even the evidence of that is pretty shaky: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
the pitch that "if i describe the task perfectly in absolute detail it will accomplish it correctly 80% of the time" doesn't appeal to me as a particularly compelling justification for the level of investment we're seeing. actually writing the code is the simplest part of my job. if i've done all the thinking already, i can just write the code. there's very little need for me to then filter that through a computer with an overly-verbose description of what i want.
as for your search results issue: i don't entirely disagree that google is unusable, but having switched to kagi... again, i'm not sure the order of magnitude of complexity of searching via an LLM is justified? maybe i'm just old, but i like a list of documents presented without much editorializing. google has been a user-hostile product for a long time, and its particularly recent quality collapse has been well-documented, but this seems a lot more a story of "a tool we relied on has gotten measurably worse" and not a story of "this tool is meaningfully better at accomplishing the same task." i'll hand it to chatgpt/claude that they are about as effective as google was at directing me to the right thing circa a decade ago, when it was still a functional product - but that brings me back to the point that "man, this is a lot of investment and expense to arrive at the same result way more indirectly."
You asked for a single anecdote of llms getting better at daily tasks. I provided two. You dismissed them as not valuable _to you_.
It’s fine that your preferences aren’t aligned such that you don’t value the model or improvements that we’ve seen. It’s troubling that you use that to suggest there haven’t been improvements.
> Yes, it advanced extremely quickly, but that is not a confirmation of anything. It could just be the technology quickly meeting us at either our limit of compute or its limit of capability.
To comment on this, because it's the most common counter-argument: most technology has worked in steps. We take a step forward, then iterate on essentially the same thing. It's very rare that we see an order-of-magnitude improvement on the same fundamental "step".
Cars were quite a step forward from donkeys, but modern cars are not that far off from the first ones. Planes were an amazing invention, but the next model of plane is basically the same thing as the first one.
I agree, I think we are in the latter phase already. LLMs were a huge leap in machine learning, but everything after has been steps on top + scale.
I think we would need another leap to actually meet the market's expectations on AI. The market is expecting AGI, but I think we are probably just going to make incremental improvements to language and multimodal models from here, and not meet those expectations.
I think the market is relying on something that doesn't currently exist to become true, and that is a bit irrational.
Transformers aren't it, though. We need a new fundamental architecture and, just like every step forward in AI that came before, when that happens is a completely random event. Some researcher needs to wake up with a brilliant idea.
The explosion of compute and investment could mean that we have more researchers available for that event to happen, but at the same time transformers are sucking up all the air in the room.
Several people hinted at the limits this technology was about to face, including training data and compute. It was obvious it had serious limits.
Despite the warnings, companies insisted on marketing superintelligence nonsense and magic automatic developers. They convinced the market with disingenuous demonstrations, which, again, were called out as bullshit by many people. They are still doing it. It's the same thing.
The question in your last paragraph is not the only one that matters. Funding the technology at a material loss will not be off the table. Think about why.
> I know I could be eating my words, but there is basically no evidence to suggest it ever becomes as exceptional as the kingmakers are hoping.
??? It has already become exceptional. In 2.5 years (since chatgpt launched) we went from "oh, look how cute this is, it writes poems and the code almost looks like python" to "hey, this thing basically wrote a full programming language[1] with genz keywords, and it mostly works, still has some bugs".
I think goalpost-moving is at play here, and we quickly forget how much difference one year makes (last year you needed tons of glue and handwritten harnesses to do anything - see aider), whereas today you can give them a spec and get a mostly working project (albeit with some bugs), $50 later.
I feel like the invention of MCP was a lot more instrumental to that than model upgrades proper. But look at it as a good thing, if you will: it shows that even if models are plateauing, there's a lot of value to unlock through the tooling.
> invention of MCP was a lot more instrumental [...] than model upgrades proper
Not clear. The folks at hf showed that a minimal "agentic loop" in 100 LoC [1] that gives the agent "just bash access" still got very close to SotA with all the bells and whistles (and surpassed last year's models w/ handcrafted harnesses).
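For reference, the whole idea fits in a loop like the following (a rough Python sketch, not the actual hf code; call_model is a hypothetical stand-in for whatever chat-completion API you use):

```python
# Minimal "bash-only" agentic loop: the model proposes one shell command per
# turn, we run it and feed stdout/stderr back as context, until it says DONE.
import subprocess

def bash_agent(goal, call_model, max_steps=20):
    history = [{"role": "user", "content":
                f"Goal: {goal}\nReply with a single bash command per turn, "
                f"or DONE when finished."}]
    for _ in range(max_steps):
        reply = call_model(history).strip()
        if reply == "DONE":
            return history
        # Run the proposed command; a timeout keeps runaway commands bounded.
        result = subprocess.run(reply, shell=True, capture_output=True,
                                text=True, timeout=120)
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content":
                        f"exit={result.returncode}\n"
                        f"stdout:\n{result.stdout}\nstderr:\n{result.stderr}"})
    return history
```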
I don't disagree with you on the technology, but mostly my comment is about what the market is expecting. With such a huge capex expenditure, it is expecting huge returns. Given AI has not proven consistent ROI for other enterprises generally (as far as I know), they are hoping for something better than what exists right now, and they are hoping for it to happen before the money runs out.
I am not saying it's impossible, but there is no evidence that the leap in technology needed to reach the wild profitability (replacing general labour) that such investment desires is just around the corner either.
To phrase this another way, using old terms: We seem to be approaching the uncanny valley for LLMs, at which point the market overall will probably hit the trough of disillusionment.
Let's say we found a company that has already realized 5-10% of savings in the first step. Based on this, we might be able to map out the path to 25-30% savings in 5% steps, for example.
I personally haven’t seen this, but I might have missed it as well.
Three years? One year ago I tried using LLMs for coding and found them to be more trouble than they were worth, no benefit in time spent or effort made. It's only within the past several months that this has changed, IMHO.
That there is a bubble is absolutely certain. If for no other reason, than because investors don't understand the technology and don't know which companies are for real and which are essentially scams, they dump money into anything with the veneer of AI and hope some of it sticks. We're replaying the dotcom bubble, a lot of people are going to get burned, a lot of companies will turn out to be crap. But at the end of the dotcom crash we had some survivors standing above the rest and the whole internet thing turned out to have considerable staying power. I think the same will happen with AI, particularly agentic coding tools. The technology is real and will stick with us, even after the bubble and crash.
I have had LLMs write entire codebases for me, so it's not like the hype is completely wrong. It's just that this only works if what you want is "boring", limited in scope and on a well-trodden path. You can have an LLM create a CRUD application in one go, or if you want to sort training data for image recognition you can have it generate a one-off image viewer with shortcuts tailored to your needs for this task. Those are powerful things and worthy of some hype. For anything more complex you very quickly run into limits and the time and effort to do it with an LLM quickly approaches the time and effort required to do it by hand.
They're powerful, but my feeling is that largely you could do this pre-LLM by searching on Stack Overflow or copying and pasting from the browser and adapting those examples, if you knew what you were looking for. Where it adds power is adapting it to your particular use case + putting it in the IDE. It's a big leap but not as enormous a leap as some people are making out.
Of course, if you don't know what you are looking for, it can make that process much easier. I think this is why people at the junior end find it is making them (a claimed) 10x more productive. But people who have been around for a long time are more skeptical.
> Where it adds power is adapting it to your particular use case + putting it in the IDE. It's a big leap but not as enormous a leap as some people are making out.
To be fair, this is super, super helpful.
I do find LLMs helpful for search and providing a bunch of different approaches for a new problem/area though. Like, nothing that couldn't be done before but a definite time saver.
Finally, they are pretty good at debugging, they've helped me think through a bunch of problems (this is mostly an extension of my point above).
Hilariously enough, they are really poor at building MCP-like stuff, as this is too new for them to have many examples in the training data. Makes total sense, but still endlessly amusing to me.
Doing it the old-fashioned lazy way, copy-pasting snippets of code you search for on the internet and slightly modifying each one to fit with the rest of your code, would take me hours to achieve the kind of slop that claude code can one-shot in five minutes.
Yeah yeah, call me junior or whatever, I have thick skin. I'm a lazy bastard and I no longer care about the art of the craft, I just want programs tailored to my tastes and agentic coding tools are by far the fastest way to get it. 10x doesn't even come close, it's more like 100x just on the basis of time alone. Effort? After the planning stage I kick back with video games while the tool works. Far better than 100x for effort.
i have seen so many people say that, but the app stores/package managers aren't being flooded with thousands of vibe coded apps, meanwhile facebook is basically ai slop. can you share your github? or a gist of some of these "codebases"?
You seem critical of people posting AI slop on Facebook (so am I) but also want people to publish more AI slop software?
The AI slop software I've been making with Claude is intended for my own personal use. I haven't read most of the code and certainly wouldn't want to publish it under my own name. But it does work, it scratches my itches, fills my needs. I'm not going to publish the whole thing because that's a whole can of worms, but to hopefully satisfy your curiosity, here is the main_window.py of my tag-based file manager. It's essentially a CRUD application built with sqlite and pyside6. It doesn't do anything terribly adventurous, the most exciting it gets is keeping track of tag co-occurrences so it can use naive Bayesian classifiers to recommend tags for files, order files by how likely they are to have a tag, etc.
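Since you asked what it actually does: the tag recommendation boils down to something like this (a toy Python sketch of the idea, not the code Claude actually generated):

```python
# Rank candidate tags by a naive-Bayes-style score built from pairwise
# co-occurrence counts with the tags already on the file.
from math import log

def recommend_tags(current_tags, tag_counts, cooccurrence, total_files, top_n=5):
    """tag_counts: dict/Counter of tag -> number of files carrying it.
    cooccurrence: dict of (tag_a, tag_b) -> number of files carrying both
    (pairs stored in both orders)."""
    scores = {}
    for candidate, count in tag_counts.items():
        if candidate in current_tags:
            continue
        # log prior: how common the candidate tag is overall
        score = log(count / total_files)
        for t in current_tags:
            pair = cooccurrence.get((candidate, t), 0)
            # Laplace smoothing so unseen pairs don't zero out the score.
            score += log((pair + 1) / (count + 2))
        scores[candidate] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```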
> "the app stores/package managers aren't being flooded with thousands of vibe coded apps"
The state of claude code presently is definitely good enough to churn out low effort shovelware. Insofar as that isn't evidently happening, I can only speculate about the reasons. In no order, it may be one or several of these:
- Lots of developers feel threatened by the technology and won't give it a serious whirl.
- Non-developers are still stuck in the mindset of writing software being something they can't do.
- The general public isn't as aware of the existence of agentic coding tools as we on HN are.
- The appstores are being flooded with slop, as they always have been, and some of that slop is now AI slop, but doesn't advertise this fact, and the appstore algorithms generally do some work to suppress the visibility of slop anyway.
- Most people don't have good ideas for new software and don't have the reflex to develop new software to scratch their itches; instead they are stuck in the mentality of software consumers.
Just some ideas..
> Current state-of-the-art models are, in my experience, very good at writing boilerplate code or very simple architecture especially in projects or frameworks where there are extremely well-known opinionated patterns (MVC especially).
Which makes sense, considering the absolutely massive amount of tutorials and basic HOWTOs that were present in the training data, as they are the easiest kind of programming content to produce.
1. LLMs would suck at coming up with new algorithms.
2. I wouldn't let an LLM decide how to structure my code. Interfaces, module boundaries etc
Other than that, given the right context (the SDK doc for a unique piece of hardware, for example) and a well-organised codebase explained using CLAUDE.md, they work pretty well in filling out implementations. Just need to resist the temptation to prompt when the actual typing would take seconds.
Yep, LLMs are basically at the "really smart intern" level. Give them anything complex or that requires experience and they crash and burn. Give them a small, well-specified task with limited scope and they do reasonably well. And like an intern they require constant check-ins to make sure they're on track.
Of course with real interns you end up at the end with trained developers ready for more complicated tasks. This is useful because interns aren't really that productive if you consider the amount of time they take from experienced developers, so the main benefit is producing skilled employees. But LLMs will always be interns, since they don't grow with the experience.
My experience is opposite to yours. I have had Claude Code fix issues in a compiler over the last week with very little guidance. Occasionally it gets frustrating, but most of the time Claude Code just churns through issue after issue, fixing subtle code generation and parser bugs with very little intervention. In fact, most of my intervention is tool weaknesses in terms of managing compaction to avoid running out of context at inopportune moments.
It's implemented methods I'd have to look up in books to even know about, and shown that it can get them working. It may not do much truly "novel" work, but very little code is novel.
They follow instructions very well if structured right, but you can't just throw random stuff in CLAUDE.md or similar. The biggest issue I've run into recently is that they need significant guidance on process. My instructions tend to focus on three separate areas: 1) debugging guidance for a given project (for my compiler project, that means things like "here's how to get an AST dumped from the compiler" and "use gdb to debug crashes" (it sometimes did that without being told, but not consistently; with the instructions it usually does that)), 2) acceptance criteria - this does need reiteration, 3) telling it to run tests frequently, make small, testable changes, and to frequently update a detailed file outlining the approach to be taken, progress towards it, and any outcomes of investigation during the work.
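Paraphrased, the skeleton of such a file looks something like this (the placeholder command and the PROGRESS.md name are made up here; the real file is project-specific):

```markdown
## Debugging this project
- To dump the AST: <project-specific command goes here>
- For crashes: reproduce under gdb and capture the backtrace before changing code.

## Acceptance criteria
- The change is done only when the agreed test cases pass. Restate them before
  claiming completion.

## Process
- Make small, testable changes; run the test suite frequently.
- Keep PROGRESS.md up to date: the approach being taken, progress towards it,
  and the outcomes of any investigation.
```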
My experience is that with those three things in place, I can have Claude run for hours with --dangerously-skip-permissions and only step in to say "continue" or do a /compact in the middle of long runs, with only the most superficial checks.
It doesn't always provide perfect code every step. But neither do I. It does however usually move in the right direction every step, and has consistently produced progress over time with far less effort on my behalf.
I wouldn't have it start from scratch without at least some scaffolding that is architecturally sound yet, but it can often do that too, though that needs review before it "locks in" a bad choice.
I'm at a stage where I'm considering harnesses to let Claude work on a problem over the course of days without human intervention instead of just tens of minutes to hours.
It is like, when you need some prediction (e.g. about market behavior), knowing that somewhere out there there is a person who will make the perfect one. However, instead of your problem being to make the prediction, it is now to find and identify that expert. Is the type of problem you converted yours into any less hard, though?
I too have had some great minor successes; the current products are definitely a great step forward. However, every time I start anything more complex I never know in advance whether I'll end up with utterly unusable code, even after corrections (with the "AI" always confidently claiming that now it definitely fixed the problem), or something usable.
All those examples such as yours suffer from one big problem: They are selected afterwards.
To be useful, you would have to make predictions in advance and then run the "AI" and have your prediction (about its usefulness) verified.
Selecting positive examples after the work is done is not very helpful. All it does is prove that at least sometimes somebody gets something useful out of using an LLM for a complex problem. Okay? I think most people understand that by now.
PS/Edit: Also, success stories we only hear about but cannot follow and reproduce may have been somewhat useful initially, but by now most people are beyond that, willing to give it a try, and would like to have a link to the working and reproducible example. I understand that work can rarely be shared, but then those examples are not very useful any more at this point. What would add real value for readers of these discussions now is when people who say they were successful posted the full, working, reproducible example.
EDIT 2: Another thing: I see comments from people who say they did tweak CLAUDE.md and got it to work. But the point is predictability and consistency! If you have that one project where you twiddled around with the file and added random sentences that you thought could get the LLM to do what you need, that's not very useful. We already know that trying out many things sometimes yields results. But we need predictability and consistency.
We are used to being able to try stuff, and when we get it working we could almost always confidently say that we found the solution, and share it. But LLMs are not that consistent.
My point is that these are not minor successes, and not occasional. Not every attempt is equally successful, but a significant majority of my attempts are. Otherwise I wouldn't be letting it run for longer and longer without intervention.
For me this isn't one project where I've "twiddled around with the file and added random sentences". It's an increasingly systematic approach to giving it an approach to making changes, giving it regression tests, and making it make small, testable changes.
I do that because I can predict with a high rate of success that it will achieve progress for me at this point.
There are failures, but they are few, and they're usually fixed simply by starting it over again from after the last successful change when it takes too long without passing more tests. Occasionally it requires me to turn off --dangerously-skip-permissions and guide it through a tricky part. But that is getting rarer and rarer.
No, I haven't formally documented it, so it's reasonable to be skeptical (I have however started packaging up the hooks and agents and instructions that consistently work for me on multiple projects. For now, just for a specific client, but I might do a writeup of it at some point) but at the same time, it's equally warranted to wonder whether the vast difference in reported results is down to what you suggest, or down to something you're doing differently with respect to how you're using these tools.
I had a highly repetitive task (/subagents is great to know about), but I didn't get more advanced than a script that sent "continue\n" into the terminal where CC was running every X minutes. What was frustrating is CC was inconsistent with how long it would run. Needing to compact was a bit of a curveball.
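Something like the following does the job if CC is running in a tmux pane (the session name and interval here are made-up placeholders; adapt to however your terminal is set up):

```python
# Periodically type "continue" + Enter into the tmux pane running Claude Code.
import subprocess
import time

SESSION = "cc"          # hypothetical tmux session name
INTERVAL_MINUTES = 15   # "every X minutes"

while True:
    time.sleep(INTERVAL_MINUTES * 60)
    subprocess.run(
        ["tmux", "send-keys", "-t", SESSION, "continue", "Enter"],
        check=False,  # don't crash the loop if the session is gone
    )
```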
The compaction is annoying, especially when it sometimes will then fail to compact with an error, forcing rewinding. They do need to tighten that up so it doesn't need so much manual intervention...
That is true, so don't give it entirely free rein with that. I let Claude generate as many additional tests as it'd like, but I either produce high-level tests, or review a set generated by Claude first, before I let it fill in the blanks, and it's instructed very firmly to see a specific set of test cases as critical, and then increasingly "boxed in" with more validated test cases as we go along.
E.g. for my compiler, I had it build scaffolding to make it possible to run rubyspecs. Then I've had it systematically attack the crashes and failures mostly by itself once the test suite ran.
Is it? Stuff like ripgrep, msmtp, … are very much one-man projects. And most packages on distros are maintained by only one person. Expertise is a thing, and getting reliable results is what differentiates experts from amateurs.
Coding with Claude feels like playing a slot machine. Sometimes you get more or less what you asked, sometimes totally not. I don’t think it’s wise or sane to leave them unattended.
If you spend most of your time in planning mode, that helps considerably. It will almost always implement whatever it is that you planned together, so if you're willing to plan extensively enough you'll more or less know what you're going to get out of it when you finally set it loose.
I found that using opus helps a lot. It's eyewateringly expensive though so I generally avoid it. I pay through the API calls because I don't tend to code much.
Genuinely interesting how divergent people's experiences of working with these models is.
I've been 5x more productive using codex-cli for weeks. I have no trouble getting it to convert a combination of unusually-structured source code and internal SVGs of execution traces to a custom internal JSON graph format - very clearly out-of-domain tasks compared to their training data. Or mining a large mixed python/C++ codebase including low-level kernels for our RISCV accelerators for ever-more accurate docs, to the level of documenting bugs as known issues that the team ran into the same day.
We are seeing wildly different outcomes from the same tools and I'm really curious about why.
> Beyond this, if you’re working on novel code, LLMs are absolutely horrible at doing anything. A lot of assumptions are made, non-existent libraries are used, and agents are just great at using tokens to generate no tangible result whatsoever.
Not my experience. I've used LLMs to write highly specific scientific/niche code and they did great, but obviously I had to feed them the right context (compiled from various websites and books converted to markdown in my case) to understand the problem well enough. That adds additional work on my part, but the net productivity is still very much positive because it's a one-time setup cost.
Telling LLMs which files they should look at was indeed necessary 1-2 years ago in early models, but I have not done that for the last half year or so, and I'm working on codebases with millions of lines of code. I've also never had modern LLMs use nonexistent libraries. Sometimes they try to use outdated libraries, but it fails very quickly once they try to compile and they quickly catch the error and follow up with a web search (I use a custom web search provider) to find the most appropriate library.
I'm convinced that anybody who says that LLMs don't work for them just doesn't have a good mental model of HOW LLMs work, and thus can't use them effectively. Or their experience is just outdated.
That being said, the original issue that they don't always follow instructions from CLAUDE/AGENT.md files is quite true and can be somewhat annoying.
> Not my experience. I've used LLMs to write highly specific scientific/niche code and they did great, but obviously I had to feed them the right context (compiled from various websites and books converted to markdown in my case) to understand the problem well enough. That adds additional work on my part, but the net productivity is still very much positive because it's a one-time setup cost.
I've been genuinely surprised how well GPT5 does with rust! I've done some hairy stuff with Tokio/Arena/SIMD that I thought I would have to hand hold it through, and it got it.
Yeah, it has been really good in my experience. I've done some niche WASM stuff with custom memory layouts and parallelism and it did great there too, probably better than I could've done without spending several hours reading up on stuff.
It's pretty good at Rust, but it doesn't understand locking, at least when I tried it. It just put a lock on everything and then didn't take care to make sure the locks were released as soon as possible. This severely limited the scalability of the system it produced.
But I guess it passed the tests it wrote, so win? Though it didn't seem to understand why the test it wrote, where the client used TLS and the server didn't, wouldn't pass, and it required a lot of hand-holding along the way.
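The lock-scope problem isn't Rust-specific; in Python terms it's the difference between these two handlers (expensive_compute is a made-up stand-in for the real work):

```python
# First version serializes the slow work; second only guards the shared state.
import threading

lock = threading.Lock()
results = []

def expensive_compute(payload):
    return sum(range(10_000))  # stand-in for real work

def handle_request_slow(payload):
    with lock:                          # lock held across the expensive call
        value = expensive_compute(payload)
        results.append(value)

def handle_request_scalable(payload):
    value = expensive_compute(payload)  # no lock needed for the pure work
    with lock:                          # lock only around the shared mutation
        results.append(value)
```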
I've experienced similar things, but my conclusion has usually been that the model is not receiving enough context in such cases. I don't know your specific example, but in general it may not be incorrect to put an Arc/Lock on many things at once (or to use Arc instead of Rc, etc.) if your future plans are to parallelize several parts of your codebase. The model just doesn't know what your future plans are, and it errs on the side of "overengineering" solutions for all kinds of future possibilities. I found that this is a bias these models tend to have; many times their code is overengineered for features I will never need and I have to tell them to simplify - but that's expected. How would the model know what I do and don't need in the future without me giving it all the right context?
The same thing is true for tests. I found their tests to be massively overengineered, but that's easily fixed by telling them to adopt the testing style from the rest of the codebase.
> and it adds _some_ value by thinking of edge cases I might’ve missed, best practices I’m unaware of, and writing better grammar than I do.
This is my most consistent experience. It is great at catching the little silly things we do as humans. As such, I have found them to be most useful as PR reviewers, which you take with a pinch of salt.
> It is great at catching the little silly things we do as humans.
It's great some of the time, but the great draw of computing was that it would always catch the silly things we do as humans.
If it didn't, we'd change the code and the next time (and forever onward) it would catch that case too.
Now we're playing whack-a-mole and pleading with words like "CRITICAL" and bold text in our .cursorrules to try and make the LLM pay attention; maybe it works today, might not work tomorrow.
Meanwhile the C-suite pushing these tools onto us still happily blame the developers when there's a problem.
> It's great some of the time, but the great draw of computing was that it would always catch the silly things we do as humans.
People are saying that you should write a thesis-length file of rules, and they’re the same people balking at programming language syntax and formalism. Tools like linters, test runners, compilers are reliable in a sense that you know exactly where the guardrails are and where to focus mentally to solve an issue.
> brands injecting themselves into conversations on Reddit, LinkedIn, and every other public forum.
Don't forget HackerNews.
Every single new release from OpenAI and other big AI firms attracts a lot of new accounts posting surface-level comments like "This is awesome" and then a few older accounts that have exclusively posted on previous OpenAI-related news to defend them.
It's glaringly obvious, and I wouldn't be surprised if at least a third of the comments on AI-related news is astroturfing.
Sam Altman would agree with you that those posts are bots and lament it, but would simultaneously remain (pretend to be?) absurdly oblivious about his own fault in creating that situation.
Or the "I created 30 different .md instruction files and AI model refactored/wrote from scratch/fixed all my bugs" trope.
> a third of the comments on AI-related news is astroturfing.
I wouldn't be surprised if it's even more than that.. And, ironically, probably aided in their astroturfing, by the capability of said models to spew out text..
I personally always love the “I wrote an entire codebase with Claude” posts where the response to “Can we see it?” is either the original poster disappearing into the mist until the next AI thread or “no I am under an NDA. My AI-generated code is so incredible and precious that my high-paying job would be at risk for disclosing it”
NDA on AI generated code is funny since model outputs are technically someone else’s code. It’s amazing how we’re infusing all kinds of systems with potential license violations
If anyone actually believed those requests to see code were sincere, or if they at least generated interesting discussion, people might actually respond. But the couple of times I've linked to a blog post someone wrote about their vibe-coding experience in the comments, someone invariably responds with an uninteresting shallow dismissal shitting all over the work. It didn't generate any interesting discussion, so I stopped bothering.
And I think, in this blog post, the author stated that he does heavy editing of what’s generated. So I don’t know how much time is saved actually. You can get the same kind of inspiration from docs, books, or some SO answer.
Honestly I've generated some big-ish codebases with AI and have said so, and then backed off when asked... because a) I still want to try to establish more confidence in the codebase, and b) my employment contract gleefully states everything I write belongs to my employer. Both of those things make me nervous.
That said, I have no doubt there are also bots setting out to generate FOMO
My experience is kind of the opposite of what you describe (working in big tech). Like, I'm easily hitting 10x levels of output nowadays, and it's purely enabled by agentic coding. I don't really have an answer for why everyone's experience is so different - but we should be careful to not paint in broad strokes our personal experience with AI: "everyone knows AI is bad" - nope!
What I suspect is it _heavily_ depends on the quality of the existing codebase and how easy the language is to parse. Languages like C++ really hurt the agent's ability to do anything, unless you're using a very constrained version of it. Similarly, spaghetti codebases which do stupid stuff like asserting true / false in tests with poor error messages, and that kind of thing, also cause the agents to struggle.
Basically - the simpler your PL and codebase, the better the error and debugging messages, the easier it is to be productive with the AI agents.
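To make the "asserting true / false with poor error messages" point concrete, here's the kind of contrast I mean (a made-up pytest-style example, not from any real codebase):

```python
# The second assert gives an agent (or a human) something to act on;
# the first just reports a bare AssertionError on failure.
def apply_discount(price, pct):
    return price * (1 - pct / 100)

def test_discount_opaque():
    assert apply_discount(200, 25) == 150

def test_discount_descriptive():
    result = apply_discount(200, 25)
    assert result == 150, (
        f"apply_discount(200, 25) returned {result}, expected 150 "
        f"(pct is a percentage, not a fraction)"
    )
```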
> we know that creating CLAUDE.md or cursorrules basically does nothing
While I agree, the only cases where I actually created something even barely useful (while still of subpar quality) were after putting lines like these in CLAUDE.md:
YOUR AIM IS NOT TO DELIVER A PROJECT. YOU AIM IS TO DO DEEP, REPETITIVE E2E TESTING. ONLY E2E TESTS MATTER. BE EXTREMELY PESSIMISTIC. NEVER ASSUME ANYTHING WORKS. ALWAYS CHECK EVERY FEATURE IN AT LEAST THREE DIFFERENT WAYS. USE ONLY E2E TESTS, NEVER USE OTHER TYPES OF TEST. BE EXTREMELY PESSIMISTIC. NEVER TRUST ANY CODE UNLESS YOU DEEPLY TEST IT E2E
REMEMBER, QUICK DELIVERY IS MEANINGLESS, IT'S NOT YOUR AIM. WORK VERY SLOWLY, STEP BY STEP. TAKE YOUR TIME AND RE-VERIFY EACH STEP. BE EXTREMELY PESSIMISTIC
With this kind of setup, it kind of attempts to work in a slightly different way than it normally does and is able to build some very basic stuff, although frankly I'd do it much better, so I'm not sure about the economics here. Maybe for people who don't care or won't be maintaining this code it doesn't matter, but personally I'd never use it in my workplace.
The answer is really trivial and embarrassingly simple once you remove the engineering/functional/world-improvement goggles. The answer is: because the rich folks invested a ton of money and they need it to work. Or at least to make most white-collar work dependent on it, quality be damned. Hence the ever-increasing pushing, nudging, advertising, and offering to use the crap-tech everywhere. It seems, for now, it will not win over the engineers. Unfortunately, it seems to work with most of the general population. Every lazy recruiter out there is now using chatgpt to generate job summaries and "evaluate" candidates. Every general-purpose "office worker" deadweight you meet at every company is happy to use it to produce more powerpoints, slides and documents for you to drown in. And I won't even mention the "content" business model of the influencers.
At our place we have two types of users. One that is a deep evangelist, says it revolutionised their office work and has no idea that it might have accuracy problems. I guess those are the people that just create a lot of hot air.
The others tried it and ran into the obvious Achilles heels and are now pretty cautious. But use it for a thing or two.
They're using it, yes, but it's still heavily subsidized by VC, and it remains to be seen whether it will remain as popular as prices percolate upwards.
Either way, layoffs all the same if this "doesn't work".
I actually hope to find better answers here than on the cursor forum, where people seem to be basically saying "it's your fault" instead of answering the actual question, which is about trust, process, and real-world use of agents.
So far it's just reinforcing my feeling that none of this is actually used at scale. We use AI as relatively dumb companions, let them go wilder on side projects which have looser constraints, and agents are pure hype (or for very niche use cases).
What specific improvements are you hoping for? Without them (in the original forum post) giving concrete examples, prompts, or methodology – just stating "I write good prompts" – it's hard to evaluate or even help them.
They came in primed against agentic workflows. That is fine. But they also came in without providing anything that might have given other people the chance to show that their initial assumptions were flawed.
I've been working with agents daily for several months. Still learning what fails and what works reliably.
Key insights from my experience:
- You need a framework (like agent-os or similar) to orchestrate agents effectively
- Balance between guidance and autonomy matters
- Planning is crucial, especially for legacy codebases
Recent example: Hit a wall with a legacy system where I kept maxing out the context window with essential background info. After compaction, the agent would lose critical knowledge and repeat previous mistakes.
Solution that worked:
- Structured the problem properly
- Documented each learning/discovery systematically
- Created specialized sub-agents for specific tasks (keeps context windows manageable)
Only then could the agent actually help navigate that mess of legacy code.
So at what point are you doing more work on the agent than working on the code directly? And what are you losing in the process of shifting from code author to LLM manager?
My experience is that once I switch to this mode, when something blows up I'm basically stuck with a bunch of code that I only sort of know, even though I reviewed it. I just don't have the same insight as I would if I wrote the code, no ownership, even if it was committed in my name. Any misconceptions I've had about how things work I will still have, because I never had to work through the solution, even if I got the final working solution.
The reason why OP is getting terrible results is because he's using Cursor, and Cursor is designed to ruthlessly prune context to curtail costs.
Unlike the model providers, Cursor has to pay the retail price for LLM usage. They're fighting an ugly marginal price war. If you're paying more for inference than your competitors, you have to choose to either 1) deliver equal performance as other models at a loss or 2) economize by way of feeding smaller contexts to the model providers.
Cursor is not transparent on how it handles context. From my experience, it's clear that they use aggressive strategies to prune conversations to the extent that it's not uncommon that cursor has to reference the same file multiple times in the same conversation just to know what's going on.
My advice to anyone using Cursor is to just stop wasting your time. The code it generates creates so much debt. I've moved on to Codex and Claude and I couldn't be happier.
> Or is the performance of those models also worse there?
The context and output limit is heavily shrunk down on github copilot[0].
That's the reason why for example Sonnet 4.5 performs noticeably worse under copilot than in claude code.
Github Copilot is likely running models at or close to cost, given that Azure serves all those models. I haven't used Copilot in several months so I can't speak to its performance. My perception back then was that its underperformance relative to peers was because Microsoft was relatively late to the agentic coding game.
I've had agents find several production bugs that slipped past me (as I couldn't dedicate enough time to chase down relatively obscure and isolated bug reports).
Of course there are many more bugs they'll currently not find, but when this strategy costs next to nothing (compared to a SWE spending an hour spelunking) and still works sometimes, the trade-off looks pretty good to me.
Exactly. The actual business value is way smaller than people think, and it's honestly frustrating. Yes, they can write boilerplate; yes, they sometimes do better than humans in well-understood areas. But it's negligible considering all the huge issues that come with them.
Big tech vendor lock-in, data poisoning, unverifiable information, death of authenticity, death of creativity, ignorance of LLM evangelists, power hungriness in a time when humanity should be looking at how to decrease emissions, theft of original human work, theft of data that big tech has gotten away with for way too long. It's puzzling to me how people actually think this is a net benefit to humanity.
Most of the issues you listed are moral and not technical. Especially "power hungriness in a time where humanity should look at how to decrease emissions", this may be what you think humanity should do but that is just that, what you think.
I derive a lot of business value from them, many of my colleagues do too. Many programmers that were good at writing code by hand are having lots of success with them, for example Thorsten Ball, Simon Willison, Mitchell Hashimoto. A recent example from Mitchell Hashimoto: https://mitchellh.com/writing/non-trivial-vibing.
> It's puzzling to me how people actually think this is a net benefit to humanity.
I've used them personally to quickly spin up a microblog where I could post my travel pictures and thoughts. The idea of making the interface like twitter (since that's what I use and know) was from me, not wanting to expose my family and friends to any specific predatory platform like twitter, instagram, etc. was also from me, supabase as the backend was from a colleague (helped a lot!), and the code was all Claude. The result is that they were able to enjoy my website, including my grandparents, who just had to paste a URL to get on the website. I like to think of it as a perhaps very small but real net benefit for a very small part of humanity.
Is it a moral judgement to say that when the stove is on fire, we shouldn't be pouring more grease on it?
Is it a moral judgement to say that you shouldn't pick up a bear cub with its mother nearby?
If neither of these are moral judgements, then why would it be a moral judgement to say that humanity should be seeking to reduce its emissions? Just because you personally don't like it, and want to keep doing whatever you like?
from a cursory (heh) reading of the cursor forum, it is clear that the participants in the chat are treating ai like the adeptus mechanicus treats the omnissiah.... the machine spirits aren't cooperating with them though.
> what prompted this post? well just tried to work with gpt5 and gemini pro
that's the problem. GPT5 doesn't work for coding. literally it burns tokens and does nothing in my experience. Claude 3.5, 4 and 4.5, on the other hand, are pretty solid and make lots of forward progress with minimal instruction. It takes iteration, some skill, and some hand coding! Yes, they forget things and do random things sometimes, but for me it's a big boost.
I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
The simplest explanation would be “You’re using it wrong…”, but I have the impression that this is not the primary reason. (Although, as an AI systems developer myself, you would be surprised by the number of users who simply write “fix this” or “generate the report” and then expect an LLM to correctly produce the complex thing they have in mind.)
It is true that there is an “upper management” hype of trying to push AI into everything as a magic solution for all problems. There is certainly an economic incentive from a business valuation or stock price perspective to do so, and I would say that the general, non-developer public is mostly convinced that AI is actually artificial intelligence, rather than a very sophisticated next-word predictor.
While claiming that an LLM cannot follow a simple instruction sounds, at best, very unlikely, it remains true that these models cannot reliably deliver complex work.
Another theory: you have some spec in your mind, write down most of it and expect the LLM to implement it according to the spec. The result will be objectively a deviation from the spec.
Some developers will either retrospectively change the spec in their head or are basically fine with the slight deviation. Other developers will be disappointed, because the LLM didn't deliver on the spec they clearly hold in their head.
It's a bit like a psychological false-memory effect where you misremember, and/or some people are more flexible in their expectations and accept "close enough" while others won't accept this.
This is true. But, it's also true of assigning tasks to junior developers. You'll get back something which is a bit like what you asked for, but not done exactly how you would have done it.
Both situations need an iterative process to fix and polish before the task is done.
The notable thing for me was, we crossed a line about six months ago where I'd need to spend less time polishing the LLM output than I used to have to spend working with junior developers. (Disclaimer: at my current place-of-work we don't have any junior developers, so I'm not comparing like-with-like on the same task, so may have some false memories there too.)
But I think this is why some developers have good experiences with LLM-based tools. They're not asking "can this replace me?" they're asking "can this replace those other people?"
> They're not asking "can this replace me?" they're asking "can this replace those other people?"
People in general underestimate other people, so this is the wrong way to think about this. If it can't replace you then it can't replace other people typically.
What I want to see at this point are more screencasts, write-ups, anything really, that depict the entire process of how someone expertly wrangles these products to produce non-trivial features. There's AI influencers who make very impressive (and entertaining!) content about building uhhh more AI tooling, hello worlds and CRUD. There's experienced devs presenting code bases supposedly almost entirely generated by AI, who when pressed will admit they basically throw away all code the AI generates and are merely inspired by it.
Single-shot prompt to full app (what's being advertised) rapidly turns to "well, it's useful to get motivated when starting from a blank slate" (ok, so is my oblique strategies deck but that one doesn't cost 200 quid a month).
This is just what I observe on HN, I don't doubt there's actual devs (rather than the larping evangelist AI maxis) out there who actually get use out of these things but they are pretty much invisible. If you are enthusiastic about your AI use, please share how the sausage gets made!
Important: there is a lot of human coding, too. I almost always go in after an AI does work and iterate myself for awhile, too.
Some people like to think for a while (and read docs) and just write it right on the first go. Some people like to build slowly and get a sense of where to go at each step. But in all of those steps, there's a heavy factor of expertise needed from the person doing the work. And this expertise does not come for free.
I can use the agentic workflow fine and generate code like anyone else. But the process is not enjoyable and there's no actual gain. Especially in an enterprise setting where you're going to use the same stack for years.
this is definitely closer to what I had in mind but it's still rather useless because it just shows what winning the lottery is like. what I am really looking for is neither the "Claude oneshot this" nor the "I gave up and wrote everything by hand" case but a realistic, "dirty" day-to-day work example. I wouldn't even mind if it was a long video (though some commentary would be nice in that case).
This is very similar to Tesla's FSD adoption in my mind.
For some (me), it's amazing because I use the technology often despite its inaccuracies. Put another way, it's valuable enough to mitigate its flaws.
For many others, it's on a spectrum between "use it sometimes but disengage any time it does something I wouldn't do" and "never use it" depending on how much control they want over their car.
In my case, I'm totally fine handing driving off to AI (more like ML + computer vision) most times but am not okay handing off my brain to AI (LLMs) because it makes too many mistakes and the work I'd need to do to spot-check them is about the same as I'd need to put in to do the thing myself.
The simplest explanation is that most of us are code monkeys reinventing the same CRUD wheel over and over again, gluing things together until they kind of work and calling it a day.
"developers" is such a broad term that it basically is meaningless in this discussion
or, and get this, software development is an enormous field with 100s of different kinds of variations and priorities and use cases.
lol.
another option is trying to convince yourself that you have any idea what the other 2,000,000 software devs are doing and think you can make grand, sweeping statements about it.
there is no stronger mark of a junior than the sentiment you're expressing
Well I know for a fact there are more code monkeys than rocket scientists working on advanced technologies. Just look at job offers really...
Anyone with any kind of experience in the industry should be able to tell that, so idk where you're going with your "junior" comment. Technically I'm a senior in my company and I'm including myself in the code monkey category. I'm not working on anything revolutionary (like most devs), just gluing things together, probably things that have been made dozens of times before and will be done dozens of times later... there is no shame in that, it's just the reality of software development. Just like most mechanics don't work on ferraris, even if mechanics working on ferraris do exist.
From my friends, working in small startups and large megacorps, no one is working on anything other than gluing existing packages together: a bit of es, a bit of postgres, a bit of crud. Most of them worked on more technical things while getting their degrees 15 years ago than they do right now... while being in the top 5% of earners in the country. 50% of their job consists of bullshitting their n+1 to get a raise and some other variant of office politics.
> I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
Some possible reasons:
* different models used by different folks, free vs paid ones, various reasoning effort, quantizations under the hood and other parameters (e.g. samplers and temperature)
* different tools used, like in my case I've found Continue.dev to be surprisingly bad, Cline to be pretty decent but also RooCode to be really good; also had good experiences with JetBrains Junie, GitHub Copilot is *okay*, but yeah, lots of different options and settings out there
* different system prompts, various tool use cases (e.g. let the model run the code tests and fix them itself), as well as everything ranging from simple and straightforward codebases that are dime a dozen out there (and in the training data), vs something genuinely new that would trip up both your average junior dev, as well as the LLMs
* open ended vs well specified tasks, feeding in the proper context, starting new conversations/tasks when things go badly, offering examples so the model has more to go off of (it can predict something closer to what you actually want), most of my prompts at this point are usually multiple sentences, up to a dozen, alongside code/data examples, alongside prompting the model to ask me questions about what I want before doing the actual implementation
* also sometimes individual models produce output for specific use cases badly, I generally rotate between Sonnet 4.5, Gemini Pro 2.5, GPT-5 and also use Qwen 3 Coder 480B running on Cerebras for the tasks I need done quickly and that are more simple
With all of that, my success rate is pretty great, and the claim that the tech can't "follow a simple instruction" doesn't hold. Then again, most of my projects are webdev adjacent in mostly mainstream stacks, YMMV.
> Then again, most of my projects are webdev adjacent in mostly mainstream stacks
This is probably the most significant part of your answer. You are asking it to do things for which there are a ton of examples of in the training data. You described narrowing the scope of your requests too, which tends to be better.
It's true though, they can't. It really depends on what they have to work with.
In the fixed world of mathematics, everything could in principle be great. In software, it can in principle be okay even though contexts might be longer. But when dealing with new contexts that resemble real life yet differ from it, such as a story where nobody can communicate with the main characters because they speak a different language, the models simply can't cope, always returning to the context they're familiar with.
When you give them contexts that are different enough from the kind of texts they've seen, they do indeed fail to follow basic instructions, even though they can follow seemingly much more difficult instructions in other contexts.
Well we are all doing different tasks on different codebases too. It's very often not discussed, even though it's an incredibly important detail.
But the other thing is that, your expectations normalise, and you will hit its limits more often if you are relying on it more. You will inevitably be unimpressed by it, the longer you use it.
If I use it here and there, I am usually impressed. If I try to use it for my whole day, I am thoroughly unimpressed by the end, having had to re-do countless things it "should" have been capable of based on my own past experience with it.
> Well we are all doing different tasks on different codebases too. It's very often not discussed, even though it's an incredibly important detail.
Absolutely nuts I had to scroll down this far to find the answer. Totally agree.
Maybe it's the fact that every software development job has different priorities, stakeholders, features, time constraints, programming models, languages, etc. Just a guess lol
> I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
My hypothesis is that developers work on different things, and while these models might work very well for some domains (react components?) they will fail quickly in others (embedded?). So on one side we have developers working on X (LLM good at it) claiming that it will revolutionize development forever, and on the other side we have developers working on Y (LLM bad at it) claiming that it's just a fad.
I think this is right on, and the things that LLMs excel at (react components was your example) are really the things that there's just such a ridiculous amount of training data for. This is why LLMs are not likely to get much better at code. They're still useful, don't get me wrong, but the 5x expectations need to get reined in.
A breadth and depth of training data is important, but modern models are excellent at in-context learning. Throw them documentation and outline the context for what they're supposed to do and they will be able to handle some out-of-distribution things just fine.
I would love to see some detailed failure cases of people who used agentic LLMs and didn't make it work. Everyone is asking for positive examples, but I want to see the other side.
"expect an LLM to correctly produce the complex thing they have in mind"
My guess is that for some types of work people don't know what the complex thing they have in mind is ex ante. The idea forms and is clarified through the process of doing the work. For those types of task there is no efficiency gain in using AI to do the work.
"Just start iterating in chunks alongside the LLM".
For those types of tasks it probably takes the same amount of time to form the idea without AI as with AI, this is what Metr found in its study of developer productivity.
That study design has some issues. But let's say it takes me the same amount of time, the agentic flow is still beneficial to me. It provides useful structure, helps with breaking down the problem. I can rubber duck, send off web research tasks, come back to answer questions, etc., all within a single interface. That's useful to me, and especially so if you have to jump around different projects a lot (consultancy). YMMV.
> Why do LLM experiences vary so much among developers?
The question assumes that all developers do the same work. The kind of work done by an embedded dev is very different from the work of a front-end dev which is very different from the kind of work a dev at Jane Street does. And even then, devs work on different types of projects: greenfield, brownfield and legacy. Different kind of setups: monorepo, multiple repos. Language diversity: single language, multiple languages, etc.
Devs are not some kind of monolith army working like robots in a factory.
We need to look at these factors before we even consider any sort of ML.
I would say they can't reliably deliver simple work. They often can, but reliability, to me, means I can expect it to work every time. Or at least as much as any other software tool, with failure rates somewhere in the vicinity of 1 in 10^5, 1 in 10^6. LLMs fail on the order of 1 in 10 times for simple work. And rarely succeed for complex work.
That is not reliable, that's the opposite of reliable.
One has to look at the alternatives. What would I do if not use the LLM to generate the code? The two answers are "coding it myself" and "asking another dev to code it". And neither of those comes anywhere near a 1-in-10^5 failure rate. Not even close.
>I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
Two of the key skills needed for effective use of LLMs are writing clear specifications (written communication), and management, skills that vary widely among developers.
There’s no clearer specification than code, and I can manage my toolset just fine (lines of config, aliases, and whatnot to make my job easier). That lets me deliver good results fast without worrying whether it's right this time.
I've known lots of people that don't know how to properly use Google, and Google has been around for decades. "You're using it wrong" is partially true; I'd say it's more like "it is a new tool that changes very quickly, you have to invest a lot of time to learn how to properly use it, most people using it well have been using it a lot over the last two years, you won't catch up in an afternoon. Even after all that time, it may not be the best tool for every job" (proof of the last point being Karpathy saying he wrote nanochat mostly by hand).
It is getting easier and easier to get good results out of them, partially by the models themselves improving, partially by the scaffolding.
> non-developer public is mostly convinced that AI is actually artificial intelligence, rather than a very sophisticated next-word predictor
This is a false dichotomy that assumes we know way more about intelligence than we actually do, and also assumes that what you need to ship lots of high quality software is "intelligence".
>While claiming that an LLM cannot follow a simple instruction sounds, at best, very unlikely, it remains true that these models cannot reliably deliver complex work.
"reliably" is doing a lot of work here. If it means "without human guidance" it is true (for now), if it means "without scaffolding" it is true (also for now), if it means "at all" it is not true, if it means it can't increase dev productivity so that they ship more at the same level of quality, assuming a learning period, it is not true.
I think those conversations would benefit a lot from being more precise and more focused, but I also realize that it's hard to do so because people have vastly different needs, levels of experience, expectations ; there are lots of tools, some similar, some completely different, etc.
To answer your question directly, ie “Why do LLM experiences vary so much among developers?”: because "developer" is a very very very wide category already (MISRA C on a car, web frontend, infra automation, medical software, industry automation are all "developers"), with lots of different domains (both "business domains" as in finance, marketing, education and technical domains like networking, web, mobile, databases, etc), filled with people with very different life paths, very different ways of working, very different knowledge of AIs, very different requirements (some employers forbid everything except a few tools), very different tools that have to be used differently.
It’s because people are using different tiers of AI and different models. And many people don’t stick with it long enough to get a more nuanced outlook of AI.
Take Joe. Joe sticks with AI and uses it to build an entire project. Hundreds of prompts. Versus your average HNer who thinks he’s the greatest programmer in the company and thinks he doesn’t need AI but tries it anyway. Then AI fails and fulfills his confirmation bias and he never tries it again.
That's where I stand now. I use LLMs in some agentic coding way 10h/day to great avail. If someone doesn't see or realize the value, then that's their loss.
Because the hype cycle on the original AI wave was fading so folks needed something new to buzz about to keep the hype momentum going. Seriously, that’s the reason.
Folks aren’t seeing measurable returns on AI. Lots written about this. When the bean counters show up, the easiest way to get out of jail is to say “Oh X? Yeah that was last year, don’t worry about it… we’re now focused on Y which is where the impact will come from.”
Every hype cycle goes through some variation of this evolution. As much as folks try to say AI is different it’s following the same very predictable hype cycle curve.
As a scientist there is a ton of boilerplate code that is just slightly different enough for every data set that I need to write it myself each time. So coding agents solve a lot of that. At least until you are halfway through something and you realize Claude didn’t listen when you wrote 5 times in capital letters NEVER MAKE UP DATA YOU ARE NOT ALLOWED TO USE np.random IN PLACE OF ACTUAL DATA. It’s all kind of wild because when it works it’s great and when it doesn’t there’s no failure state. So if I put on my llm marketing hat I guess the solution is to have an agent that comes behind the coding agent and checks to see if it did its job. We can call it the Performance Improvement Plan Agent (PIPA). PIPAs allow real time monitoring of coding agents to make sure they are working and not slacking off, allowing HR departments and management teams to have full control over their AI employees. Together we will move into the future.
There are widely divergent views here. It'd be hard to have a good discussion unless people mention what tasks they're attempting and failing at. And we'll also have to ask if those tasks (or categories) are representative of mainstream developer effort.
Without mentioning what the LLMs are failing or succeeding at, it's all noise.
- experience wrangling these systems ("I touched ChatGPT once" vs "I spend 12h/day in Claude Code")
And there's more, is the engineer working on a single codebase for 10 years or do they jump around various projects all the time. Is it more greenfield, or legacy maintenance. Is it some frontier never-before-seen research project or CRUD? And so on.
For me, a big issue is that the performance of the AI tools varies enormously for different tasks. And it's not that predictable when it will fail, which does lead to quite a bit of wasted time. And while having more experience prompting a particular tool is likely to help here, it's still frustrating.
There is a bit of overlap for the stuff you use agents and the stuff that AI is good at. Like generating a bunch of boilerplate for a new thing from scratch. That makes the agent mode more convenient for me to interact with AI for the stuff it's useful in my case. But my experience with these tools is still quite limited.
When it works well you both normalise your expectations, and expand your usage, meaning you will hit its limits, and be even more disappointed when it fails at something you've seen it do well before.
> The replies are all a variation of: "You're using it wrong"
I don't know what you are trying to say with your post. I mean, if two people feed their prompts to an agent and one is able to reach their goals while the other fails to achieve anything, would it be outlandish to suggest one of them is using it right whereas the other is using it wrong? Or do you expect the output to not reflect the input at all?
And yours is also "you are using it wrong" in spirit.
Are they doing the same thing? Are they trying to achieve the same goals, but fail because one is lacking some skill?
One person may be someone who needs a very basic thing like creating a script to batch-rename his files, another one may be trying to do a massive refactoring.
And while the former succeeds, the latter fails. Is it only because someone doesn't know how to use agentic AI, or because agentic AI is simply lacking?
And some more variations that, in my anecdotal experience, make or break the agentic experience:
* strictness of the result - a personal blog entry vs a complex migration to reform a production database of a large, critical system
* team constraints - style guides, peer review, linting, test requirements, TDD, etc
* language, frameworks - a quick node-js app vs a Java monolith, e.g.
* legacy - a 12+ year Django app vs a greenfield rust microservice
* context - complex, historical, nonsensical business constraints and flows vs a simple crud action
* example body - a simple CRUD TODO in PHP or JS, done a million times, vs an event-sourced, hexagonally architected, cryptographic signing system for govt data.
Of course the output reflects the input. That's why it's a bad idea to let the LLM run in a loop without constraints; it's simple maths: if each step is 99% accurate, after 5 steps you're at about 95% accuracy, after 10 steps about 90%, and after 100 steps about 36%.
For LLMs to be effective, you (or something else) need to constantly find the errors and fix them.
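To put rough numbers on that compounding claim, here is a quick back-of-the-envelope check (plain arithmetic, nothing model-specific):

```python
# If each step is 99% accurate and errors are never corrected, the chance of
# still being error-free after n independent steps is 0.99**n.
for n in (1, 5, 10, 100):
    print(f"{n:>3} steps: {0.99 ** n:.1%} chance of no error yet")
```

Which prints roughly 99.0%, 95.1%, 90.4% and 36.6%, matching the figures above.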
I've had good experiences getting a different LLM to perform a technical review, then feeding that back to the primary LLM but telling it to evaluate the feedback rather than just blindly accepting it.
You still have to have a hand on the wheel, but it helps a fair bit.
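As a rough sketch of that cross-review loop, assuming a hypothetical call_model() helper rather than any particular vendor's API:

```python
# Sketch only: `call_model` is a stand-in for whatever API/CLI you actually use.
def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this up to your provider of choice")

def cross_review(task: str, primary: str = "model-a", reviewer: str = "model-b") -> str:
    draft = call_model(primary, f"Implement the following:\n{task}")
    critique = call_model(reviewer, f"Review this code for bugs and style issues:\n{draft}")
    # Ask the primary model to *evaluate* the feedback rather than accept it blindly.
    return call_model(
        primary,
        f"Here is your code:\n{draft}\n\nA reviewer said:\n{critique}\n\n"
        "Assess which points are valid, reject the rest, and return a revised version.",
    )
```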
I’ve seen LLMs catch and fix their own mistakes and literally tell me they were wrong and that they are fixing them. This analogy is therefore not accurate, as the error rate can actually decrease over time.
If we assume that each action has 99% success rate, and when it fails, it has 20% chance of recovery, and if the math here by gemini 2.5 pro is correct, that means the system will tend towards 95% chance of success.
===
In equilibrium, the probability of leaving the Success state must equal the probability of entering it.
(Probability of being in S) * (Chance of leaving S) = (Probability of being in F) * (Chance of leaving F)
Let P(S) be the probability of being in Success and P(F) be the probability of being in Failure.
P(S) * 0.01 = P(F) * 0.20
Since P(S) + P(F) = 1, we can say P(F) = 1 - P(S). Substituting that in: P(S) * 0.01 = (1 - P(S)) * 0.20, so 0.21 * P(S) = 0.20 and P(S) = 0.20 / 0.21 ≈ 0.952, i.e. roughly a 95% chance of being in the success state at any given step.
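A tiny simulation of that two-state chain (my own sketch, just to sanity-check the quoted figure) lands on the same ~95% steady state:

```python
import random

random.seed(0)
state_ok = True          # start in the success state
successes = 0
steps = 1_000_000
for _ in range(steps):
    if state_ok:
        state_ok = random.random() > 0.01   # 1% chance of slipping into failure
    else:
        state_ok = random.random() < 0.20   # 20% chance of recovering
    successes += state_ok

print(successes / steps)   # ~0.952, matching 0.20 / 0.21
```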
In my experience it depends on which way the wind is blowing, random chance, and a lot of luck.
For example, I was working on the same kind of change across a few dozen files. The prompt input didn't change, the work didn't change, but the "AI" got it wrong as often as it got it right. So was I "using it wrong" or was the "AI" doing it wrong half the time? I tried several "AI" offerings and they all had similar results. Ultimately, the "AI" wasted as much time as it saved me.
I've certainly gotten a lot of value from adapting my development practices to play to LLM's strengths and investing my effort where they have weaknesses.
"You're using it wrong" and "It could work better than it does now" can be true at the same time, sometimes for the same reason.
I find it quite funny that one of the users actually posted a fully AI-generated reply (dramatically different grammar and structure than their other posts).
People want predictability from LLMs, but these things are inherently stochastic, not deterministic compilers. What’s working right now isn’t "prompting better," it’s building systems that keep the LLM on track over time: logging, retrying, verifying outputs, giving it context windows that evolve with the repo, etc.
That’s why we’ve been investing so much in multi-agent supervision and reproducibility loops at gobii.ai. You can’t just "trust" the model; you need an environment where it’s continuously evaluated, self-corrects, and coordinates with other agents (and humans) around shared state. Once you do that, it stops feeling like RNG and starts looking like an actual engineering workflow, distributed between humans and LLMs.
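In skeleton form that loop is nothing exotic; something like the following, with generate() and verify() as hypothetical stand-ins (e.g. a model call and a test-suite run), not our actual stack:

```python
# Minimal shape of a "verify and retry" loop; generate/verify are placeholders.
import logging

def run_with_verification(task: str, generate, verify, max_attempts: int = 3):
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        output = generate(task, feedback)
        ok, feedback = verify(output)   # e.g. run tests/linters, return (passed, report)
        logging.info("attempt %d: %s", attempt, "ok" if ok else feedback)
        if ok:
            return output
    raise RuntimeError("no verified output after retries; escalate to a human")
```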
When I asked Claude "AI" to count the number of text file lines missing a given initial sub-string, it gave an improbably exaggerated result. When I challenged this, it replied "You are right! Let me try again this time without splitting long lines."
I recommend you check out Andrej Karpathy’s 2 YouTube videos on how LLMs work (they are easy to find, but be forewarned they are long!). Once one digs in deeper it becomes clear why a model today might fail at the task you described.
Generally speaking, one of the behaviors I see in my day to day work leading engineers is that they often attempt to apply agentic coding tools to problems that don’t really benefit from them.
Like OP in the link, I'm confused too. And I use LLMs for coding every day! With precise prompts, function signatures provided, only using it for problems I know are solved [by others] etc.
The problem in this case is that LLMs are bad with golang. I don't write go; I'm guessing from my experience with kotlin. I mainly use kotlin (rest apis) and LLMs are often bad at writing it. They e.g. confuse mockk and mockito functions and then agents spiral into a never ending loop of guessing what's wrong and trying to fix it in 5 different ways. Instead I use only chat, validate every output and point out the errors they introduce.
On the other hand colleagues working with react and next have better experience with agents.
While I agree with the sentiment of not just letting it run free on the whole codebase and do what it wants, I still have good experience with letting it do small tasks one at a time, guided by me. Coding ability of models has really improved over the last few months itself and I seem to be clearing less and less AI-generated code mess than I was 5 months ago.
It's got a lot to do with problem framing and prompt imo.
My guess is that the reason why AI works bad for some people is the same reason why a lot of people make bad managers / product owners / team leads. Also the same reason why onboarding is atrocious in a lot of companies ("Here's your login, here's a link to the wiki that hasn't been updated since 2019, if you have any questions ask one of your very busy co-workers, they are happy to help").
You have to be very good at writing tasks while being fully aware of what the one executing them knows and doesn't know. What agents can infer about a project themselves is even more limited than their context, so it's up to you to provide it. Most of them will have no or very limited "long-term" memory.
I've had good experiences with small projects using the latest models. But letting them sift through a company repo that has been worked on by multiple developers for years and has some arcane structures and sparse documentation - good luck with that. There aren't many simple instructions to be made there. The AI can still save you an hour or two of writing unit tests if they are easy to set up and really only need very few source files as context.
But just talking to some people makes it clear how difficult the concept of implicit context is. Sometimes it's like listening to a 4 year old telling you about their day. AI may actually be better at comprehending that sort of thing than I am.
One criticism I do have of AI in its current state is that it still doesn't ask questions often enough. One time I forgot to fill out the description of a task - but instead of seeing that as a mistake it just inferred what I wanted from the title and some other files and implemented it anyway. Correctly, too. In that sense it was the exact opposite of what OP was complaining about, but personally I'd rather have the AI assume that I'm fallible instead of confidently plowing ahead.
> what prompted this post? ... and they either omit one aspect of it or forget to update one part or the other. So it makes me wonder what the buzz of this agentic thing is really coming from
Because most of the time it does work? Especially when you learn how to prompt it clearly?
Yes it messes up sometimes. Just like people mess up sometimes. And it messes up in different ways from people.
I feel like I keep repeating this: just because a tool isn't perfect doesn't mean it isn't still valuable. Tools that work 90% of the time can still be a big help in the end, as long as it's easy to tell when they fail and you can then try another way.
I have the exact same question: what is the hype all about when models can't do simple things? You prompt the model to generate one unit test for a function and it somehow always generates more than one. (Just to start with the most simple instruction.)
I just feel that models are currently not up to speed with experienced engineers, where it takes less time to develop something than to instruct the model to do it. It is only useful for boring work.
This is not to say that these tools haven't created opportunities to create new stuff, it is just that the hype overestimates the usefulness of the tools so they can be sold better, just like all other things.
i agree, these tools are useful. i only oppose aggressive marketing that says llms are the solution for everything. it is just a tool which has its use cases, but to me it seems that it is not optimal for the use cases it is advertised for.
i work on agentic systems and they can be good if the agent has a bite-sized chunk of work it needs to do. the problem with coding agents is that for every more complex thing you will need to write a big prompt, which is sometimes counterproductive, and it seems to me that the user in the cursor thread is pointing in that direction.
I love how the proposed solution is essentially gaslighting the model into thinking it's an expert programmer and then specifying and re-specifying the prompt until the solution is essentially inefficient pseudocode. Now we are in a world where amateur coders still cannot code or learn from their mistakes, while experts are essentially JIRA ticket outsourcing specialists.
I don't think the models are dumb anymore; codex with gpt5 and claude code can design and build complex systems. The only thing is these models work great on greenfield projects. Legacy project design evolves over a number of years, and LLMs have a hard time understanding those unwritten project design decisions.
FWIW all my coding with LLMs is very hands-on. What I've ended up doing with LLMs is something like the following:
1. New conversation. Describe at a high level what change I want made. Point out the relevant files for the LLM to have context. Discuss the overall design with the LLM. At the end of that conversation, ask it to write out a summary (including relevant files to read for context next time) in an "epic" document in llm/epics/. This will almost always have several steps, listed in the document.
Then I review this and make sure it's in line with what I want.
2. New conversation. We're working on @llm/epics/that_epic.md. Please read the relevant files for context. We're going to start work on step N. Let me know if you have any questions; when you're ready, sketch out a detailed plan of implementation.
I may need to answer some questions or help it find more context; then it writes a plan. I review this plan and make sure it's in line with what I want.
3. New conversation. We're working on @llm/epics/that_epic.md. We're going to start implementing step N. Let me know if you have any questions; when you're ready, go ahead and start coding.
Monitor it to make sure it doesn't get stuck. Any time it starts to do something stupid or against the pattern of what I'd like -- from style, to hallucinating (or forgetting) a feature of some sub-package -- add something to the context files.
Repeat until the epic is done.
If this sounds like a lot of work, it is. As xkcd's "Uncomfortable Truths Well" said, "You will never find a programming language that frees you from the burden of clarifying your ideas." LLMs don't fundamentally change that dynamic. But they do often come up with clever solutions to problems; their "stupid questions" often helps me realize how unclear my thinking is; they type a lot faster, and they look up documentation a lot faster too.
Sure, they make a bunch of frustrating mistakes when they're new to the project; but if every time they make a patterned mistake, you add that to your context somehow, eventually these will become fewer and fewer.
It feels to me that the OP on the forum expects this to work: "read this existing function, then read my mind and do stuff" (probably followed by "do better").
It still takes a lot of practice to get good at prompting, though.
After so many months, Gemini pro still shits the bed after failing to update a file several times. I'd expect more from the culmination of human knowledge.
Management thinks a crutch can effectively replace people massively in sensitive knowledge work. When that crutch starts making errors that cost those businesses millions, or billions, well, hopefully management who implemented all that will get fired...
Yes, LLMs are useful, but they are even less trustworthy than real humans, and one needs actual people to verify their output. So when agents write 100K lines of code, they'll make mistakes, extremely subtle ones, and not the kind of mistake any human operator would make.
The most powerful tools are usually renowned to have the most arcane user interfaces.
Xkcd's "Uncomfortable Truths Well" said, "You will never find a programming language that frees you from the burden of clarifying your ideas." LLMs don't fundamentally change that dynamic.
Because there is a lot of money tied up in AI now, in a way that doesn't just reek like a bubble waiting to implode but even more stinks like a bunch of what used to be called "wash trading" [1]. And that's just the money side.
The "social kool-aid" side is even worse. A lot of very rich and very influential people have bet their career on AI - especially large companies who just outright fired staff to be replaced both by actual AI and "Actually Indians" [2] and are now putting insane pressure on their underlings and vendors to make something that at least looks on the surface like the promised AI dreams of getting rid of humans.
Both in combination explains why there is so much half-baked barely tested garbage (or to use the term du jour: slop) being pushed out and force fed to end users, despite clearly not being ready for prime time. And on top of that, the Pareto principle also works for AI - most of what's being pushed is now "good enough" for 80%, and everyone is trying to claim and sell that the missing 20% (that would require a lot of work and probably a fundamentally new architecture other than RNG-based LLMs) don't matter.
What I recently experienced when asking for a string manipulation routine that follows very arbitrary logic (for a long-existing file format): it forgets things like UTF string handling (in general, but also its subtle details, requiring a second round), it misses that its own code replacing special characters with escape sequences can be cut in half in limited-width fields (an input constraint for the function), and it considers some aspects of the specification document while omitting others. It needs heavy supervision in the details and constant adjustments.
Yet it does the bulk of the work. It saves brain energy, which then goes into the edge cases. The overall time is the same; it's just that the result can become more robust in the end. Only with good supervision, though! (Which has a better chance when we are not worn out by the tedious heavy-lifting part.)
But the one undebatable benefit is that the user can feel like the smartest person in the whole wide world, having such 'excellent questions', 'knowing the topic like a pro', or being 'fantastic to spot such subtle details'. Anyone feeling inadequate should use an agentic AI to boost their morale! (Well, only if they don't get nauseous from that thick flattery.)
I am going to try and make it a habit to post this request on all LLM Coding questions -
Can we please make it a point to share the following information when we talk about experiences with code bots?
1) Language - gives us an idea if the language has a large corpus of examples or not
2) Project - what were you using it for?
3) Level of experience - neophyte coder? Dunning-Kruger uncertainty? Experience in managing other coders? Understanding of project implementation best practices?
From what I can tell/suspect, these 3 features are the likely sources of variation in outcomes.
I suspect level of experience is doing significant heavy lifting, because more experienced devs approach projects in a manner that avoids pitfalls from the get go.
Are businesses that are not AI based, actually seeing ROI on AI spend? That is really the only question that matters, because if that is false, the money and drive for the technology vanishes and the scale that enables it disappears too.
> Yes it advanced extremely quickly,
It did, but it's kinda stagnated now, especially on the LLM front. The time when a groundbreaking model came out every week is over for now. Later revisions of existing models, like GPT5 and llama4, have been underwhelming.
I’m curious what you are expecting when you say progress has stagnated?
GPT5 may have been underwhelming to _you_. Understand that they're heavily RLing to raise the floor on these models, so they might not be magically smarter across the board, but there are a LOT of areas where they're a lot better that you've probably missed because they're not your use case.
every time i say "the tech seems to be stagnating" or "this model seems worse" based on my observations i get this response. "well, it's better for other use cases." i have even heard people say "this is worse for the things i use it for, but i know it's better for things i don't use it for."
i have yet to hear anyone seriously explain to me a single real-world thing that GPT5 is better at with any sort of evidence (or even anecdote!) i've seen benchmarks! but i cannot point to a single person who seems to think that they are accomplishing real-world tasks with GPT5 better than they were with GPT4.
the few cases i have heard that venture near that ask may be moderately intriguing, but don't seem to justify the overall cost of building and running the model, even if there have been marginal or perhaps even impressive leaps in very narrow use cases. one of the core features of LLMs is they are allegedly general-purpose. i don't know that i really believe a company is worth billions if they take their flagship product that can write sentences, generate a plan, follow instructions and do math and they are constantly making it moderately better at writing sentences, or following instructions, or coming up with a plan and it consequently forgets how to do math, or becomes belligerent, or sycophantic, or what have you.
to me, as a user with a broad range of use cases (internet search, text manipulation, deep research, writing code) i haven't seen many meaningful increases in quality of task execution in a very, very long time. this tracks with my understanding of transformer models, as they don't work in a way that suggests to me that they COULD be good at executing tasks. this is why i'm always so skeptical of people saying "the big breakthrough is coming." transformer models seem self-limiting by merit of how they are designed. there are features of thought they simply lack, and while i accept there's probably nobody who fully understands how they work, i also think at this point we can safely say there is no superintelligence in there to eke out and we're at the margins of their performance.
the entire pitch behind GPT and OpenAI in general is that these are broadly applicable, dare-i-say near-AGI models that can be used by every human as an assistant to solve all their problems and can be prompted with simple, natural language english. if they can only be good at a few things at a time and require extensive prompt engineering to bully into consistent behavior, we've just created a non-deterministic programming language, a thing precisely nobody wants.
Claude Sonnet 4.5 is _way_ better than previous sonnets and as good as Opus for the coding and research tasks I do daily.
I rarely use Google search anymore, both because llms got that ability embedded and the chatbots are good at looking through the swill search results have become.
"it's better at coding" is not useful information, sorry. i'd love to hear tangible ways it's actually better. does it still succumb to coding itself in circles, taking multiple dependencies to accomplish the same task, applying inconsistent, outdated, or non-idiomatic patterns for your codebase? has compliance with claude.md files and the like actually improved? what is the round trip time like on these improvements - do you have to have a long conversation to arrive at a simple result? does it still talk itself into loops where it keeps solving and unsolving the same problems? when you ask it to work through a complex refactor, does it still just randomly give up somewhere in the middle and decide there's nothing left to do? does it still sometimes attempt to run processes that aren't self-terminating to monitor their output and hang for upwards of ten minutes?
my experience with claude and its ilk are that they are insanely impressive in greenfield projects and collapse in legacy codebases quickly. they can be a force multiplier in the hands of someone who actually knows what they're doing, i think, but the evidence of that even is pretty shaky: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
the pitch that "if i describe the task perfectly in absolute detail it will accomplish it correctly 80% of the time" doesn't appeal to me as a particularly compelling justification for the level of investment we're seeing. actually writing the code is the simplest part of my job. if i've done all the thinking already, i can just write the code. there's very little need for me to then filter that through a computer with an overly-verbose description of what i want.
as for your search results issue: i don't entirely disagree that google is unusable, but having switched to kagi... again, i'm not sure the order of magnitude of complexity of searching via an LLM is justified? maybe i'm just old, but i like a list of documents presented without much editorializing. google has been a user-hostile product for a long time, and its particularly recent quality collapse has been well-documented, but this seems a lot more a story of "a tool we relied on has gotten measurably worse" and not a story of "this tool is meaningfully better at accomplishing the same task." i'll hand it to chatgpt/claude that they are about as effective as google was at directing me to the right thing circa a decade ago, when it was still a functional product - but that brings me back to the point that "man, this is a lot of investment and expense to arrive at the same result way more indirectly."
You asked for a single anecdote of llms getting better at daily tasks. I provided two. You dismissed them as not valuable _to you_.
It’s fine that your preferences aren’t aligned such that you don’t value the model or improvements that we’ve seen. It’s troubling that you use that to suggest there haven’t been improvements.
> Yes it advanced extremely quickly, but that is not a confirmation of anything. It could just be the technology quickly meeting us at either our limit of compute, or it's limit of capability.
To comment on this, because it's the most common counterargument: most technology has worked in steps. We take a step forward, then iterate on essentially the same thing. It's very rare we see order-of-magnitude improvement on the same fundamental "step".
Cars were quite a step forward from donkeys, but modern cars are not that far off from the first ones. Planes were an amazing invention, but the next model of plane is basically the same thing as the first one.
I agree, I think we are in the latter phase already. LLMs were a huge leap in machine learning, but everything after has been steps on top + scale.
I think we would need another leap to actually meet the market's expectations of AI. The market is expecting AGI, but I think we are probably just going to make incremental improvements to language and multi-modal models from here, and not meet those expectations.
I think the market is relying on something that doesn't currently exist to become true, and that is a bit irrational.
Transformers aren't it, though. We need a new fundamental architecture and, just like every step forward in AI that came before, when that happens is a completely random event. Some researcher needs to wake up with a brilliant idea.
The explosion of compute and investment could mean that we have more researchers available for that event to happen, but at the same time transformers are sucking up all the air in the room.
Several people hinted at the limits this technology was about to face, including training data and compute. It was obvious it had serious limits.
Despite the warnings, companies insisted on marketing superintelligence nonsense and magic automatic developers. They convinced the market with disingenuous demonstrations, which, again, were called out as bullshit by many people. They are still doing it. It's the same thing.
The question in your last paragraph is not the only one that matters. Funding the technology at a material loss will not be off the table. Think about why.
Just tell us why you think funding at a loss at this scale is viable, don’t smugly assign homework
> I know I could be eating my words, but there is basically no evidence to suggest it ever becomes as exceptional as the kingmakers are hoping.
??? It has already become exceptional. In 2.5 years (since chatgpt launched) we went from "oh, look how cute this is, it writes poems and the code almost looks like python" to "hey, this thing basically wrote a full programming language[1] with genz keywords, and it mostly works, still has some bugs".
I think the goalpost moving is at play here, and we quickly forget how 1 year makes a huge difference (last year you needed tons of glue and handwritten harnesses to do anything - see aider) and today you can give them a spec and get a mostly working project (albeit with some bugs), 50$ later.
[1] - https://github.com/ghuntley/cursed
I feel like the invention of MCP was a lot more instrumental to that than model upgrades proper. But look at it as a good thing, if you will: it shows that even if models are plateauing, there's a lot of value to unlock through the tooling.
> it shows that even if models are plateauing,
The models aren't plateauing (see below).
> invention of MCP was a lot more instrumental [...] than model upgrades proper
Not clear. The folks at hf showed that a minimal "agentic loop" in 100 LoC [1], giving the agent "just bash access", still got very close to SotA with all the bells and whistles (and surpassed last year's models with handcrafted harnesses).
[1] - https://github.com/SWE-agent/mini-swe-agent
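For a sense of what "just bash access" means in practice, the whole harness can be little more than the following sketch (my own illustration, not the actual mini-swe-agent code; ask_model is a hypothetical stand-in for an LLM call):

```python
import subprocess

def ask_model(history: list[dict]) -> str:
    raise NotImplementedError("call your LLM of choice here")

def agent_loop(task: str, max_steps: int = 50) -> None:
    history = [{"role": "user",
                "content": f"Task: {task}\nReply with a single shell command, or DONE."}]
    for _ in range(max_steps):
        action = ask_model(history).strip()
        if action == "DONE":
            return
        # Run the proposed command and feed the (truncated) output back as context.
        result = subprocess.run(action, shell=True, capture_output=True,
                                text=True, timeout=120)
        history.append({"role": "assistant", "content": action})
        history.append({"role": "user", "content": (result.stdout + result.stderr)[-4000:]})
```

The real tools add permissions, context compaction, and so on, but the core loop really is this small.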
I mean, that's still proving the point that tooling matters. I don't think his point was "MCP as a technology is extraordinary" because it's not.
I don't disagree with you on the technology, but my comment is mostly about what the market is expecting. With such a huge capex expenditure it is expecting huge returns. Given AI has not proven consistent ROI generally for other enterprises (as far as I know), they are hoping for something better than what exists right now, and they are hoping for it to happen before the money runs out.
I am not saying it's impossible, but there is no evidence that the leap in technology to reach wild profitability (replacing general labour) such investment desires is just around the corner either.
To phrase this another way, using old terms: We seem to be approaching the uncanny valley for LLMs, at which point the market overall will probably hit the trough of disillusionment.
After 3 years, I would like to see pathways.
Let's say we find a company that has already realized 5-10% of savings as a first step. Based on this, we might be able to map out the path to 25-30% savings in 5% steps, for example.
I personally haven’t seen this, but I might have missed it as well.
Three years? One year ago I tried using LLMs for coding and found them to be more trouble than they were worth, no benefit in time spent or effort made. It's only within the past several months that this has changed, IMHO.
It doesn't really matter what the market is expecting at this point, the president views AI supremacy as non-negotiable. AI is too big to fail.
It’s true, but not just the presidency. The whole political class is convinced that this is the path out of all their problems.
...Is it the whole political class?
Or is it the whole political party?
That there is a bubble is absolutely certain. If for no other reason, than because investors don't understand the technology and don't know which companies are for real and which are essentially scams, they dump money into anything with the veneer of AI and hope some of it sticks. We're replaying the dotcom bubble, a lot of people are going to get burned, a lot of companies will turn out to be crap. But at the end of the dotcom crash we had some survivors standing above the rest and the whole internet thing turned out to have considerable staying power. I think the same will happen with AI, particularly agentic coding tools. The technology is real and will stick with us, even after the bubble and crash.
I didn't realize generating the gen-z programming language was a goalpost in the first place
I have had LLMs write entire codebases for me, so it's not like the hype is completely wrong. It's just that this only works if what you want is "boring", limited in scope and on a well-trodden path. You can have an LLM create a CRUD application in one go, or if you want to sort training data for image recognition you can have it generte a one-off image viewer with shortcuts tailored to your needs for this task. Those are powerful things and worthy of some hype. For anything more complex you very quickly run into limits and the time and effort to do it with an LLM quickly approaches the time and effort required to do it by hand.
They're powerful, but my feeling is that largely you could do this pre-LLM by searching on Stack Overflow or copying and pasting from the browser and adapting those examples, if you knew what you were looking for. Where it adds power is adapting it to your particular use case + putting it in the IDE. It's a big leap but not as enormous a leap as some people are making out.
Of course, if you don't know what you are looking for, it can make that process much easier. I think this is why people at the junior end find it is making them (a claimed) 10x more productive. But people who have been around for a long time are more skeptical.
Why bother searching yourself? This is pre-LLM: https://github.com/drathier/stack-overflow-import
> Where it adds power is adapting it to your particular use case + putting it in the IDE. It's a big leap but not as enormous a leap as some people are making out.
To be fair, this is super, super helpful.
I do find LLMs helpful for search and providing a bunch of different approaches for a new problem/area though. Like, nothing that couldn't be done before but a definite time saver.
Finally, they are pretty good at debugging, they've helped me think through a bunch of problems (this is mostly an extension of my point above).
Hilariously enough, they are really poor at building MCP like stuff, as this is too new for them to have many examples in the training data. Makes total sense, but still endlessly amusing to me.
Doing it the old fashioned lazy way, copy-pasting snippets of code you search for on the internet and slightly modifying each one to fit with the rest of your code, would take me hours to achieve the kind of slop that claude code can one shot in five minutes.
Yeah yeah, call me junior or whatever, I have thick skin. I'm a lazy bastard and I no longer care about the art of the craft, I just want programs tailored to my tastes and agentic coding tools are by far the fastest way to get it. 10x doesn't even come close, it's more like 100x just on the basis of time alone. Effort? After the planning stage I kick back with video games while the tool works. Far better than 100x for effort.
i have seen so many people say that, but the app stores/package managers aren't being flooded with thousands of vibe coded apps, meanwhile facebook is basically ai slop. can you share your github? or a gist of some of these "codebases"
You seem critical of people posting AI slop on Facebook (so am I) but also want people to publish more AI slop software?
The AI slop software I've been making with Claude is intended for my own personal use. I haven't read most of the code and certainly wouldn't want to publish it under my own name. But it does work, it scratches my itches, fills my needs. I'm not going to publish the whole thing because that's a whole can of worms, but to hopefully satisfy your curiosity, here is the main_window.py of my tag-based file manager. It's essentially a CRUD application built with sqlite and pyside6. It doesn't do anything terribly adventurous; the most exciting it gets is keeping track of tag co-occurrences so it can use naive Bayesian classifiers to recommend tags for files, order files by how likely they are to have a tag, etc.
Please enjoy. I haven't actually read this myself, only verified the behavior: https://paste.debian.net/hidden/c6a85fac
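For the curious, the co-occurrence trick amounts to something like this (my reconstruction of the idea, not the actual generated code):

```python
# Naive-Bayes-style tag suggestion from co-occurrence counts.
from collections import Counter, defaultdict

cooccur = defaultdict(Counter)   # cooccur[a][b]: files carrying both tag a and tag b
tag_count = Counter()            # files carrying each tag
num_files = 0

def observe(file_tags: set[str]) -> None:
    global num_files
    num_files += 1
    for a in file_tags:
        tag_count[a] += 1
        for b in file_tags:
            if a != b:
                cooccur[a][b] += 1

def suggest(existing: set[str], k: int = 5) -> list[str]:
    def score(c: str) -> float:
        # Naive Bayes: P(c) * prod over existing tags of P(e | c), with add-one smoothing.
        s = (tag_count[c] + 1) / (num_files + 2)
        for e in existing:
            s *= (cooccur[c][e] + 1) / (tag_count[c] + 2)
        return s
    candidates = set(tag_count) - existing
    return sorted(candidates, key=score, reverse=True)[:k]
```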
> "the app stores/package managers aren't being flooded with thousands of vibe coded apps"
The state of claude code presently is definitely good enough to churn out low effort shovelware. Insofar as that isn't evidently happening, I can only speculate about the reasons. In no order, it may be one or several of these reasons: Lots of developers feel threatened by the technology and won't give it a serious whirl. Non-developers are still stuck in the mindset of writing software being something they can't do. The general public isn't as aware of the existence of agentic coding tools as we on HN are. The appstores are being flooded with slop, as they always have been, and some of that slop is now AI slop, but doesn't advertise this fact, and the appstore algorithms generally do some work to suppress the visibility of slop anyway. Most people don't have good ideas for new software and don't have the reflex to develop new software to scratch their itches, instead they are stuck in the mentality of software consumers. Just some ideas..
The purpose of an LLM is not to do your job, it's to do enough to convince your boss to sack you and pay the LLM company some portion of your salary.
To that end, it doesn't matter if it works or not, it just has to demo well.
> Current state-of-the-art models are, in my experience, very good at writing boilerplate code or very simple architecture especially in projects or frameworks where there are extremely well-known opinionated patterns (MVC especially).
Which makes sense, considering the absolutely massive amount of tutorials and basic HOWTOs that were present in the training data, as they are the easiest kind of programming content to produce.
What is novel code?
Other than that, given the right context (the SDK doc for a unique piece of hardware, for example) and a well organised codebase explained using CLAUDE.md, they work pretty well at filling out implementations. Just need to resist the temptation to prompt when the actual typing would take seconds.
Yep, LLMs are basically at the "really smart intern" level. Give them anything complex or that requires experience and they crash and burn. Give them a small, well-specified task with limited scope and they do reasonably well. And like an intern they require constant check-ins to make sure they're on track.
Of course with real interns you end up at the end with trained developers ready for more complicated tasks. This is useful because interns aren't really that productive if you consider the amount of time they take from experienced developers, so the main benefit is producing skilled employees. But LLMs will always be interns, since they don't grow with the experience.
My experience is opposite to yours. I have had Claude Code fix issues in a compiler over the last week with very little guidance. Occasionally it gets frustrating, but most of the time Claude Code just churns through issue after issue, fixing subtle code generation and parser bugs with very little intervention. In fact, most of my intervention is tool weaknesses in terms of managing compaction to avoid running out of context at inopportune moments.
It's implemented methods I'd have to look up in books to even know about, and shown that it can get them working. It may not do much truly "novel" work, but very little code is novel.
They follow instructions very well if structured right, but you can't just throw random stuff in CLAUDE.md or similar. The biggest issue I've run into recently is that they need significant guidance on process. My instructions tend to focus on three separate areas: 1) debugging guidance for a given project (for my compiler project, that means things like "here's how to get an AST dumped from the compiler" and "use gdb to debug crashes"; it sometimes did that without being told, but not consistently; with the instructions it usually does), 2) acceptance criteria - this does need reiteration, 3) telling it to run tests frequently, make small, testable changes, and frequently update a detailed file outlining the approach to be taken, progress towards it, and any outcomes of investigation during the work.
My experience is that with those three things in place, I can have Claude run for hours with --dangerously-skip-permissions and only step in to say "continue" or do a /compact in the middle of long runs, with only the most superficial checks.
It doesn't always provide perfect code every step. But neither do I. It does however usually move in the right direction every step, and has consistently produced progress over time with far less effort on my behalf.
I wouldn't have it start from scratch without at least some scaffolding that is architecturally sound yet, but it can often do that too, though that needs review before it "locks in" a bad choice.
I'm at a stage where I'm considering harnesses to let Claude work on a problem over the course of days without human intervention instead of just tens of minutes to hours.
> My experience is opposite to yours.
But that is exactly the problem, no?
It's like needing a prediction (e.g. about market behavior) and knowing that somewhere out there is a person who will make the perfect one. Instead of your problem being to make the prediction, it's now to find and identify that expert. Is the problem you've converted yours into any less hard, though?
I too have had some great minor successes; the current products are definitely a big step forward. However, every time I start anything more complex, I never know in advance whether I'll end up with utterly unusable code, even after corrections (with the "AI" always confidently claiming that now it has definitely fixed the problem), or with something usable.
All those examples such as yours suffer from one big problem: They are selected afterwards.
To be useful, you would have to make predictions in advance and then run the "AI" and have your prediction (about its usefulness) verified.
Selecting positive examples after the work is done is not very helpful. All it does is prove that at least sometimes somebody gets something useful out of using an LLM for a complex problem. Okay? I think most people understand that by now.
PS/Edit: Also, success stories we only hear about but cannot follow and reproduce may have been somewhat useful initially, but by now most people are past that: they're willing to give it a try and would like a link to a working, reproducible example. I understand that work can rarely be shared, but then those examples aren't very useful at this point. What would add real value for readers of these discussions now is for people who say they were successful to post the full, working, reproducible example.
EDIT 2: Another thing: I see comments from people who say they did tweak CLAUDE.md and got it to work. But the point is predictability and consistency! If you have that one project where you twiddled around with the file and added random sentences that you thought could get the LLM to do what you need, that's not very useful. We already know that trying out many things sometimes yields results. But we need predictability and consistency.
We are used to being able to try stuff, and when we get it working we could almost always confidently say that we found the solution, and share it. But LLMs are not that consistent.
My point is that these are not minor successes, and not occasional. Not every attempt is equally successful, but a significant majority of my attempts are. Otherwise I wouldn't be letting it run for longer and longer without intervention.
For me this isn't one project where I've "twiddled around with the file and added random sentences". It's an increasingly systematic approach to giving it an approach to making changes, giving it regression tests, and making it make small, testable changes.
I do that because I can predict with a high rate of success that it will achieve progress for me at this point.
There are failures, but they are few, and they're usually fixed simply by starting it over again from after the last successful change when it takes too long without passing more tests. Occasionally it requires me to turn off --dangerously-skip-permissions and guide it through a tricky part. But that is getting rarer and rarer.
No, I haven't formally documented it, so it's reasonable to be skeptical (I have however started packaging up the hooks and agents and instructions that consistently work for me on multiple projects. For now, just for a specific client, but I might do a writeup of it at some point) but at the same time, it's equally warranted to wonder whether the vast difference in reported results is down to what you suggest, or down to something you're doing differently with respect to how you're using these tools.
replace 'AI|LLM' with 'new hire' in your post for a funny outcome.
Replace 'new hire' with 'AI|LLM' in the updated post for a very sad outcome.
this is the first time I've ever seen this joke, well done!
I had a highly repetitive task (/subagents is great to know about), but I didn't get more advanced than a script that sent "continue\n" into the terminal where CC was running every X minutes. What was frustrating is that CC was inconsistent with how long it would run. Needing to compact was a bit of a curveball.
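For the record, that kind of nudge loop doesn't have to be anything fancier than periodically typing into the terminal. A minimal sketch, assuming CC is running in a tmux session named "cc" (the session name and the 10-minute interval are made-up placeholders, not what I actually ran):

```rust
use std::process::Command;
use std::thread::sleep;
use std::time::Duration;

fn main() {
    // Every 10 minutes, type "continue" followed by Enter into the tmux
    // pane where Claude Code is running. Session name and interval are
    // assumptions for illustration only.
    loop {
        let status = Command::new("tmux")
            .args(["send-keys", "-t", "cc", "continue", "Enter"])
            .status()
            .expect("failed to invoke tmux");
        if !status.success() {
            eprintln!("tmux send-keys failed; is the 'cc' session running?");
        }
        sleep(Duration::from_secs(10 * 60));
    }
}
```

It's blunt, and it does nothing for the compaction problem; it just keeps a long run from stalling on a prompt.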
The compaction is annoying, especially when it sometimes will then fail to compact with an error, forcing rewinding. They do need to tighten that up so it doesn't need so much manual intervention...
if claude generates the tests, runs those tests, applies the fixes without any oversight, it is a very "who watches the watchmen" situation.
That is true, so don't give it entirely free rein with that. I let Claude generate as many additional tests as it'd like, but I either produce high-level tests, or review a set generated by Claude first, before I let it fill in the blanks, and it's instructed very firmly to see a specific set of test cases as critical, and then increasingly "boxed in" with more validated test cases as we go along.
E.g. for my compiler, I had it build scaffolding to make it possible to run rubyspecs. Then I've had it systematically attack the crashes and failures mostly by itself once the test suite ran.
If you generate the tests, run those tests, apply fixes without any oversight, it is the very same situation. In reality, we have PR reviews.
Is it? Stuff like ripgrep, msmtp, … are very much one-man projects. And most packages on distros are maintained by only one person. Expertise is a thing, and getting reliable results is what differentiates experts from amateurs.
Gemini?
Good lord, that would be like the blind leading the daft.
Coding with Claude feels like playing a slot machine. Sometimes you get more or less what you asked for, sometimes totally not. I don’t think it’s wise or sane to leave them unattended.
If you spend most of your time in planning mode, that helps considerably. It will almost always implement whatever it is that you planned together, so if you're willing to plan extensively enough you'll more or less know what you're going to get out of it when you finally set it loose.
Yes, and I think a lot of people are addicted to gambling. The dizzying highs when you win cloud out the losses. Even when you're down overall.
You are absolutely right!
That was a very robust and comprehensive comment
I found that using opus helps a lot. It's eyewateringly expensive though so I generally avoid it. I pay through the API calls because I don't tend to code much.
Genuinely interesting how divergent people's experiences of working with these models are.
I've been 5x more productive using codex-cli for weeks. I have no trouble getting it to convert a combination of unusually-structured source code and internal SVGs of execution traces to a custom internal JSON graph format - very clearly out-of-domain tasks compared to their training data. Or mining a large mixed python/C++ codebase including low-level kernels for our RISCV accelerators for ever-more accurate docs, to the level of documenting bugs as known issues that the team ran into the same day.
We are seeing wildly different outcomes from the same tools and I'm really curious about why.
You are asking it to do what it already knows, by feeding it in the prompt.
how did you measure your 5x productivity gain? how did you measure the accuracy of your docs?
Translation is not creation.
> Beyond this, if you’re working on novel code, LLMs are absolutely horrible at doing anything. A lot of assumptions are made, non-existent libraries are used, and agents are just great at using tokens to generate no tangible result whatsoever.
Not my experience. I've used LLMs to write highly specific scientific/niche code and they did great, but obviously I had to feed them the right context (compiled from various websites and books converted to markdown in my case) to understand the problem well enough. That adds additional work on my part, but the net productivity is still very much positive because it's a one-time setup cost.
Telling LLMs which files they should look at was indeed necessary 1-2 years ago in early models, but I have not done that for the last half year or so, and I'm working on codebases with millions of lines of code. I've also never had modern LLMs use nonexistent libraries. Sometimes they try to use outdated libraries, but it fails very quickly once they try to compile and they quickly catch the error and follow up with a web search (I use a custom web search provider) to find the most appropriate library.
I'm convinced that anybody who says that LLMs don't work for them just doesn't have a good mental model of HOW LLMs work, and thus can't use them effectively. Or their experience is just outdated.
That being said, the original issue that they don't always follow instructions from CLAUDE/AGENT.md files is quite true and can be somewhat annoying.
> Not my experience. I've used LLMs to write highly specific scientific/niche code and they did great, but obviously I had to feed them the right context (compiled from various websites and books converted to markdown in my case) to understand the problem well enough. That adds additional work on my part, but the net productivity is still very much positive because it's a one-time setup cost.
Which language are you using?
Rust, Python, and a bit of C++. Around 80% Rust probably
I've been genuinely surprised how well GPT5 does with rust! I've done some hairy stuff with Tokio/Arena/SIMD that I thought I would have to hand hold it through, and it got it.
Yeah, it has been really good in my experience. I've done some niche WASM stuff with custom memory layouts and parallelism and it did great there too, probably better than I could've done without spending several hours reading up on stuff.
It's pretty good at Rust, but it doesn't understand locking. When I tried it, it just put a lock on everything and then didn't take care to make sure the locks were released as soon as possible. This severely limited the scalability of the system it produced.
But I guess it passed the tests it wrote, so, win? Though it didn't seem to understand why the test it wrote, where the client used TLS and the server didn't, wouldn't pass, and it required a lot of hand-holding along the way.
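For what it's worth, that locking problem usually comes down to how long the guard lives. A hypothetical sketch (not the actual code from that project) of the difference between holding a mutex across the expensive work and scoping it to the critical section:

```rust
use std::sync::Mutex;

struct Stats {
    hits: Mutex<u64>,
}

// The pattern the model tends to produce: the guard lives until the end
// of the function, so the expensive work below is serialized too.
fn record_slow(stats: &Stats, payload: &str) {
    let mut hits = stats.hits.lock().unwrap();
    *hits += 1;
    expensive_processing(payload); // still holding the lock here
}

// Scoping the guard releases the lock before the expensive work,
// so other threads aren't blocked while it runs.
fn record_fast(stats: &Stats, payload: &str) {
    {
        let mut hits = stats.hits.lock().unwrap();
        *hits += 1;
    } // guard dropped here
    expensive_processing(payload);
}

fn expensive_processing(_payload: &str) {
    // stand-in for real work
}

fn main() {
    let stats = Stats { hits: Mutex::new(0) };
    record_slow(&stats, "a");
    record_fast(&stats, "b");
    println!("hits = {}", stats.hits.lock().unwrap());
}
```

Both versions compile and pass the same tests, which is probably why the model never notices the difference.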
I've experienced similar things, but my conclusion has usually been that the model is not receiving enough context in such cases. I don't know your specific example, but in general it may not be incorrect to put an Arc/Lock on many things at once (or to use Arc instead of Rc, etc.) if your future plans are to parallelize several parts of your codebase. The model just doesn't know what your future plans are, and it errs on the side of "overengineering" solutions for all kinds of future possibilities. I found that this is a bias these models tend to have; many times their code is overengineered for features I will never need and I have to tell them to simplify, but that's expected. How would the model know what I do and don't need in the future without me giving it all the right context?
The same thing is true for tests. I found their tests to be massively overengineered, but that's easily fixed by telling them to adopt the testing style from the rest of the codebase.
I'm shocked that this isn't talked about more. The pro-AI astroturfing done everywhere (well, HN and Reddit anyway) is out of this world.
> and it adds _some_ value by thinking of edge cases I might’ve missed, best practices I’m unaware of, and writing better grammar than I do.
This is my most consistent experience. It is great at catching the little silly things we do as humans. As such, I have found them to be most useful as PR reviewers, whose output you take with a pinch of salt.
> It is great at catching the little silly things we do as humans.
It's great some of the time, but the great draw of computing was that it would always catch the silly things we do as humans.
If it didn't, we'd change the code and the next time (and forever onward) it would catch that case too.
Now we're playing whack-a-mole, pleading with words like "CRITICAL" and bold text in our .cursorrules to try and make the LLM pay attention; maybe it works today, might not work tomorrow.
Meanwhile the C-suite pushing these tools onto us still happily blame the developers when there's a problem.
> It's great some of the time, but the great draw of computing was that it would always catch the silly things we do as humans.
People are saying that you should write a thesis-length file of rules, and they’re the same people balking at programming language syntax and formalism. Tools like linters, test runners, and compilers are reliable in the sense that you know exactly where the guardrails are and where to focus mentally to solve an issue.
Nailed it. The other side of the marketing hype cycle will be saner, when the market forces sort the wheat from the chaff.
Too much money was invested; it needs to be sold.
> brands injecting themselves into conversations on Reddit, LinkedIn, and every other public forum.
Don't forget HackerNews.
Every single new release from OpenAI and other big AI firms attracts a lot of new accounts posting surface-level comments like "This is awesome" and then a few older accounts that have exclusively posted on previous OpenAI-related news to defend them.
It's glaringly obvious, and I wouldn't be surprised if at least a third of the comments on AI-related news is astroturfing.
Sam Altman would agree with you that those posts are bots and lament it, but would simultaneously remain (pretend to be?) absurdly oblivious about his own fault in creating that situation.
https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-...
> "This is awesome"
Or the "I created 30 different .md instruction files and AI model refactored/wrote from scratch/fixed all my bugs" trope.
> a third of the comments on AI-related news is astroturfing.
I wouldn't be surprised if it's even more than that. And, ironically, the astroturfing is probably aided by the capability of said models to spew out text.
I personally always love the “I wrote an entire codebase with claud” posts where the response to “Can we see it?” is either the original poster disappearing into the mist until the next AI thread or “no I am under an NDA. My AI-generated code is so incredible and precious that my high-paying job would be at risk for disclosing it”
Someone posted these single file examples: https://github.com/joaopauloschuler/ai-coding-examples/tree/...
And they are usually 10x more productive as well!
NDA on AI generated code is funny since model outputs are technically someone else’s code. It’s amazing how we’re infusing all kinds of systems with potential license violations
If anyone actually believed those requests to see code were sincere, or if they at least generated interesting discussion, people might actually respond. But the couple of times I've linked to a blog post someone wrote about their vibe-coding experience in the comments, someone invariably responds with an uninteresting shallow dismissal shitting all over the work. It didn't generate any interesting discussion, so I stopped bothering.
https://mitchellh.com/writing/non-trivial-vibing went round here recently, so clearly LLMs are working in some cases.
And I think, in this blog post, the author stated that he does heavy editing of what’s generated. So I don’t know how much time is saved actually. You can get the same kind of inspiration from docs, books, or some SO answer.
Haters gonna hate, but the haters aren't always wrong. If you just want people to agree with you, that's not a discussion.
Honestly, I've generated some big-ish codebases with AI and have said so, and then backed off when asked, because a) I still want to establish more confidence in the codebase and b) my employment contract gleefully states everything I write belongs to my employer. Both of those things make me nervous.
That said, I have no doubt there are also bots setting out to generate FOMO
Everything you wrote belongs to them. But it's not you who's the author, it's Claude.
My experience is kind of the opposite of what you describe (working in big tech). Like, I'm easily hitting 10x levels of output nowadays, and it's purely enabled by agentic coding. I don't really have an answer for why everyone's experience is so different - but we should be careful to not paint in broad strokes our personal experience with AI: "everyone knows AI is bad" - nope!
What I suspect is that it _heavily_ depends on the quality of the existing codebase and how easy the language is to parse. A language like C++ really hurts the agent's ability to do anything, unless you're using a very constrained version of it. Similarly, spaghetti codebases which do stupid stuff like asserting true/false in tests with poor error messages, and that kind of thing, also cause the agents to struggle.
Basically - the simpler your PL and codebase, the better the error and debugging messages, the easier it is to be productive with the AI agents.
> we know that creating CLAUDE.md or cursorrules basically does nothing
While I agree, the only cases where I actually created something barely resembling useful output (while still of subpar quality) were after putting lines like these in CLAUDE.md:
YOUR AIM IS NOT TO DELIVER A PROJECT. YOU AIM IS TO DO DEEP, REPETITIVE E2E TESTING. ONLY E2E TESTS MATTER. BE EXTREMELY PESSIMISTIC. NEVER ASSUME ANYTHING WORKS. ALWAYS CHECK EVERY FEATURE IN AT LEAST THREE DIFFERENT WAYS. USE ONLY E2E TESTS, NEVER USE OTHER TYPES OF TEST. BE EXTREMELY PESSIMISTIC. NEVER TRUST ANY CODE UNLESS YOU DEEPLY TEST IT E2E
REMEMBER, QUICK DELIVERY IS MEANINGLESS, IT'S NOT YOUR AIM. WORK VERY SLOWLY, STEP BY STEP. TAKE YOUR TIME AND RE-VERIFY EACH STEP. BE EXTREMELY PESSIMISTIC
With this kind of setup, it kind of attempts to work in a slightly different way than it normally does and is able to build some very basic stuff, although frankly I'd do it much better myself, so I'm not sure about the economics here. Maybe for people who don't care or won't be maintaining this code it doesn't matter, but personally I'd never use it in my workplace.
My cynical working theory is this kind of thing basically never works but sometimes it just happens to coincide with useful code.
omg imagine giving these instructions to a junior developer to accompany his task.
> On the ground, we know that creating CLAUDE.md or cursorrules basically does nothing.
I don't agree with this. LLMs will go out of their way to follow any instruction they find in their context.
(E.g. i have "I love napkin math" in my kagi Agent Context, and every LLM will try to shoehorn some kind of napkin math into every answer.)
Cursor and Co do not follow these instructions because the instructions either:
(a) never make it into the context in the first place, or (b) fall out of the context window.
I use git so hallucinogenic AI decisions are easy to revert. Why wouldn't I ask it to clean up years of tech debt while I work on something novel?
The answer is really trivial and really embarrassingly simple, once you remove the engineering/functional/world-improvement goggles. The answer is: because the rich folks invested a ton of money and they need it to work. Or at least to make most white-collar work dependent on it, quality be damned. Hence the ever-increasing pushing, nudging, advertising, and offering to use the crap-tech everywhere. It seems, for now, that it will not win over the engineers. Unfortunately, it seems to work with most of the general population. Every lazy recruiter out there is now using ChatGPT to generate job summaries and "evaluate" candidates. Every general-deadweight "office worker" you meet at every company is happy to use it to produce more PowerPoints, slides, and documents for you to drown in. And I won't even mention the "content" business model of the influencers.
At our place we have two types of users. One is the deep evangelist, who says it revolutionised their office work and has no idea that it might have accuracy problems. I guess those are the people who just create a lot of hot air.
The others tried it and ran into the obvious Achilles heels and are now pretty cautious. But use it for a thing or two.
They're using it, yes, but it's still heavily subsidized by VC, and it remains to be seen whether it will stay as popular as prices percolate upwards.
Either way, layoffs all the same if this "doesn't work".
This idea is getting a lot of attention right now.
e.g. https://www.noahpinion.blog/p/americas-future-could-hinge-on...
I find it funny that the page subheader is "If the economy's single pillar goes down, Trump's presidency will be seen as a disaster".
Is it not a disaster already? The fast slide towards autocracy should certainly be viewed as a disaster if nothing else.
I actually hope to find better answers here than on the Cursor forum, where people seem to be basically saying "it's your fault" instead of answering the actual question, which is about trust, process, and real-world use of agents.
So far it's just reinforcing my feeling that none of this is actually used at scale. We use AI as relatively dumb companions, let them go wilder on side projects which have looser constraints, and agents are pure hype (or for very niche use cases).
What specific improvements are you hoping for? Without them (in the original forum post) giving concrete examples, prompts, or methodology – just stating "I write good prompts" – it's hard to evaluate or even help them.
They came in primed against agentic workflows. That is fine. But they also came in without providing anything that might have given other people the chance to show that their initial assumptions were flawed.
I've been working with agents daily for several months. Still learning what fails and what works reliably.
Key insights from my experience:
- You need a framework (like agent-os or similar) to orchestrate agents effectively
- Balance between guidance and autonomy matters
- Planning is crucial, especially for legacy codebases
Recent example: Hit a wall with a legacy system where I kept maxing out the context window with essential background info. After compaction, the agent would lose critical knowledge and repeat previous mistakes.
Solution that worked:
- Structured the problem properly
- Documented each learning/discovery systematically
- Created specialized sub-agents for specific tasks (keeps context windows manageable)
Only then could the agent actually help navigate that mess of legacy code.
So at what point are you doing more work on the agent than working on the code directly? And what are you losing in the process of shifting from code author to LLM manager?
My experience is that once I switch to this mode, when something blows up I'm basically stuck with a bunch of code that I only sort of know, even though I reviewed it. I just don't have the same insight as I would if I wrote the code, no ownership, even if it was committed in my name. Any misconceptions I had about how things work, I'll still have, because I never had to work through the solution, even if I ended up with a final working solution.
With all that additional work, would you assess that you have been more cost-effective than just doing these tasks yourself with an AI companion?
sounds like a huge waste of time
The reason why OP is getting terrible results is because he's using Cursor, and Cursor is designed to ruthlessly prune context to curtail costs.
Unlike the model providers, Cursor has to pay the retail price for LLM usage. They're fighting an ugly marginal price war. If you're paying more for inference than your competitors, you have to choose to either 1) deliver equal performance as other models at a loss or 2) economize by way of feeding smaller contexts to the model providers.
Cursor is not transparent on how it handles context. From my experience, it's clear that they use aggressive strategies to prune conversations to the extent that it's not uncommon that cursor has to reference the same file multiple times in the same conversation just to know what's going on.
My advice to anyone using Cursor is to just stop wasting your time. The code it generates creates so much debt. I've moved on to Codex and Claude and I couldn't be happier.
What deal is GitHub Copilot getting then? They also offer all SOTA models. Or is the performance of those models also worse there?
> Or is the performance of those models also worse there?
The context and output limit is heavily shrunk down on github copilot[0]. That's the reason why for example Sonnet 4.5 performs noticeably worse under copilot than in claude code.
[0] https://models.dev/?search=sonnet+4.5
Github Copilot is likely running models at or close to cost, given that Azure serves all those models. I haven't used Copilot in several months so I can't speak to its performance. My perception back then was that its underperformance relative to peers was because Microsoft was relatively late to the agentic coding game.
I've had agents find several production bugs that slipped past me (as I couldn't dedicate enough time to chase down relatively obscure and isolated bug reports).
Of course there are many more bugs they'll currently not find, but when this strategy costs next to nothing (compared to a SWE spending an hour spelunking) and still works sometimes, the trade-off looks pretty good to me.
Exactly. The actual business value is way smaller than people think, and it's honestly frustrating. Yes, they can write boilerplate; yes, they sometimes do better than humans in well-understood areas. But that's negligible considering all the huge issues that come with them: Big Tech vendor lock-in, data poisoning, unverifiable information, death of authenticity, death of creativity, ignorance of LLM evangelists, power hungriness in a time where humanity should look at how to decrease emissions, theft of original human work, theft of data that Big Tech has gotten away with for way too long. It's puzzling to me how people actually think this is a net benefit to humanity.
Most of the issues you listed are moral and not technical. Especially "power hungriness in a time where humanity should look at how to decrease emissions", this may be what you think humanity should do but that is just that, what you think.
I derive a lot of business value from them, many of my colleagues do too. Many programmers that were good at writing code by hand are having lots of success with them, for example Thorsten Ball, Simon Willison, Mitchell Hashimoto. A recent example from Mitchell Hashimoto: https://mitchellh.com/writing/non-trivial-vibing.
> It's puzzling to me how people actually think this is a net benefit to humanity.
I've used them personally to quickly spin up a microblog where I could post my travel pictures and thoughts. The idea of making the interface like Twitter (since that's what I use and know) was from me, not wanting to expose my family and friends to any specific predatory platform like Twitter, Instagram, etc. was also from me, Supabase as the backend was from a colleague (helped a lot!), and the code was all Claude. The result is that they were able to enjoy my website, including my grandparents, who just had to paste a URL. I like to think of it as a perhaps very small but net benefit for a very small part of humanity.
if climate change doesn't matter why the hell should anyone care about your vibe coded personal twitter clone?
Is it a moral judgement to say that when the stove is on fire, we shouldn't be pouring more grease on it?
Is it a moral judgement to say that you shouldn't pick up a bear cub with its mother nearby?
If neither of these are moral judgements, then why would it be a moral judgement to say that humanity should be seeking to reduce its emissions? Just because you personally don't like it, and want to keep doing whatever you like?
So moral issues are not relevant? Typical tech enthusiast mindset unfortunately...
from a cursory (heh) reading of the cursor forum, it is clear that the participants in the chat are treating ai like the adeptus mechanicus treats the omnissiah.... the machine spirits aren't cooperating with them though.
> what prompted this post? well just tried to work with gpt5 and gemini pro
that's the problem. GPT5 doesn't work for coding. literally it burns tokens and does nothing, in my experience. Claude 3.5, 4 and 4.5, on the other hand, are pretty solid and make lots of forward progress with minimal instruction. It takes iteration, some skill, and some hand coding! Yes, they forget things and do random things sometimes, but for me it's a big boost.
I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
The simplest explanation would be “You’re using it wrong…”, but I have the impression that this is not the primary reason. (Although, as an AI systems developer myself, you would be surprised by the number of users who simply write “fix this” or “generate the report” and then expect an LLM to correctly produce the complex thing they have in mind.)
It is true that there is an “upper management” hype of trying to push AI into everything as a magic solution for all problems. There is certainly an economic incentive from a business valuation or stock price perspective to do so, and I would say that the general, non-developer public is mostly convinced that AI is actually artificial intelligence, rather than a very sophisticated next-word predictor.
While claiming that an LLM cannot follow a simple instruction sounds, at best, very unlikely, it remains true that these models cannot reliably deliver complex work.
Another theory: you have some spec in your mind, write down most of it and expect the LLM to implement it according to the spec. The result will be objectively a deviation from the spec.
Some developers will either retrospectively change the spec in their head or are basically fine with the slight deviation. Other developers will be disappointed, because the LLM didn't deliver on the spec they clearly hold in their head.
It's a bit like a psychological false-memory effect where you misremember, and/or some people are more flexible in their expectations and accept "close enough" while others won't accept this.
At least, I noticed both behaviors in myself.
This is true. But, it's also true of assigning tasks to junior developers. You'll get back something which is a bit like what you asked for, but not done exactly how you would have done it.
Both situations need an iterative process to fix and polish before the task is done.
The notable thing for me was, we crossed a line about six months ago where I'd need to spend less time polishing the LLM output than I used to have to spend working with junior developers. (Disclaimer: at my current place-of-work we don't have any junior developers, so I'm not comparing like-with-like on the same task, so may have some false memories there too.)
But I think this is why some developers have good experiences with LLM-based tools. They're not asking "can this replace me?" they're asking "can this replace those other people?"
> They're not asking "can this replace me?" they're asking "can this replace those other people?"
People in general underestimate other people, so this is the wrong way to think about this. If it can't replace you then it can't replace other people typically.
But a junior developer can learn and improve based on the specific feedback you give them.
GPT5 will, at least to a first approximation, always be exactly as good or as bad as it is today.
What I want to see at this point are more screencasts, write-ups, anything really, that depict the entire process of how someone expertly wrangles these products to produce non-trivial features. There's AI influencers who make very impressive (and entertaining!) content about building uhhh more AI tooling, hello worlds and CRUD. There's experienced devs presenting code bases supposedly almost entirely generated by AI, who when pressed will admit they basically throw away all code the AI generates and are merely inspired by it. Single-shot prompt to full app (what's being advertised) rapidly turns to "well, it's useful to get motivated when starting from a blank slate" (ok, so is my oblique strategies deck but that one doesn't cost 200 quid a month).
This is just what I observe on HN, I don't doubt there's actual devs (rather than the larping evangelist AI maxis) out there who actually get use out of these things but they are pretty much invisible. If you are enthusiastic about your AI use, please share how the sausage gets made!
https://mitchellh.com/writing/non-trivial-vibing (not me)
From the article
Some people like to think for a while (and read docs) and just write it right on the first go. Some people like to build slowly and get a sense of where to go at each step. But in all of those approaches, there's a heavy factor of expertise needed from the person doing the work. And this expertise does not come for free.
I can use an agentic workflow fine and generate code like anyone else. But the process is not enjoyable and there's no actual gain. Especially in an enterprise setting where you're going to use the same stack for years.
https://simonwillison.net/2025/Oct/8/claude-datasette-plugin...
this is definitely closer to what I had in mind but it's still rather useless because it just shows what winning the lottery is like. what I am really looking for is neither the "Claude oneshot this" nor the "I gave up and wrote everything by hand" case but a realistic, "dirty" day-to-day work example. I wouldn't even mind if it was a long video (though some commentary would be nice in that case).
This is very similar to Tesla's FSD adoption in my mind.
For some (me), it's amazing because I use the technology often despite its inaccuracies. Put another way, it's valuable enough to mitigate its flaws.
For many others, it's on a spectrum between "use it sometimes but disengage any time it does something I wouldn't do" and "never use it" depending on how much control they want over their car.
In my case, I'm totally fine handing driving off to AI (more like ML + computer vision) most times but am not okay handing off my brain to AI (LLMs) because it makes too many mistakes and the work I'd need to do to spot-check them is about the same as I'd need to put in to do the thing myself.
> The simplest explanation would be...
The simplest explanation is that most of us are code monkeys reinventing the same CRUD wheel over and over again, gluing things together until they kind of work and calling it a day.
"developers" is such a broad term that it basically is meaningless in this discussion
or, and get this, software development is an enormous field with 100s of different kinds of variations and priorities and use cases.
lol.
another option is trying to convince yourself that you have any idea what the other 2,000,000 software devs are doing and think you can make grand, sweeping statements about it.
there is no stronger mark of a junior than the sentiment you're expressing
Well I know for a fact there are more code monkeys than rocket scientists working on advanced technologies. Just look at job offers really...
Anyone with any kind of experience in the industry should be able to tell that so idk where you're going with your "junior" comment. Technically I'm a senior in my company and I'm including myself in the code monkey category, I'm not working on anything revolutionary, as most devs are, just gluing things together, probably things that have been made dozens of times before and will be done dozens of time later... there is no shame in that, it's just the reality of software development. Just like most mechanics don't work on ferraris, even if mechanics working on ferraris do exist.
From my friends, working in small startups and large megacorps, no one is working on anything other than gluing existing packages together: a bit of ES, a bit of Postgres, a bit of CRUD. Most of them worked on more technical things while getting their degrees 15 years ago than they do right now, while being in the top 5% of earners in the country. 50% of their job consists of bullshitting their n+1 to get a raise and some other variant of office politics.
It's kind of like when people think the software they use is mainstream and everything else is niche.
> I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
Some possible reasons:
With all of that, my success rate is pretty great and the statement about the tech not being able to "...barely follow a simple instruction" doesn't hold. Then again, most of my projects are webdev adjacent in mostly mainstream stacks, YMMV.
> Then again, most of my projects are webdev adjacent in mostly mainstream stacks
This is probably the most significant part of your answer. You are asking it to do things for which there are a ton of examples of in the training data. You described narrowing the scope of your requests too, which tends to be better.
It's true though, they can't. It really depends on what they have to work with.
In the fixed world of mathematics, everything could in principle be great. In software, it can in principle be okay even though contexts might be longer. When dealing with new contexts in something like real life, but different-- such as a story where nobody can communicate with the main characters because they speak a different language, then the models simply can't deal with it, always returning to the context they're familiar with.
When you give them contexts that are different enough from the kind of texts they've seen, they do indeed fail to follow basic instructions, even though they can follow seemingly much more difficult instructions in other contexts.
Well we are all doing different tasks on different codebases too. It's very often not discussed, even though it's an incredibly important detail.
But the other thing is that, your expectations normalise, and you will hit its limits more often if you are relying on it more. You will inevitably be unimpressed by it, the longer you use it.
If I use it here and there, I am usually impressed. If I try to use it for my whole day, I am thoroughly unimpressed by the end, having had to re-do countless things it "should" have been capable of based on my own past experience with it.
> Well we are all doing different tasks on different codebases too. It's very often not discussed, even though it's an incredibly important detail.
Absolutely nuts that I had to scroll down this far to find the answer.
Totally agree.
Maybe it's the fact that every software development job has different priorities, stakeholders, features, time constraints, programming models, languages, etc. Just a guess lol
> I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
My hypothesis is that developers work on different things and while these models might work very well for some domains (react components?) they will fail quickly in others (embedded?). So one one side we have developers working on X (LLM good at it) claiming that it will revolutionize development forever and the other side we have developers working on Y (LLM bad at it) claiming that it's just a fad.
I think this is right on, and the things that LLMs excel at (React components was your example) are really the things that there's just such a ridiculous amount of training data for. This is why LLMs are not likely to get much better at code. They're still useful, don't get me wrong, but the 5x expectations need to get reined in.
A breadth and depth of training data is important, but modern models are excellent at in-context learning. Throw them documentation and outline the context for what they're supposed to do and they will be able to handle some out-of-distribution things just fine.
I would love to see some detailed failure cases of people who used agentic LLMs and didn't make it work. Everyone is asking for positive examples, but I want to see the other side.
"expect an LLM to correctly produce the complex thing they have in mind"
My guess is that for some types of work people don't know what the complex thing they have in mind is ex ante. The idea forms and is clarified through the process of doing the work. For those types of task there is no efficiency gain in using AI to do the work.
Why not? Just start iterating in chunks alongside the LLM and change gears/plan/spec as you learn more. You don't have to one-shot everything.
"Just start iterating in chunks alongside the LLM".
For those types of tasks it probably takes the same amount of time to form the idea without AI as with AI; this is what METR found in its study of developer productivity.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o... https://arxiv.org/abs/2507.09089
That study design has some issues. But let's say it takes me the same amount of time, the agentic flow is still beneficial to me. It provides useful structure, helps with breaking down the problem. I can rubber duck, send off web research tasks, come back to answer questions, etc., all within a single interface. That's useful to me, and especially so if you have to jump around different projects a lot (consultancy). YMMV.
"That study design has some issues. " This is a study that tries to be scientific, unlike developer self reports and CEO promises of 10x.
Can you point to a better study on the impact of AI on developer productivity? The only other one I can think of finds a 20% uplift in productivity.
https://www.youtube.com/watch?v=tbDDYKRFjhk
> Why do LLM experiences vary so much among developers?
The question assumes that all developers do the same work. The kind of work done by an embedded dev is very different from the work of a front-end dev which is very different from the kind of work a dev at Jane Street does. And even then, devs work on different types of projects: greenfield, brownfield and legacy. Different kind of setups: monorepo, multiple repos. Language diversity: single language, multiple languages, etc.
Devs are not some kind of monolith army working like robots in a factory.
We need to look at these factors before we even consider any sort of ML.
Probably a good chunk of the differences in experience is this: https://news.ycombinator.com/item?id=45573521
> [..] possibly the repo is too far off the data distribution.
(Karpathy's quote)
I would say they can't reliably deliver simple work. They often can, but reliability, to me, means I can expect it to work every time. Or at least as much as any other software tool, with failure rates somewhere in the vicinity of 1 in 10^5, 1 in 10^6. LLMs fail on the order of 1 in 10 times for simple work. And rarely succeed for complex work.
That is not reliable, that's the opposite of reliable.
One has to look at the alternatives. What would I do if not use the LLM to generate the code? The two answers are “coding it myself” and “asking another dev to code it”. And neither of those comes anywhere near a 1-in-10^5 failure rate. Not even close.
>I think what we should really ask ourselves is: “Why do LLM experiences vary so much among developers?”
Two of the key skills needed for effective use of LLMs are writing clear specifications (written communication), and management, skills that vary widely among developers.
There are no clearer specifications than code, and I can manage my toolset just fine (lines of config, aliases, and whatnot to make my job easier). That has allowed me to deliver good results fast, without worrying whether it's right this time.
I've known lots of people that don't know how to properly use Google, and Google has been around for decades. "You're using it wrong" is partially true, I'd say more something like "it is a new tool that changes very quickly, you have to invest a lot of time to learn how to properly use it, most people using it well have been using it a lot over the last two years, you won't catch up in an afternoon. Even after all that time, it may not be the best tool for every job" (proof on the last point being Karpathy saying he wrote nanochat mostly by hand).
It is getting easier and easier to get good results out of them, partially by the models themselves improving, partially by the scaffolding.
> non-developer public is mostly convinced that AI is actually artificial intelligence, rather than a very sophisticated next-word predictor
This is a false dichotomy that assumes we know way more about intelligence than we actually do, and also assumes than what you need to ship lots of high quality software is "intelligence".
>While claiming that an LLM cannot follow a simple instruction sounds, at best, very unlikely, it remains true that these models cannot reliably deliver complex work.
"reliably" is doing a lot of work here. If it means "without human guidance" it is true (for now), if it means "without scaffolding" it is true (also for now), if it means "at all" it is not true, if it means it can't increase dev productivity so that they ship more at the same level of quality, assuming a learning period, it is not true.
I think those conversations would benefit a lot from being more precise and more focused, but I also realize that it's hard to do so because people have vastly different needs, levels of experience, expectations ; there are lots of tools, some similar, some completely different, etc.
To answer your question directly, ie “Why do LLM experiences vary so much among developers?”: because "developer" is a very very very wide category already (MISRA C on a car, web frontend, infra automation, medical software, industry automation are all "developers"), with lots of different domains (both "business domains" as in finance, marketing, education and technical domains like networking, web, mobile, databases, etc), filled with people with very different life paths, very different ways of working, very different knowledge of AIs, very different requirements (some employers forbid everything except a few tools), very different tools that have to be used differently.
I sometimes meet devs who are "using it wrong" with under-baked prompts.
But mostly my experience is that people who regularly get good output from AI coding tools fall into these buckets:
A) Very limited scope (e.g. single, simple method with defined input/output in context)
B) Aren't experienced enough in the target domain to see the problems with the AI's output (let's call this "slop blindness")
C) Use AI to force multiple iterations of the same prompt to "shake out the bugs" automatically instead of using the dev's time
I don't see many cases outside of this.
It’s because people are using different tiers of AI and different models. And many people don’t stick with it long enough to get a more nuanced outlook of AI.
Take Joe. Joe sticks with AI and uses it to build an entire project. Hundreds of prompts. Versus your average HNer who thinks he’s the greatest programmer in the company and thinks he doesn’t need AI but tries it anyway. Then AI fails and fulfills his confirmation bias and he never tries it again.
At some point people won't care to convince you and you will be left to adapt or fade away.
That's where I stand now. I use LLMs in some agentic coding way 10h/day to great avail. If someone doesn't see or realize the value, then that's their loss.
Because the hype cycle on the original AI wave was fading so folks needed something new to buzz about to keep the hype momentum going. Seriously, that’s the reason.
Same with “context engineering”
Do you have any reasoning or anything else to back this up? Edit: honest question, not a diss or a dismissal.
It's an interesting take, one that I believe could be true, but it sounds more like an opinion than a thesis or even fact.
Folks aren’t seeing measurable returns on AI. Lots written about this. When the bean counters show up, the easiest way to get out of jail is to say “Oh X? Yeah that was last year, don’t worry about it… we’re now focused on Y which is where the impact will come from.”
Every hype cycle goes through some variation of this evolution. As much as folks try to say AI is different it’s following the same very predictable hype cycle curve.
The Gartner Hype Cycle [1]. I wonder where AI should be put on the graph, here in the fall of 2025. Just past the peak? Or are we not there yet?
[1]: https://en.wikipedia.org/wiki/Gartner_hype_cycle
As a scientist, there is a ton of boilerplate code that is just slightly different for every data set, so I need to write it myself each time. Coding agents solve a lot of that. At least until you are halfway through something and you realize Claude didn't listen when you wrote 5 times in capital letters NEVER MAKE UP DATA YOU ARE NOT ALLOWED TO USE np.random IN PLACE OF ACTUAL DATA. It's all kind of wild, because when it works it's great and when it doesn't there's no clear failure state. So if I put on my LLM marketing hat, I guess the solution is to have an agent that comes behind the coding agent and checks to see if it does its job. We can call it the Performance Improvement Plan Agent (PIPA). PIPAs allow real-time monitoring of coding agents to make sure they are working and not slacking off, allowing HR departments and management teams to have full control over their AI employees. Together we will move into the future.
PIPA scary
Nigh thirty years ago when dabbling in AI I read a quote I will paraphrase as:
"when you hear 'intelligent agent'; think 'trainable ant'"
Say more?
Looks like it comes from Scientific American: https://spaf.cerias.purdue.edu/~spaf/Yucks/V5/msg00004.html
There are widely divergent views here. It'd be hard to have a good discussion unless people mention what tasks they're attempting and failing at. And we'll also have to ask if those tasks (or categories) are representative of mainstream developer effort.
Without mentioning what the LLMs are failing or succeeding at, it's all noise.
We'd need:
- language/framework
- problem space/domain
- SRE experience level
- LLM (model/version)
- agentic harness (claude code, codex, copilot, etc.)
- observed failure modes or win states
- experience wrangling these systems ("I touched ChatGPT once" vs "I spend 12h/day in Claude Code")
And there's more, is the engineer working on a single codebase for 10 years or do they jump around various projects all the time. Is it more greenfield, or legacy maintenance. Is it some frontier never-before-seen research project or CRUD? And so on.
For me, a big issue is that the performance of the AI tools varies enormously for different tasks. And it's not that predictable when it will fail, which does lead to quite a bit of wasted time. And while having more experience prompting a particular tool is likely to help here, it's still frustrating.
There is a bit of overlap between the stuff you use agents for and the stuff that AI is good at, like generating a bunch of boilerplate for a new thing from scratch. That makes agent mode the more convenient way for me to interact with AI for the stuff it's useful for in my case. But my experience with these tools is still quite limited.
When it works well you both normalise your expectations, and expand your usage, meaning you will hit its limits, and be even more disappointed when it fails at something you've seen it do well before.
The replies are all a variation of: "You're using it wrong"
> The replies are all a variation of: "You're using it wrong"
I don't know what you are trying to say with your post. I mean, if two persons feed their prompts to an agent and while one is able to reach their goals the other fails to achieve anything, would it be outlandish to suggest one of them is using it right whereas the other is using it wrong? Or do you expect the output to not reflect the input at all?
I expect the $500 billion magic machine to be magic. Especially after all the explicit threats to me and my friends livelihoods.
And yours is also "you are using it wrong" in the spirit.
Are they doing the same thing? Are they trying to achieve the same goals, but fail because one is lacking some skill?
One person may be someone who needs a very basic thing like creating a script to batch-rename his files, another one may be trying to do a massive refactoring.
And while the former succeeds, the latter fails. Is it only because someone doesn't know how to use agentic AI, or because agentic AI is simply lacking?
And some more variations that, in my anecdotal experience make or break the agentic experience:
* strictness of the result - a personal blog entry vs a complex migration to reform a production database of a large, critical system
* team constraints - style guides, peer review, linting, test requirements, TDD, etc
* language, frameworks - a quick Node.js app vs a Java monolith, e.g.
* legacy - a 12+ year Django app vs a greenfield rust microservice
* context - complex, historical, nonsensical business constraints and flows vs a simple crud action
* example body - a simple crud TODO in PHP or JS, done a million times, vs an event-sourced, hexagonally architected, cryptographic signing system for govt data.
Of course the output reflects the input. That's why it's a bad idea to let the LLM run in a loop without constraints; it's simple maths: if each step is 99% accurate, after 5 steps you're at about 95%, after 10 steps about 90%, and after 100 steps only about 37%.
For LLMs to be effective, you (or something else) need to constantly find the errors and fix them.
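A quick back-of-the-envelope check of that compounding, assuming each step is independent and succeeds 99% of the time (a throwaway sketch, not anything from the thread):

```rust
fn main() {
    // Probability that every step succeeds, if each independent step
    // has a 99% success rate.
    let per_step: f64 = 0.99;
    for steps in [1, 5, 10, 50, 100] {
        let all_ok = per_step.powi(steps);
        println!("{:>3} steps: {:.1}% chance of zero errors", steps, all_ok * 100.0);
    }
}
```

That prints roughly 99.0%, 95.1%, 90.4%, 60.5% and 36.6%, which is where the "about 37% after 100 steps" figure comes from.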
I've had good experience getting a different LLM to perform a technical review, then feeding that back to the primary LLM, but telling it to evaluate the feedback rather than just blindly accepting it.
You still have to have a hand on the wheel, but it helps a fair bit.
I've seen LLMs catch and fix their own mistakes and literally tell me they were wrong and that they are fixing their self-made mistake. The analogy is therefore not accurate, as the error rate can actually decrease over time.
If we assume that each action has 99% success rate, and when it fails, it has 20% chance of recovery, and if the math here by gemini 2.5 pro is correct, that means the system will tend towards 95% chance of success.
===
In equilibrium, the probability of leaving the Success state must equal the probability of entering it. Let P(S) be the probability of being in Success and P(F) the probability of being in Failure. Leaving Success happens with probability P(S) * 0.01, and entering it happens with probability P(F) * 0.20. Since P(S) + P(F) = 1, we can say P(F) = 1 - P(S). Substituting that in: 0.01 * P(S) = 0.20 * (1 - P(S)), which gives 0.21 * P(S) = 0.20, so P(S) = 0.20 / 0.21 ≈ 0.95.
===
I saw them too. And after that, they slip in another mistake.
In my experience it depends on which way the wind is blowing, random chance, and a lot of luck.
For example, I was working on the same kind of change across a few dozen files. The prompt input didn't change, the work didn't change, but the "AI" got it wrong as often as it got it right. So was I "using it wrong" or was the "AI" doing it wrong half the time? I tried several "AI" offerings and they all had similar results. Ultimately, the "AI" wasted as much time as it saved me.
I've certainly gotten a lot of value from adapting my development practices to play to LLM's strengths and investing my effort where they have weaknesses.
"You're using it wrong" and "It could work better than it does now" can be true at the same time, sometimes for the same reason.
I find it quite funny that one of the users actually posted a fully AI-generated reply (dramatically different grammar and structure than their other posts).
Which is true. Like launching a Ferrari at 200mph without steering doesn’t take anyone anywhere, it’s just a very painful waste of money
Exactly one of two things is true:
1. The tool is capable of doing more than OP has been able to make it do
2. The tool is not capable of doing more than OP has been able to make it do.
If #1 is true, then... he must be using it wrong. OP specifically said:
> Please pour in your responses please. I really want to see how many people believe in agentic and are using it successfully
So, he's specifically asking people to tell him how to use it "right".
People want predictability from LLMs, but these things are inherently stochastic, not deterministic compilers. What’s working right now isn’t "prompting better," it’s building systems that keep the LLM on track over time: logging, retrying, verifying outputs, giving it context windows that evolve with the repo, etc.
That’s why we’ve been investing so much in multi-agent supervision and reproducibility loops at gobii.ai. You can’t just "trust" the model; you need an environment where it’s continuously evaluated, self-corrects, and coordinates with other agents (and humans) around shared state. Once you do that, it stops feeling like RNG and starts looking like an actual engineering workflow, distributed between humans and LLMs.
When I asked Claude "AI" to count the number of text file lines missing a given initial sub-string, it gave an improbably exaggerated result. When I challenged this, it replied "You are right! Let me try again this time without splitting long lines."
AI = Absent Intelligence.
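(For reference, the task described above is a couple of lines of ordinary code; the file name and prefix below are made up for the example.)

    # Count lines that do NOT start with a given prefix; "notes.txt" and "# " are example values.
    prefix = "# "
    with open("notes.txt", encoding="utf-8") as f:
        missing = sum(1 for line in f if not line.startswith(prefix))
    print(missing)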
I recommend you check out Andrej Karpathy’s 2 YouTube videos on how LLMs work (they are easy to find, but be forewarned they are long!). Once one digs in deeper it becomes clear why a model today might fail at the task you described.
Generally speaking, one of the behaviors I see in my day-to-day work leading engineers is that they often attempt to apply agentic coding tools to problems that don't really benefit from them.
First answer: "you're prompting it wrong." I've heard that a few times now about demented autocomplete.
In machine learning, boosting is a way to combine weak learners into a strong one. Perhaps something similar can be done with language models?
look up Mixture of Experts, e.g. Mixtral
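Mixture of Experts combines the "experts" inside the model itself. At the application layer, a cheap analogue is sampling several answers and taking a majority vote (often called self-consistency), which is closer to bagging than boosting but captures the same "combine weak answers" idea. A minimal sketch, with a hypothetical sample_answer() standing in for one model call at non-zero temperature:

    from collections import Counter

    def sample_answer(prompt: str) -> str:
        """One independent model call; placeholder for this sketch."""
        raise NotImplementedError

    def majority_vote(prompt: str, n: int = 5) -> str:
        """Ask n times and return the most common answer."""
        answers = [sample_answer(prompt).strip() for _ in range(n)]
        winner, count = Counter(answers).most_common(1)[0]
        print(f"{count}/{n} samples agreed")
        return winner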
Like OP in the link, I'm confused too. And I use LLMs for coding every day! With precise prompts, function signatures provided, only using it for problems I know are solved [by others] etc.
The problem in this case is probably that LLMs are bad with golang; I don't write Go, so I'm guessing from my experience with Kotlin. I mainly use Kotlin (REST APIs) and LLMs are often bad at writing it. They confuse, for example, mockk and mockito functions, and then agents spiral into a never-ending loop of guessing what's wrong and trying to fix it in five different ways. Instead I use only chat, validate every output, and point out the errors they introduce.
On the other hand, colleagues working with React and Next have a better experience with agents.
While I agree with the sentiment of not just letting it run free on the whole codebase and do what it wants, I still have good experience letting it do small tasks one at a time, guided by me. The coding ability of models has really improved over the last few months alone, and I seem to be cleaning up less and less AI-generated mess than I was five months ago.
It's got a lot to do with problem framing and the prompt, imo.
My guess is that the reason why AI works badly for some people is the same reason why a lot of people make bad managers / product owners / team leads. It's also the same reason why onboarding is atrocious in a lot of companies ("Here's your login, here's a link to the wiki that hasn't been updated since 2019, if you have any questions ask one of your very busy co-workers, they are happy to help").
You have to be very good at writing tasks while being fully aware of what the one executing them knows and doesn't know. What agents can infer about a project themselves is even more limited than their context, so it's up to you to provide it. Most of them have no, or very limited, "long-term" memory.
I've had good experiences with small projects using the latest models. But letting them sift through a company repo that has been worked on by multiple developers for years and has some arcane structures and sparse documentation - good luck with that. There aren't many simple instructions to be made there. The AI can still save you an hour or two of writing unit tests if they are easy to set up and really only need very few source files as context.
But just talking to some people makes it clear how difficult the concept of implicit context is. Sometimes it's like listening to a 4 year old telling you about their day. AI may actually be better at comprehending that sort of thing than I am.
One criticism I do have of AI in its current state is that it still doesn't ask questions often enough. One time I forgot to fill out the description of a task - but instead of seeing that as a mistake it just inferred what I wanted from the title and some other files and implemented it anyway. Correctly, too. In that sense it was the exact opposite of what OP was complaining about, but personally I'd rather have the AI assume that I'm fallible instead of confidently plowing ahead.
> what prompted this post? ... and they either omit one aspect of it or forget to update one part or the other. So it makes me wonder what the buzz of this agentic thing is really coming from
Because most of the time it does work? Especially when you learn how to prompt it clearly?
Yes it messes up sometimes. Just like people mess up sometimes. And it messes up in different ways from people.
I feel like I keep repeating this: just because a tool isn't perfect doesn't mean it isn't still valuable. Tools that work 90% of the time can still be a big help in the end, as long as it's easy to tell when they fail so you can try another way.
I have the exact same question: what is the hype all about when models can't do simple things? You prompt the model to generate one unit test for a function and it somehow always generates more than one (just to start with the most simple instruction).
I just feel that models are currently not up to speed with experienced engineers, for whom it takes less time to develop something than to instruct the model to do it. They are only useful for boring work.
This is not to say that these tools haven't created opportunities to build new stuff; it's just that the hype overestimates the usefulness of the tools so they can be sold better, just like everything else.
completing boring work is still very useful when a large proportion of people's day jobs is managing CRUD apps
i agree, these tools are useful. i only oppose the aggressive marketing that llms are the solution for everything. it is just a tool with its use cases, but to me it seems it is not optimal for the use cases it is advertised for.
i work on agentic systems and they can be good if the agent has a bite-sized chunk of work to do. the problem with coding agents is that for anything more complex you need to write a big prompt, which is sometimes counterproductive, and it seems to me the user in the cursor thread is pointing in that direction.
I love how the proposed solution is essentially gaslighting the model into thinking it's an expert programmer and then specifying and re-specifying the prompt until the solution is essentially inefficient pseudocode. Now we are in a world where amateur coders still cannot code or learn from their mistakes, while experts are essentially JIRA-ticket outsourcing specialists.
I don't think the models are dumb anymore; Codex with GPT-5 and Claude Code can design and build complex systems. The only thing is that these models work great on greenfield projects. A legacy project's design evolves over a number of years, and LLMs have a hard time understanding those unwritten design decisions.
FWIW all my coding with LLMs is very hands-on. What I've ended up doing is something like the following:
1. New conversation. Describe at a high level what change I want made. Point out the relevant files for the LLM to have context. Discuss the overall design with the LLM. At the end of that conversation, ask it to write out a summary (including relevant files to read for context next time) in an "epic" document in llm/epics/. This will almost always have several steps, listed in the document.
Then I review this and make sure it's in line with what I want.
2. New conversation. We're working on @llm/epics/that_epic.md. Please read the relevant files for context. We're going to start work on step N. Let me know if you have any questions; when you're ready, sketch out a detailed plan of implementation.
I may need to answer some questions or help it find more context; then it writes a plan. I review this plan and make sure it's in line with what I want.
3. New conversation. We're working on @llm/epics/that_epic.md. We're going to start implementing step N. Let me know if you have any questions; when you're ready, go ahead and start coding.
Monitor it to make sure it doesn't get stuck. Any time it starts to do something stupid or against the pattern of what I'd like -- from style, to hallucinating (or forgetting) a feature of some sub-package -- add something to the context files.
Repeat until the epic is done.
If this sounds like a lot of work, it is. As xkcd's "Uncomfortable Truths Well" said, "You will never find a programming language that frees you from the burden of clarifying your ideas." LLMs don't fundamentally change that dynamic. But they do often come up with clever solutions to problems; their "stupid questions" often help me realize how unclear my thinking is; they type a lot faster, and they look up documentation a lot faster too.
Sure, they make a bunch of frustrating mistakes when they're new to the project; but if every time they make a patterned mistake, you add that to your context somehow, eventually these will become fewer and fewer.
Go get some of that oil/diamonds/..., that's why.
2025 was the year when my fear of being replaced by an AI turned into fear of a big economic disaster caused by the AI bubble.
It's been a rollercoaster, and it's still not clear what's on the other side of the loop.
It feels to me that the OP on the forum expects this to work: "read this existing function, then read my mind and do stuff" (probably followed by "do better").
It still takes a lot of practice to get good at prompting, though.
Literally my manager
because they invested billions and now they have to justify it
i just got an aneurysm from reading the comments over there. are people having a stroke?
After so many months, Gemini pro still shits the bed after failing to update a file several times. I'd expect more from the culmination of human knowledge.
Management thinks a crutch can effectively replace people, at scale, in sensitive knowledge work. When that crutch starts making errors that cost those businesses millions, or billions, well, hopefully the management who implemented all that will get fired...
Yes, LLMs are useful, but they are even less trustworthy than real humans, and one needs actual people to verify their output. So when agents write 100K lines of code, they'll make mistakes, extremely subtle ones, and not the kind of mistake any human operator would make.
"You're using it wrong" if a user cannot use a tool intuitively, the tool is not fit for purpose.
The most powerful tools are usually renowned to have the most arcane user interfaces.
Xkcd's "Uncomfortable Truths Well" said, "You will never find a programming language that frees you from the burden of clarifying your ideas." LLMs don't fundamentally change that dynamic.
[1] https://xkcd.com/568/
VC mumbo jumbo; you can apply this same logic to literally all of programming
Because there is a lot of money tied up in AI now, in a way that doesn't just reek of a bubble waiting to implode but, even more, stinks of what used to be called "wash trading" [1]. And that's just the money side.
The "social kool-aid" side is even worse. A lot of very rich and very influential people have bet their career on AI - especially large companies who just outright fired staff to be replaced both by actual AI and "Actually Indians" [2] and are now putting insane pressure on their underlings and vendors to make something that at least looks on the surface like the promised AI dreams of getting rid of humans.
Both in combination explains why there is so much half-baked barely tested garbage (or to use the term du jour: slop) being pushed out and force fed to end users, despite clearly not being ready for prime time. And on top of that, the Pareto principle also works for AI - most of what's being pushed is now "good enough" for 80%, and everyone is trying to claim and sell that the missing 20% (that would require a lot of work and probably a fundamentally new architecture other than RNG-based LLMs) don't matter.
[1] https://www.bbc.com/news/articles/cz69qy760weo
[2] https://www.osnews.com/story/142488/ai-coding-chatbot-funded...
What I recently experienced when asking for a string manipulation routine that follows very arbitrary logic (for a long-existing file format): it forgets things like UTF string handling (in general, but also its subtle details, which required a second round); its own code for replacing special characters with escape sequences can be cut in half by the limited-width fields that are an input to the function; and it considers some aspects of the specification document while omitting others. It needs heavy supervision in the details and constant adjustments.
Yet it does the bulk of the work. It saves brain energy, which then goes into the edge cases. The overall time is the same; it's just that the result can become more robust in the end. Only with good supervision, though! (Which has a better chance when we are not worn out by the tedious heavy-lifting part.)
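The "escape sequence cut in half" failure is concrete enough to sketch. A minimal illustration (my own, assuming a simple backslash-escaping scheme, not the actual file format in question): truncating an escaped string to a fixed field width must not end in the middle of an escape pair.

    def escape(text: str) -> str:
        """Replace special characters with backslash escapes (assumed scheme)."""
        return text.replace("\\", "\\\\").replace("\n", "\\n").replace(";", "\\;")

    def fit_field(escaped: str, width: int) -> str:
        """Truncate to `width` without splitting a two-character escape pair."""
        cut = escaped[:width]
        # An odd number of trailing backslashes means we cut an escape in half.
        trailing = len(cut) - len(cut.rstrip("\\"))
        if trailing % 2 == 1:
            cut = cut[:-1]
        return cut

    print(fit_field(escape("a;b\nc"), 5))  # prints a\;b, not a dangling a\;b\

A naive escaped[:width] is exactly the kind of detail the model gets wrong on the first pass.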
But the one undebatable benefit is that the user gets to feel like the smartest person in the whole wide world, asking such "excellent questions", "knowing the topic like a pro", or being "fantastic at spotting such subtle details". Anyone feeling inadequate should use an agentic AI to boost their morale! (Well, only if they don't get nauseous from all that thick flattery.)
I am going to try and make it a habit to post this request on all LLM Coding questions -
Can we please make it a point to share the following information when we talk about experiences with code bots?
1) Language - gives us an idea if the language has a large corpus of examples or not
2) Project - what were you using it for?
3) Level of experience - neophyte coder? Dunning-Kruger uncertainty? Experience in managing other coders? Understanding of project implementation best practices?
From what I can tell/suspect, these 3 features are the likely sources of variation in outcomes.
I suspect level of experience is doing significant heavy lifting, because more experienced devs approach projects in a manner that avoids pitfalls from the get go.