+0.17 Where's the shovelware? Why AI coding claims don't add up (mikelovesrobots.substack.com)
770 points by dbalatero 175 days ago | 482 comments on HN | Mild positive Editorial · v3.7
Summary: Labor Rights & Fair Working Conditions Advocates
This editorial critiques widespread claims about AI coding productivity, arguing that evidence contradicts the 10x productivity gains promised by vendors and enthusiasts. The author presents personal testing data and industry metrics showing no corresponding surge in software output, arguing that false productivity narratives harm developers through unjustified layoffs, wage suppression, and workplace pressure. The content strongly advocates for worker skepticism, evidence-based decision-making, and protection of labor rights against exploitation.
Article Heatmap
Preamble: +0.12
Article 1: +0.12 — Freedom, Equality, Brotherhood
Article 2: +0.12 — Non-Discrimination
Article 3: No Data — Life, Liberty, Security
Article 4: +0.09 — No Slavery
Article 5: No Data — No Torture
Article 6: No Data — Legal Personhood
Article 7: +0.12 — Equality Before Law
Article 8: +0.12 — Right to Remedy
Article 9: No Data — No Arbitrary Detention
Article 10: No Data — Fair Hearing
Article 11: No Data — Presumption of Innocence
Article 12: -0.20 — Privacy
Article 13: No Data — Freedom of Movement
Article 14: No Data — Asylum
Article 15: No Data — Nationality
Article 16: No Data — Marriage & Family
Article 17: No Data — Property
Article 18: No Data — Freedom of Thought
Article 19: +0.54 — Freedom of Expression
Article 20: +0.09 — Assembly & Association
Article 21: +0.12 — Political Participation
Article 22: +0.12 — Social Security
Article 23: +0.50 — Work & Equal Pay
Article 24: +0.09 — Rest & Leisure
Article 25: +0.12 — Standard of Living
Article 26: +0.12 — Education
Article 27: No Data — Cultural Participation
Article 28: +0.09 — Social & International Order
Article 29: +0.09 — Duties to Community
Article 30: No Data — No Destruction of Rights
Aggregates
Weighted Mean +0.17 Unweighted Mean +0.14
Max +0.54 Article 19 Min -0.20 Article 12
Signal 17 No Data 14
Confidence 29% Volatility 0.16 (Medium)
Negative 1 Channels E: 0.6 S: 0.4
SETL +0.23 Editorial-dominant
FW Ratio 57% 27 facts · 20 inferences
Evidence: High: 2 Medium: 10 Low: 5 No Data: 14
Theme Radar
Foundation: 0.12 (3 articles)
Security: 0.09 (1 article)
Legal: 0.12 (2 articles)
Privacy & Movement: -0.20 (1 article)
Personal: 0.00 (0 articles)
Expression: 0.25 (3 articles)
Economic & Social: 0.21 (4 articles)
Cultural: 0.12 (1 article)
Order & Duties: 0.09 (2 articles)
HN Discussion 20 top-level · 30 replies
wrs 2025-09-03 21:57 UTC link
This makes some sense. We have CEOs saying they're not hiring developers because AI makes their existing ones 10X more productive. If that productivity enhancement was real, wouldn't they be trying to hire all the developers? If you're getting 10X the productivity for the same investment, wouldn't you pour cash into that engine like crazy?

Perhaps these graphs show that management is indeed so finely tuned that they've managed to apply the AI revolution to keep productivity exactly flat while reducing expenses.

bjackman 2025-09-03 22:03 UTC link
There is actually a lot of AI shovelware on Steam. Sort by newest releases and you'll see stuff like a developer releasing 10 puzzle games in one day.

I have the same experience as OP, I use AI every day including coding agents, I like it, it's useful. But it's not transformative to my core work.

I think this comes down to the type of work you're doing: most software engineering isn't in fields amenable to shovelware.

Most of us work either in areas where the coding is intensely brownfield (AI is great, but not doubling anyone's productivity) or in areas where the productivity bottlenecks are nowhere near the code.

com2kid 2025-09-03 22:04 UTC link
Multiple things can be true at the same time:

1. LLMs do not increase general developer productivity by 10x across the board for general purpose tasks selected at random.

2. LLMs dramatically increase productivity for a limited subset of tasks.

3. LLMs can be automated to do busy work and although they may take longer in terms of clock time than a human, the work is effectively done in the background.

LLMs can get me up to speed on new APIs and libraries far faster than I can myself, a gigantic speedup. If I need to write a small bit of glue code in a language I do not know, LLMs not only save me time, but they make it so I don't have to learn something that I'll likely never use again.

Fixing up existing large code bases? Productivity is at best a wash.

Setting up a scaffolding for a new website? LLMs are amazing at it.

Writing mocks for classes? LLMs know the details of using mock libraries really well and can get it done far faster than I can, especially since writing complex mocks is something I do a couple times a year and completely forget how to do in-between the rare times I am doing it.
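
For illustration, a minimal sketch of the kind of mock setup being described, using Python's unittest.mock as the example mock library; the class and function names here are invented, not from the comment:

```python
from unittest.mock import MagicMock

# Hypothetical service: a client whose network calls we want to stub out.
class WeatherClient:
    def fetch(self, city):
        raise RuntimeError("real network call")

def describe_weather(client, city):
    data = client.fetch(city)
    return f"{city}: {data['temp_c']}C"

# The fiddly part that's easy to forget between rare uses:
# spec-ing the mock and configuring its return value.
client = MagicMock(spec=WeatherClient)
client.fetch.return_value = {"temp_c": 21}

print(describe_weather(client, "Oslo"))  # Oslo: 21C
client.fetch.assert_called_once_with("Oslo")
```

The `spec=` argument keeps the mock honest: calling a method that doesn't exist on the real class raises an error instead of silently succeeding.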

Navigating a new code base? LLMs are ~70% great at this. If you've ever opened up an over-engineered WTF project, just finding where the HTTP routes are defined can be a problem. "Yo, Claude, where are the route endpoints in this project defined? Where do the dependency-injected functions for auth live?"

Right tool, right job. Stop using a hammer on nails.

kenjackson 2025-09-03 22:06 UTC link
Shovelware may not be a good way to track additional productivity.

That said, I'm skeptical that AI is as helpful for commercial software. It's been great at automating my workflow because I suck at shell scripting and AI is great at it. But for most of the code I write, I honestly halfway don't know what I'm going to write until I write it. The prompt itself is where my thinking goes - so the time savings would be fairly small. But I also think I'm fairly skilled (except at scripting).

captainkrtek 2025-09-03 22:06 UTC link
This tracks with my own experience as well. I've found it useful in some trivial ways (e.g. small refactors, type definitions from a schema, etc.), but so far, on tasks bigger than that, it misses things and requires rework. The future may make me eat my words, though.

On the other hand, I’ve lately seen it misused by less experienced engineers trying to implement bigger features who eagerly accept all it churns out as “good” without realizing the code it produced:

- doesn’t follow our existing style guide and patterns.

- implements some logic from scratch where there certainly is more than one suitable library, making this code we now own.

- is some behemoth of a PR trying to do all the things.

some-guy 2025-09-03 22:07 UTC link
These claims wouldn't matter if the topic weren't so deadly serious. Tech leaders everywhere are buying into the FOMO, convinced their competitors are getting massive gains they're missing out on. This drives them to rebrand as AI-First companies, justify layoffs with newfound productivity narratives, and lowball developer salaries under the assumption that AI has fundamentally changed the value equation.

This is my biggest problem right now. The types of problems I'm trying to solve at work require careful planning and execution, and AI has not been helpful for it in the slightest. My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company". The mass hysteria among SVPs and PMs is absolutely insane right now, I've never seen anything like it.

larve 2025-09-03 22:08 UTC link
In case the author is reading this, I have the receipts on how there's a real step function in how much software I build, especially lately. I am not going to put any number on it because that makes no sense, but I certainly push a lot of code that reasonably seems to work.

The reason it doesn't show up online is that I mostly write software for myself and for work, with the primary goal of making things better, not faster. More tooling, better infra, better logging, more prototyping, more experimentation, more exploration.

Here's my opensource work: https://github.com/orgs/go-go-golems/repositories . These are not just one-offs (although there's plenty of those in the vibes/ and go-go-labs/ repositories), but long-lived codebases / frameworks that are building upon each other and have gone through many many iterations.

jryio 2025-09-03 22:10 UTC link
I completely agree with the thesis here. I also have not seen a massive productivity boost with the use of AI.

I think that there will be neurological fatigue occurring whereby if software engineers are not actively practicing problem-solving, discernment, and translation into computer code - those skills will atrophy...

Yes, AI is not the 2x or 10x technology-of-the-future™ it was promised to be. It may be the case that any productivity boost is happening within existing private code bases. Even still, there should be a modest uptick in noticeably improved software deployment in the market, which does not appear to be there.

In my consulting practice I am seeing this phenomenon regularly, whereby new founders or stir-crazy CTOs push the use of AI and ultimately find that they're spending more time wrangling a spastic code base than they are building shared understanding and working together.

I have recently taken on advisory roles and retainers just to re-instill engineering best practices.

rglover 2025-09-03 22:11 UTC link
Most of it doesn't exist beyond videos of code spraying onto a screen alongside a claim that "juniors are dead."

I think the "why" for this is that the stakes are high. The economy is trembling. Tech jobs are evaporating. There's a high anxiety around AI being a savior, and so, a demi-religion is forming among the crowd that needs AI to be able to replace developers/competency.

That said: I personally have gotten impressive results with AI, but you still need to know what you're doing. Most people don't (beyond the beginner -> intermediate range), and so, it's no surprise that they're flooding social media with exaggerated claims.

If you didn't have a superpower before AI (writing code), then having that superpower as a perceived equalizer is something you will deploy all resources (material, psychological, etc.) to defend, ensuring that everyone else maintains the position that 1) the superpower is good, 2) the superpower cannot go away, and 3) the superpower being fallible should be ignored.

Like any other hype cycle, these people will flush out, the midpoint will be discovered, and we'll patiently await the next excuse to incinerate billions of dollars.

throwaway13337 2025-09-03 22:12 UTC link
Great angle to look at the releases of new software. I, too, thought we'd see a huge increase by now.

An alternative theory is that writing code was never the bottleneck of releasing software. The exploration of what it is you're building and getting it on a platform takes time and effort.

On the other hand, yeah, it's really easy to 'hold it wrong' with AI tools. Sometimes I have a great day and think I've figured it out. And then the next day, I realize that I'm still holding it wrong in some other way.

It is philosophically interesting that it is so hard to understand what makes building software products hard. And how to make it more productive. I can build software for 20 years and still feel like I don't really know.

benjiro 2025-09-03 22:17 UTC link
I need to agree with the author, with a caveat. He is a seasoned developer; for somebody like him, churning out good-quality code is probably easy.

Where I expect a lot of those feeling-fast reports to come from is people who may have less coding experience and who, with AI, are coding way above their level.

My brother-in-law asks for a nice product website; I just feed his business plan into an LLM, do some fine-tuning on the results, and have a good-looking website in an hour's time. If I did it myself manually, just take me behind a barn, as those jobs are so boring and take ages. But I know that website design is a weakness of mine.

That is the power of LLMs: turn out quick code, maybe offer some suggestion you did not think about, but... it also eats time! Making your prompts so that the LLM understands, waiting for the result... waiting... OK, now check the result: can you use it? Oh no, it did X, Y, Z wrong. Prompt again... and again. And this is where your productivity goes to die.

So when you compare a pool of developer feedback, you're going to get a broad "it helps a lot", "some", "it's worse than my code"... mixed in with the prompting, result delays, etc.

It gets even worse with agent / vibe coding, as you just tend to be waiting 5, 10 minutes for changes to be done. You need to review them, test them... oh no, the LLM screwed something up again. Oh no, it removed 50% of my code. Hey, where did my comments go? And we are back to a loss of time.

LLMs are a tool... but after a lot of working with them, my opinion is to use them when needed and not depend on them for everything. I sometimes stare in disbelief when people say they are coding so much with LLMs and spending 200 or more bucks per month.

They can be powerful tools, but I feel that some folks become over-dependent on them. And worst is my feeling that our juniors are going to be in a world of hurt if their skills are more LLM monkey coding (or vibe coding) than actually understanding how to code (and the knowledge behind the actual programming languages and systems).

searls 2025-09-03 23:05 UTC link
The answer is that we're making it right now. AI didn't speed me up at all until agents got good enough, which was April/May of this year.

Just today I built a shovelware CLI that exports iMessage archives into a standalone website export. Would have taken me weeks. I'll probably have it out as a homebrew formula in a day or two.

I'm working on an iOS app as well that's MUCH further along than it would be if I hand-rolled it, but I'm intentionally taking my time with it.

Anyway, the post's data mostly ends in March/April which is when generative AI started being useful for coding at all (and I've had Copilot enabled since Nov 2022)

NathanKP 2025-09-03 23:07 UTC link
I think the explanation is simple: there is a direct correlation between being too lazy and demotivated to write your own code, and being too lazy and demotivated to actually finish a project and publish your work online.

The same people who are willing to go through all the steps to release an application online are also willing to go through the extra effort of writing their own code. The code is actually the easy part compared to the rest of it... always has been.

stillsut 2025-09-03 23:39 UTC link
Got your shovelware right here...with receipts.

Background: I'm building a python package side project which allows you to encode/decode messages into LLM output.

Receipts: the tool I'm using creates a markdown that displays every prompt typed, and every solution generated, along with summaries of the code diffs. You can check it out here: https://github.com/sutt/innocuous/blob/master/docs/dev-summa...

Specific example: I actually used a leet-code-style memoization algorithm for branching. This would have taken a couple of days to implement by hand, but it took about 20 minutes to write the spec and 20 minutes to review and merge the generated solution. If you're curious, you can see the diff generated here: https://github.com/sutt/innocuous/commit/cdabc98
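
Not the project's actual code, but a minimal sketch of the kind of memoized branching search described, where a string can be split into overlapping codebook tokens and you count the branches; the codebook and names are invented for illustration:

```python
from functools import lru_cache

# Toy codebook with overlapping tokens, so splits genuinely branch.
CODEBOOK = ("a", "ab", "b", "ba")

@lru_cache(maxsize=None)
def num_encodings(s: str) -> int:
    """Count the ways s can be tokenized using CODEBOOK.

    Naively this recursion is exponential; memoizing on the remaining
    suffix (what lru_cache does here) makes it linear in distinct suffixes.
    """
    if not s:
        return 1  # one way to encode the empty string
    return sum(
        num_encodings(s[len(tok):])
        for tok in CODEBOOK
        if s.startswith(tok)
    )

print(num_encodings("abab"))  # 5 tokenizations: a|b|a|b, a|b|ab, a|ba|b, ab|a|b, ab|ab
```

The same shape (recurse on the remaining input, cache by suffix) applies whenever branching choices share subproblems.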

m-hodges 2025-09-04 00:08 UTC link
This article reminds me of two recent observations by Paul Krugman about the internet:

"So, here’s labor productivity growth over the 25 years following each date on the horizontal axis [...] See the great productivity boom that followed the rise of the internet? Neither do I. [...] Maybe the key point is that nobody is arguing that the internet has been useless; surely, it has contributed to economic growth. The argument instead is that its benefits weren’t exceptionally large compared with those of earlier, less glamorous technologies."¹

"On the second, history suggests that large economic effects from A.I. will take longer to materialize than many people currently seem to expect [...] And even while it lasted, productivity growth during the I.T. boom was no higher than it was during the generation-long boom after World War II, which was notable in the fact that it didn’t seem to be driven by any radically new technology [...] That’s not to say that artificial intelligence won’t have huge economic impacts. But history suggests that they won’t come quickly. ChatGPT and whatever follows are probably an economic story for the 2030s, not for the next few years."²

¹ https://www.nytimes.com/2023/04/04/opinion/internet-economy....

² https://www.nytimes.com/2023/03/31/opinion/ai-chatgpt-jobs-e...

InCom-0 2025-09-04 00:39 UTC link
On one hand, I don't understand what all the fuss is about. LLMs are great at all kinds of things: searching for (good) information, summarizing existing text, conceptual discussions where they point you in the right direction very quickly, etc. They are just not great (some might say harmful) at straight-up non-trivial code generation or design of complex systems, with the added peculiarity that on the surface the models seem almost capable of doing it, but never quite - which is sort of their central feature: producing text so that it seems correct from a statistical perspective, but without actual reasoning.

On the other hand, I do understand that the things LLMs are really great at are not actually all that spectacular to monetize... and so as a result we have all these snake-oil salesmen on every corner boasting about nonsensical vibe-coding achievements, because that's where the real money would be... if it were really true... but it is not.

raylad 2025-09-04 02:53 UTC link
I used to be a full-time developer back in the day. Then I was a manager. Then I was a CTO. I stopped doing the day-to-day development and even stopped micro-managing the detailed design.

When I tried to code again, I found I didn't really have the patience for it -- having to learn new frameworks, APIs, languages, tricky little details. I used to find it engrossing; it had become annoying.

But with tools like Claude Code and my knowledge about how software should be designed and how things should work, I am able to develop big systems again.

I'm not 20% more productive than I was. I'm not 10x more productive than I was either. I'm infinity times more productive because I wouldn't be doing it at all otherwise, realistically: I'd either hire someone to do it, or not do it, if it wasn't important enough to go through the trouble to hire someone.

Sure, if you are a great developer and spend all day coding and love it, these tools may just be a hindrance. But if you otherwise wouldn't do it at all they are the opposite of that.

solatic 2025-09-04 06:00 UTC link
I'm not sure what to make of these takes because so many people are using such an enormous variety of LLM tooling in such a variety of ways, people are going to get a variety of results.

Let's take the following scenario for the sake of argument: a codebase with well-defined AGENTS.md, referencing good architecture, roadmap, and product documentation, and with good test coverage, much of which was written by an LLM and lightly reviewed and edited by a human. Let's say for the sake of argument that the human is not enjoying 10x productivity despite all this scaffolding.

Is it still worthwhile to use LLM tooling? You know what, I think a lot of companies would say yes. There are way too many companies whose codebases lack testing and documentation, that are too difficult to on-board new engineers into, and that carry too much risk if the original engineers are lost. The simple fact that LLMs, to be effective, force the adoption of proper testing and documentation is a huge win for corporate software.
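
As a rough illustration (the file paths and commands below are invented, not a standard), the kind of AGENTS.md scaffolding being described might look like:

```markdown
# AGENTS.md (illustrative sketch)

## Architecture
- Service boundaries are documented in docs/architecture.md;
  do not add cross-service imports.

## Conventions
- Follow docs/style.md; run `make lint` before committing.

## Testing
- Every change needs unit tests; run `make test` locally.
- Integration tests live in tests/integration/ and run in CI only.
```

The point is less the exact contents than that the agent forces the team to write down architecture, conventions, and test commands that humans also benefit from.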

weweersdfsd 2025-09-04 09:16 UTC link
The problem with current GenAI is the same as with outsourcing to the lowest bidder in India or wherever. For any non-trivial project you'll get something that may appear to work, but for anything production-ready you'll most likely spend lots of time testing, verifying, cleaning up the code, and making changes to things the AI didn't catch. Then there's requirement gathering, discussing with stakeholders, gathering more feedback and so on, and debugging when things fail in production...

I believe it's a productivity boost, but only to a small part of my job. The boost would be larger if I only had to build proof-of-concepts or hobby projects that don't need to be reliable in prod and don't require feedback and requirements from many other people.

iainctduncan 2025-09-04 16:58 UTC link
This reminds me of something... I'm a jazz musician when not being a coder, and have studied and taught from/to a lot of players. One thing advanced improvisors notice is that the student is very frequently not a good judge – in the moment – of what is making them better. Doing long term analytics tests (as the author did) works, but knowing how well something is working while you're doing it? not so much. Very, very frequently that which feels productive isn't, and that which feels painful and slow is.

Just spit balling here, but it sure feels similar.

trenchpilgrim 2025-09-03 22:14 UTC link
Same. On many days 90% of my code output by lines is Claude generated and things that took me a day now take well under an hour.

Also, a good chunk of my personal OSS projects are AI assisted. You probably can't tell from looking at them, because I have strict style guides that suppress the "AI style", and I don't really talk about how I use AI in the READMEs. Do you also expect I mention that I used Intellisense and syntax highlighting too?

nicce 2025-09-03 22:16 UTC link
> implements some logic from scratch where there certainly is more than one suitable library, making this code we now own - is some behemoth of a PR trying to do all the things

Depending on the amount of code, I see this only as positive? Too often people pull huge libraries for 50 lines of code.

rglover 2025-09-03 22:19 UTC link
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company".

Lord, forgive them, they know not what they do.

fennecbutt 2025-09-03 22:20 UTC link
I mean, the truth should be fairly obvious to people, given that a lot of the talk around AI rings very much like IFLScience/mainstream-media "science" articles, which always make some outrageous "right around the corner" claim based on some small tidbit from a paper they only skimmed the abstract of.

fennecbutt 2025-09-03 22:24 UTC link
Granted, _discovery_ of such things is something I'm still trying to solve at my own job, and potentially LLMs can at least be leveraged to analyse and search code(bases) rather than just write code.

It's difficult because you need team members to be able to work quite independently but knowledge of internal libraries can get so siloed.

balder1991 2025-09-03 22:28 UTC link
Also, when you create a product you can't speed up the iterative process of seeing how users want it, fixing edge cases you only realize later, etc. These are the things that make a product good, and why there's that article about software taking 10 years to mature: https://www.joelonsoftware.com/2001/07/21/good-software-take...
quantumcotton 2025-09-03 22:36 UTC link
Today you will learn what diminishing returns are :)

You can only utilize so many people or so much action within a business or idea.

Essentially it's throwing more stupid at a problem.

The reason there are so many layoffs is AI creating efficiency. The thing people don't realize is that it's not that one AI robot or GPU is going to replace one human at a one-to-one ratio; it's going to replace the amount of workload one person can do, which in turn gets rid of one human employee. It's not that your job isn't taken by AI - it's started. But how much human is needed is where the new supply-demand balance lies, and how long the job lasts. There will always be more need for more creative minds. The issue is we are lacking them.

It's incredible how many software engineers I see walking around without jobs. Looking for a job making $100,000 to $200,000 a year. Meanwhile, they have no idea how much money they could save a business. Their creativity was killed by school.

They are relying on somebody to tell them what to do and when nobody's around to tell anybody what to do. They all get stuck. What you are seeing isn't a lack of capability. It's a lack of ability to control direction or create an idea worth following.

moduspol 2025-09-03 22:49 UTC link
A lot of these C-suite people also expect the remaining ones to be replaced by AI. They subscribe to the hockey-stick "AGI is around the corner" narrative.

I don't, but at least it is somewhat logical. If you truly believe that, you wouldn't necessarily want to hire more developers.

Nextgrid 2025-09-03 22:53 UTC link
This is the answer. Programming was never the bottleneck in delivering software, whether free-range, organic, grass-fed human-generated code or AI-assisted.

AI is just a convenient excuse to lay off many rounds of over-hiring while also keeping the door open for potential investors to throw more money into the incinerator since the company is now “AI-first”.

lumost 2025-09-03 22:58 UTC link
The experience in greenfield development is very different. In the early days of a project, the LLM's opinion is about as good as that of the individuals starting the project. The coding standards and other items have not yet been established, and buggy / half-nonsense code still means the project is demoable. Being able to explore 5 projects to demo status instead of 1 is a major boost.
heavyset_go 2025-09-03 22:59 UTC link
> LLMs can get me up to speed on new APIs and libraries far faster than I can myself, a gigantic speedup. If I need to write a small bit of glue code in a language I do not know, LLMs not only save me time, but they make it so I don't have to learn something that I'll likely never use again.

I wax and wane on this one.

I've had the same feelings, but too often I've peeked behind the curtain, read the docs, and gotten familiar with the external dependencies, only to realize that whatever the LLM responded with paradoxically either wasn't following convention, tried to shoehorn my problem to fit code examples found online, used features inappropriately, or took a long roundabout path to do something that can be done simply.

It can feel like magic until you look too closely at it, and I worry that it'll make me complacent with the feeling of understanding without actually taking away an understanding.

SchemaLoad 2025-09-03 23:08 UTC link
At least in my experience, it excels in blank canvas projects. Where you've got nothing and want something pretty basic. The tools can probably set up a fresh React project faster than me. But at least every time I've tried them on an actual work repo they get reduced to almost useless.

Which is why they generate so much hype. They are perfect for tech demos, then management wonders why they aren't seeing results in the real world.

heavyset_go 2025-09-03 23:13 UTC link
> I think that there will be neurological fatigue occurring whereby if software engineers are not actively practicing problem-solving, discernment, and translation into computer code - those skills will atrophy...

I've found this to be the case with most (if not all) skills, even riding a bike. Sure, you don't forget how to ride it, but your ability to expertly articulate with the bike in a synergistic and tool-like way atrophies.

If that's the case with engineering, and I believe it to be, it should serve as a real warning.

Seattle3503 2025-09-03 23:16 UTC link
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company".

If we can delegate incident response to automated LLMs too, sure, why not. Let the CEO have his way and pay the reputational price. When it doesn't work, we can revert our git repos to the day LLMs didn't write all the code.

I'm only being 90% facetious.

nerevarthelame 2025-09-03 23:18 UTC link
How are you sure it's increasing your productivity if it "makes no sense" to even quantify that? What are the receipts you have?
heavyset_go 2025-09-03 23:20 UTC link
As the rate of profit drops, value needs to be squeezed out of somewhere and that will come from the hiring/firing and compensation of labor, hence a strong bias towards that outcome.

99% of the draw of AI is cutting labor costs, and hiring goes against that.

That said, I don't believe AI productivity claims, just pointing out a factor that could theoretically contribute to your hypothetical.

anp 2025-09-03 23:30 UTC link
FWIW this closely matches my experience. I’m pretty late to the AI hype train but my opinion changed specifically because of using combinations of models & tools that released right before the cut off date for the data here. My impression from friends is that it’s taken even longer for many companies to decide they’re OK with these tools being used at all, so I would expect a lot of hysteresis on outputs from that kind of adoption.

That said I’ve had similar misgivings about the METR study and I’m eager for there to be more aggregate study of the productivity outcomes.

ksenzee 2025-09-03 23:32 UTC link
> Stop using a hammer on nails.

sorry, what am I supposed to use on nails?

mvdtnz 2025-09-03 23:33 UTC link
> LLMs can be automated to do busy work and although they may take longer in terms of clock time than a human, the work is effectively done in the background.

What is this supposed busy work that can be done in the background unsupervised?

I think it's about time for the AI pushers to be absolutely clear about the actual specific tasks they are having success with. We're all getting a bit tired of the vagueness and hand waving.

mvdtnz 2025-09-03 23:36 UTC link
> AI didn't speed me up at all until agents got good enough, which was April/May of this year.

That was 5 months ago, which is 6 years in 10x time.

sumeno 2025-09-03 23:38 UTC link
It's amazing how whenever criticisms pop up, the responses for the last 3 years have been "well you aren't using <insert latest>, it's finally good!"

Noumenon72 2025-09-03 23:45 UTC link
You should have used the word "steganography" in this description like you did in your readme; it makes what it does 100% clearer.

noidesto 2025-09-04 00:22 UTC link
Agree. In the hands of a seasoned dev not only does productivity improve but the quality of outputs.

If I'm working against a deadline, I feel more comfortable spending time on research and design, knowing I can spend less time on implementation. In the end it took the same amount of time, though hopefully with an increase in reliability, observability, and extensibility. None of these things show up in the author's faulty dataset and experiment.

abathologist 2025-09-04 01:40 UTC link
My theory is that the digital revolution has mostly cancelled out potential productivity gains with its introduction of productivity sinks: the technology has tended to encourage less rigorous thinking, more distraction, and more complexity; and even if you can do task T X times faster, most people are spending X * Y more time being distracted, overwhelmed, or just reflexively pushing buttons.

The ways AI is being used now will make this a lot worse on every front.

bwfan123 2025-09-04 01:57 UTC link
> writing code was never the bottleneck

This is an insightful observation.

When working on anything, I am asked: what is the smallest "hard" problem this is solving? I.e., in software, value is added by solving "hard" problems, not by solving easy problems. Another way to put it: hard problems are those that are not "templated", i.e., already solved elsewhere and only needing to be copied.

LLMs are allowing the easy problems to be solved faster. But the real bottleneck is in solving the hard problems, and problems can be "hard" for technical reasons, business reasons, or customer-adoption reasons. Hard problems are where the value lies, particularly when everyone has access to this tool and everyone can equally well create or copy something using it.

In my experience, LLMs have not yet made a dent in solving the hard problems because they don't really have a theory of how something actually works. On the other hand, they have really helped boost productivity for tasks that are templated.

rootusrootus 2025-09-04 01:59 UTC link
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate

That's insane. Who the hell pulls a number out of their ass and declares it the new reality? When it doesn't happen, he'll pin the blame on you, but everyone else above will pin the blame on him. He's the one who will get fired.

Laying off unnecessary developers is the answer if LLMs turn out to make us all so much more productive (assuming we don't just increase the amount of software written instead). But that happens after successful implementation of LLMs into the development process, not in advance.

Starting to think I should do the inadvisable and move my investments far far away from the S&P 500 and into something that will survive the hype crash that can't be too far off now.

dmonitor 2025-09-04 02:31 UTC link
By "knowing what you're doing" do you mean "have enough experience to do it by hand", "have experience with a specific AI tool and its limitations", or a combination?
ferrous69 2025-09-04 02:59 UTC link
my grand theory on AI coding tools is that they don't really save time, but they massively save on annoyance. I can save my frustration budget for useful things instead of fiddling with syntax, compiler messages, or repetitive tasks; oftentimes this means I'll take on a task I would otherwise find too frustrating in an already frustrating world, or stay at my desk longer before needing to take a walk or ditch the office for the bar.
kobe_bryant 2025-09-04 03:23 UTC link
wow, not just one but multiple big systems? well, share the details with us
mildweed 2025-09-04 03:54 UTC link
Interested in this Homebrew. Share when ready?
Editorial Channel
What the content says
+0.70
Article 19 Freedom of Expression
High Advocacy
Editorial
+0.70
SETL
+0.53

Strong exercise of free expression and opinion: presents evidence-based critique counter to mainstream industry narratives with confrontational language and specific demands for accountability

+0.70
Article 23 Work & Equal Pay
High Advocacy
Editorial
+0.70
SETL
+0.59

Core argument of article: defends workers' right to fair wages and working conditions against exploitation through false productivity narratives used to justify layoffs and salary suppression; advocates for evidence-based assessment of work conditions

+0.20
Preamble Preamble
Medium Advocacy
Editorial
+0.20
SETL
+0.20

Content advocates implicitly for 'freedom, justice and peace' through defense of workers against exploitative labor practices and false narratives

+0.20
Article 1 Freedom, Equality, Brotherhood
Medium Advocacy
Editorial
+0.20
SETL
+0.20

Defends developers' equal standing and professional dignity against devaluing narratives that suppress wages and justify unjust treatment

+0.20
Article 2 Non-Discrimination
Medium Advocacy
Editorial
+0.20
SETL
+0.20

Critiques discriminatory employment practices (layoffs, wage suppression) based on unverified productivity claims

+0.20
Article 7 Equality Before Law
Medium Advocacy
Editorial
+0.20
SETL
+0.20

Advocates for equal legal protection by demanding evidence-based decision-making and opposing arbitrary management actions

+0.20
Article 8 Right to Remedy
Medium Advocacy
Editorial
+0.20
SETL
+0.20

Provides workers with data and evidence to challenge false productivity claims, empowering remedy and recourse

+0.20
Article 21 Political Participation
Medium Advocacy
Editorial
+0.20
SETL
+0.20

Encourages workers to participate in workplace governance by challenging management claims with evidence and demanding accountability

+0.20
Article 22 Social Security
Medium Advocacy
Editorial
+0.20
SETL
+0.20

Critiques threats to job security: 'People are being fired because they're not adopting these tools fast enough' and 'People are sitting in jobs they don't like because they're afraid'

+0.20
Article 25 Standard of Living
Medium Advocacy
Editorial
+0.20
SETL
+0.20

Addresses adequate standard of living through critique of salary suppression based on false productivity claims

+0.20
Article 26 Education
Medium Advocacy
Editorial
+0.20
SETL
+0.20

Critiques lack of adequate education and training: 'Experiment and figure it out yourself is the common advice... the official prompting guides are apparently not worth paying attention to because they don't work'

+0.15
Article 4 No Slavery
Low Advocacy
Editorial
+0.15
SETL
+0.15

Critiques coercive pressure to adopt tools without training as exploitative labor practice

+0.15
Article 20 Assembly & Association
Low Advocacy
Editorial
+0.15
SETL
+0.15

Implicitly supports collective thinking among developers through calls for peer-to-peer discussion and shared skepticism

+0.15
Article 24 Rest & Leisure
Low Advocacy
Editorial
+0.15
SETL
+0.15

Touches on work-life balance by mentioning workers 'spending all this time trying to get good at prompting and feeling bad because they're failing'

+0.15
Article 28 Social & International Order
Low Advocacy
Editorial
+0.15
SETL
+0.15

Advocates for truthfulness and just social order in tech industry through demands for evidence and accountability

+0.15
Article 29 Duties to Community
Low Advocacy
Editorial
+0.15
SETL
+0.15

Suggests duties of tech leaders to be truthful and provide evidence; critiques failure to do so

ND
Article 3 Life, Liberty, Security

No discussion of right to life, liberty, or personal security

ND
Article 5 No Torture

No discussion of torture or cruel treatment

ND
Article 6 Legal Personhood

No discussion of legal personality

ND
Article 9 No Arbitrary Detention

No discussion of arbitrary arrest

ND
Article 10 Fair Hearing

No discussion of fair trial rights

ND
Article 11 Presumption of Innocence

No discussion of presumption of innocence

ND
Article 12 Privacy
Medium Practice

No explicit privacy content

ND
Article 13 Freedom of Movement

No discussion of freedom of movement

ND
Article 14 Asylum

No discussion of asylum or international protection

ND
Article 15 Nationality

No discussion of nationality

ND
Article 16 Marriage & Family

No discussion of family or marriage

ND
Article 17 Property

No discussion of property rights

ND
Article 18 Freedom of Thought

No discussion of conscience or religion

ND
Article 27 Cultural Participation

No discussion of cultural or artistic participation

ND
Article 30 No Destruction of Rights

No discussion of interpretation or permissible limitations

Structural Channel
What the site does
+0.30
Article 19 Freedom of Expression
High Advocacy
Structural
+0.30
Context Modifier
ND
SETL
+0.53

Substack platform enables unrestricted publication and reader engagement (comments, shares) without apparent censorship; freemium model allows free access to critical content

+0.20
Article 23 Work & Equal Pay
High Advocacy
Structural
+0.20
Context Modifier
ND
SETL
+0.59

Platform provides free distribution of labor advocacy; business model does not restrict worker organizing discourse

0.00
Preamble Preamble
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20

Platform enables speech but introduces no structural rights support specific to preamble principles

0.00
Article 1 Freedom, Equality, Brotherhood
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20

Platform provides neutral access; no structural support or erosion specific to equality principle

0.00
Article 2 Non-Discrimination
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20

No structural discrimination embedded in platform; neutral infrastructure

0.00
Article 4 No Slavery
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15

No structural forced labor elements

0.00
Article 7 Equality Before Law
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20

Platform provides neutral access to advocacy

0.00
Article 8 Right to Remedy
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20

Platform distributes information enabling worker remedy

0.00
Article 20 Assembly & Association
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15

Comment section enables assembly and discussion

0.00
Article 21 Political Participation
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20

Platform enables democratic discourse

0.00
Article 22 Social Security
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20

No structural job security elements

0.00
Article 24 Rest & Leisure
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15

No structural rest/leisure elements

0.00
Article 25 Standard of Living
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20

No structural economic support

0.00
Article 26 Education
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20

Article distributed free via platform supports information access

0.00
Article 28 Social & International Order
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15

No structural social order elements

0.00
Article 29 Duties to Community
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15

No structural duties enforcement

-0.20
Article 12 Privacy
Medium Practice
Structural
-0.20
Context Modifier
ND
SETL
ND

Platform collects user data via tracking (Datadog, Sentry visible in page code); privacy monitoring enabled by default

ND
Article 3 Life, Liberty, Security

Not applicable

ND
Article 5 No Torture

Not applicable

ND
Article 6 Legal Personhood

Not applicable

ND
Article 9 No Arbitrary Detention

Not applicable

ND
Article 10 Fair Hearing

Not applicable

ND
Article 11 Presumption of Innocence

Not applicable

ND
Article 13 Freedom of Movement

Not applicable

ND
Article 14 Asylum

Not applicable

ND
Article 15 Nationality

Not applicable

ND
Article 16 Marriage & Family

Not applicable

ND
Article 17 Property

Not applicable

ND
Article 18 Freedom of Thought

Not applicable

ND
Article 27 Cultural Participation

Not applicable

ND
Article 30 No Destruction of Rights

Not applicable

Supplementary Signals
Epistemic Quality
0.63
Propaganda Flags
4 techniques detected
loaded language
'This whole thing is bullshit'; 'Shut up'; 'some Reddit dork'; company slogans presented sarcastically
appeal to fear
'People are being fired because they're not adopting these tools fast enough'; 'if they go somewhere else it'll be worse'; 'if you don't adopt it early, you'll be left behind'
strawman
Author presents weak counter-arguments ('Well, if you just learned how to prompt properly...') and then dismisses them without serious engagement
causal oversimplification
'No shovelware surge = AI coding doesn't work' ignores alternative explanations (quality improvements, internal tools, different deployment patterns)
Solution Orientation
No data
Emotional Tone
No data
Stakeholder Voice
No data
Temporal Framing
No data
Geographic Scope
No data
Complexity
No data
Transparency
No data
Event Timeline 20 events
2026-02-26 12:20 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 12:18 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 12:17 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 12:16 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 10:13 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:11 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:11 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:10 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:08 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:06 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:03 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:03 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:02 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:02 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:02 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:02 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:01 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:01 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 10:00 dlq Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up - -
2026-02-26 09:59 credit_exhausted Credit balance too low, retrying in 321s - -
About HRCB | By Right | HN Guidelines | HN FAQ | Source | UDHR | RSS
build 1686d6e+53hr · deployed 2026-02-26 10:15 UTC · evaluated 2026-02-26 12:13:57 UTC