Summary
This editorial critiques widespread claims about AI coding productivity, arguing that evidence contradicts the 10x productivity gains promised by vendors and enthusiasts. The author presents personal testing data and industry metrics showing no corresponding surge in software output, arguing that false productivity narratives harm developers through unjustified layoffs, wage suppression, and workplace pressure. The content strongly advocates for worker skepticism, evidence-based decision-making, and protection of labor rights against exploitation.
This makes some sense. We have CEOs saying they're not hiring developers because AI makes their existing ones 10x more productive. If that productivity enhancement were real, wouldn't they be trying to hire all the developers? If you're getting 10x the productivity for the same investment, wouldn't you pour cash into that engine like crazy?
Perhaps these graphs show that management is indeed so finely tuned that they've managed to apply the AI revolution to keep productivity exactly flat while reducing expenses.
There is actually a lot of AI shovelware on Steam. Sort by newest releases and you'll see stuff like a developer releasing 10 puzzle games in one day.
I have the same experience as OP, I use AI every day including coding agents, I like it, it's useful. But it's not transformative to my core work.
I think this comes down to the type of work you're doing. The issue is that most software engineering isn't in fields amenable to shovelware.
Most of us work either in areas where the coding is intensely brownfield (AI is great, but it's not doubling anyone's productivity) or in areas where the productivity bottlenecks are nowhere near the code.
1. LLMs do not increase general developer productivity by 10x across the board for general purpose tasks selected at random.
2. LLMs dramatically increase productivity for a limited subset of tasks.
3. LLMs can be automated to do busy work and although they may take longer in terms of clock time than a human, the work is effectively done in the background.
LLMs can get me up to speed on new APIs and libraries far faster than I can myself, a gigantic speedup. If I need to write a small bit of glue code in a language I do not know, LLMs not only save me time, but they make it so I don't have to learn something that I'll likely never use again.
Fixing up existing large code bases? Productivity is at best a wash.
Setting up a scaffolding for a new website? LLMs are amazing at it.
Writing mocks for classes? LLMs know the details of using mock libraries really well and can get it done far faster than I can, especially since writing complex mocks is something I do a couple times a year and completely forget how to do in-between the rare times I am doing it.
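For what it's worth, the kind of mock boilerplate meant here looks roughly like this. A minimal Python sketch using the standard `unittest.mock` library; `PaymentClient` and `checkout` are made-up names for illustration, not from any real project:

```python
from unittest.mock import MagicMock

class PaymentClient:
    """Stand-in for a class that talks to the network in production."""
    def charge(self, amount):
        raise RuntimeError("real implementation hits a payment API")

def checkout(client, amount):
    receipt = client.charge(amount)
    return {"ok": True, "receipt": receipt}

def test_checkout_charges_once():
    # spec= keeps the mock honest: only real PaymentClient attributes exist
    client = MagicMock(spec=PaymentClient)
    client.charge.return_value = "r-123"

    result = checkout(client, 42)

    client.charge.assert_called_once_with(42)
    assert result == {"ok": True, "receipt": "r-123"}
```

It's exactly this kind of fiddly, rarely-exercised API (`spec=`, `return_value`, `assert_called_once_with`) that is easy to forget between the rare occasions you need it.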
Navigating a new code base? LLMs are ~70% great at this. If you've ever opened up an over-engineered WTF project, just finding where the HTTP routes are defined can be a problem. "Yo, Claude, where are the route endpoints in this project defined? Where do the dependency-injected functions for auth live?"
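Under the hood, that kind of question often reduces to a pattern search over the repo. A rough Python sketch; the regex covers a few common frameworks and is purely illustrative:

```python
import os
import re

# Common route-registration patterns (FastAPI/Flask decorators,
# Flask's add_url_rule, Express-style router calls). Illustrative only.
ROUTE_PATTERNS = re.compile(
    r'@app\.(get|post|put|delete)|'
    r'\.route\(|add_url_rule\(|'
    r'router\.(get|post)\('
)

def find_route_definitions(root):
    """Walk the tree and return (path, line number, line) for each match."""
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(('.py', '.js', '.ts')):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors='ignore') as f:
                for lineno, line in enumerate(f, 1):
                    if ROUTE_PATTERNS.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits
```

The LLM's value-add over plain grep is knowing which patterns to look for in an unfamiliar framework, and summarizing the hits.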
Right tool, right job. Stop using a hammer on screws.
Shovelware may not be a good way to track additional productivity.
That said, I'm skeptical that AI is as helpful for commercial software. It's been great at automating my workflow, because I suck at shell scripting and AI is great at it. But for most of the code I write, I honestly half don't know what I'm going to write until I write it. The prompt itself is where my thinking goes, so the time savings would be fairly small. I also think I'm fairly skilled (except at scripting).
This tracks with my own experience as well. I've found it useful in some trivial ways (e.g., small refactors, type definitions from a schema, etc.), but so far, on tasks bigger than that, it misses things and requires rework. The future may make me eat my words, though.
On the other hand, I’ve lately seen it misused by less experienced engineers trying to implement bigger features who eagerly accept all it churns out as “good” without realizing the code it produced:
- doesn’t follow our existing style guide and patterns.
- implements some logic from scratch where there certainly is more than one suitable library, making this code we now own.
- is some behemoth of a PR trying to do all the things.
These claims wouldn't matter if the topic weren't so deadly serious. Tech leaders everywhere are buying into the FOMO, convinced their competitors are getting massive gains they're missing out on. This drives them to rebrand as AI-First companies, justify layoffs with newfound productivity narratives, and lowball developer salaries under the assumption that AI has fundamentally changed the value equation.
This is my biggest problem right now. The types of problems I'm trying to solve at work require careful planning and execution, and AI has not been helpful for it in the slightest. My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company". The mass hysteria among SVPs and PMs is absolutely insane right now, I've never seen anything like it.
In case the author is reading this, I have the receipts on how there's a real step function in how much software I build, especially lately. I am not going to put any number on it because that makes no sense, but I certainly push a lot of code that reasonably seems to work.
The reason it doesn't show up online is that I mostly write software for myself and for work, with the primary goal of making things better, not faster. More tooling, better infra, better logging, more prototyping, more experimentation, more exploration.
Here's my opensource work: https://github.com/orgs/go-go-golems/repositories . These are not just one-offs (although there's plenty of those in the vibes/ and go-go-labs/ repositories), but long-lived codebases / frameworks that are building upon each other and have gone through many many iterations.
I completely agree with the thesis here. I also have not seen a massive productivity boost with the use of AI.
I think that there will be neurological fatigue occurring whereby if software engineers are not actively practicing problem-solving, discernment, and translation into computer code - those skills will atrophy...
Yes, AI is not the 2x or 10x technology-of-the-future™ it was promised to be. It may be the case that any productivity boost is happening within existing private code bases. Even still, there should be a modest, noticeable uptick in improved software deployment in the market, which does not appear to be there.
In my consulting practice I am seeing this phenomenon regularly, whereby new founders or stir-crazy CTOs push the use of AI and ultimately find that they're spending more time wrangling a spastic code base than they are building shared understanding and working together.
I have recently taken on advisory roles and retainers just to reinstill engineering best practices.
Most of it doesn't exist beyond videos of code spraying onto a screen alongside a claim that "juniors are dead."
I think the "why" for this is that the stakes are high. The economy is trembling. Tech jobs are evaporating. There's a high anxiety around AI being a savior, and so, a demi-religion is forming among the crowd that needs AI to be able to replace developers/competency.
That said: I personally have gotten impressive results with AI, but you still need to know what you're doing. Most people don't (beyond the beginner -> intermediate range), and so, it's no surprise that they're flooding social media with exaggerated claims.
If you didn't have a superpower before AI (writing code), then once AI hands you that superpower as a perceived equalizer, you will deploy every resource (material, psychological, etc.) to ensure that everyone else maintains the position that 1) the superpower is good, 2) the superpower cannot go away, and 3) the superpower's fallibility should be ignored.
Like any other hype cycle, these people will flush out, the midpoint will be discovered, and we'll patiently await the next excuse to incinerate billions of dollars.
Great angle to look at the releases of new software. I, too, thought we'd see a huge increase by now.
An alternative theory is that writing code was never the bottleneck of releasing software. The exploration of what it is you're building and getting it on a platform takes time and effort.
On the other hand, yeah, it's really easy to 'hold it wrong' with AI tools. Sometimes I have a great day and think I've figured it out. And then the next day, I realize that I'm still holding it wrong in some other way.
It is philosophically interesting that it is so hard to understand what makes building software products hard. And how to make it more productive. I can build software for 20 years and still feel like I don't really know.
I need to agree with the author, with a caveat: he is a seasoned developer. For somebody like him, churning out good-quality code is probably easy.
Where I expect a lot of those feelings of being fast to come from is people who may have less coding experience and who, with AI, are coding way above their level.
My brother-in-law asks for a nice product website; I just feed his business plan into an LLM, do some fine-tuning on the results, and have a good-looking website in an hour. If I had to do it manually, just take me behind the barn, as those jobs are so boring and take ages. But I know that website design is a weakness of mine.
That is the power of LLMs: turning out quick code, maybe offering a suggestion you did not think of, but ... it also eats time! Crafting your prompts so that the LLM understands, waiting for the result ... waiting ... OK, now check the result. Can you use it? Oh no, it did X, Y, Z wrong. Prompt again ... and again. And this is where your productivity goes to die.
So when you compare a pool of developer feedback, you're going to get a broad mix of "it helps a lot", "some", "it's worse than my code", ... mixed in with the prompting, result delays, etc.
It gets even worse with agent/vibe coding, as you just tend to be waiting 5 or 10 minutes for changes to be done. You need to review them, test them ... oh no, the LLM screwed something up again. Oh no, it removed 50% of my code. Hey, where did my comments go? And we are back to a loss of time.
LLMs are a tool... But after a lot of working with them, my opinion is to use them when needed but not to depend on them for everything. I sometimes stare wide-eyed when people say they are coding so much with LLMs and spending 200 or more bucks per month.
They can be powerful tools, but I feel that some folks become over-dependent on them. And worst is my feeling that our juniors are going to be in a world of hurt if their skills are more LLM monkey coding (or vibe coding) than actually understanding how to code (and the knowledge behind the actual programming languages and systems).
The answer is that we're making it right now. AI didn't speed me up at all until agents got good enough, which was April/May of this year.
Just today I built a shovelware CLI that exports iMessage archives into a standalone website export. Would have taken me weeks. I'll probably have it out as a homebrew formula in a day or two.
I'm working on an iOS app as well that's MUCH further along than it would be if I hand-rolled it, but I'm intentionally taking my time with it.
Anyway, the post's data mostly ends in March/April which is when generative AI started being useful for coding at all (and I've had Copilot enabled since Nov 2022)
I think the explanation is simple: there is a direct correlation between being too lazy and demotivated to write your own code, and being too lazy and demotivated to actually finish a project and publish your work online.
The same people who are willing to go through all the steps to release an application online are also willing to go through the extra effort of writing their own code. The code is actually the easy part compared to the rest of it... always has been.
Specific example: I used a leet-code-style algorithmic implementation of memoization for branching. This would have taken a couple of days to implement by hand, but it took about 20 minutes to write the spec and 20 minutes to review and merge the generated solution. If you're curious, you can see the generated diff here: https://github.com/sutt/innocuous/commit/cdabc98
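The actual code is in the linked commit; as a generic illustration of the pattern, memoizing a branching recursion in Python takes only a few lines (lattice-path counting here is just a stand-in example, not the commit's code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_paths(rows, cols):
    """Number of monotone lattice paths from (0, 0) to (rows, cols).

    Without the cache this recursion branches exponentially; with it,
    each (rows, cols) pair is computed exactly once.
    """
    if rows == 0 or cols == 0:
        return 1
    return count_paths(rows - 1, cols) + count_paths(rows, cols - 1)
```

The hand-rolled version (an explicit dict keyed on the branch state) is what tends to eat the "couple of days"; the generated kind of solution mostly has to get the cache key right.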
This article reminds me of two recent observations by Paul Krugman about the internet:
"So, here’s labor productivity growth over the 25 years following each date on the horizontal axis [...] See the great productivity boom that followed the rise of the internet? Neither do I. [...] Maybe the key point is that nobody is arguing that the internet has been useless; surely, it has contributed to economic growth. The argument instead is that its benefits weren’t exceptionally large compared with those of earlier, less glamorous technologies."¹
"On the second, history suggests that large economic effects from A.I. will take longer to materialize than many people currently seem to expect [...] And even while it lasted, productivity growth during the I.T. boom was no higher than it was during the generation-long boom after World War II, which was notable in the fact that it didn’t seem to be driven by any radically new technology [...] That’s not to say that artificial intelligence won’t have huge economic impacts. But history suggests that they won’t come quickly. ChatGPT and whatever follows are probably an economic story for the 2030s, not for the next few years."²
On one hand I don't understand what all the fuss is about.
LLMs are great at all kinds of things around and about: searching for (good) information, summarizing existing text, conceptual discussions where they point you in the right direction very quickly, etc. They are just not great (some might say harmful) at straight-up non-trivial code generation or design of complex systems, with the added peculiarity that on the surface the models seem almost capable of doing it, but never quite. Which is sort of their central feature: producing text so that it seems correct from a statistical perspective, but without actual reasoning.
On the other hand, I do understand that the things LLMs are really great at are not actually all that spectacular to monetize ... and so, as a result, we have all these snake-oil salesmen on every corner boasting about nonsensical vibecoding achievements, because that's where the real money would be ... if it were really true ... but it is not.
I used to be a full-time developer back in the day. Then I was a manager. Then I was a CTO. I stopped doing the day-to-day development and even stopped micro-managing the detailed design.
When I tried to code again, I found I didn't really have the patience for it: having to learn new frameworks, APIs, languages, all the tricky little details. I used to find it engrossing; it had become annoying.
But with tools like Claude Code and my knowledge about how software should be designed and how things should work, I am able to develop big systems again.
I'm not 20% more productive than I was. I'm not 10x more productive than I was either. I'm infinity times more productive because I wouldn't be doing it at all otherwise, realistically: I'd either hire someone to do it, or not do it, if it wasn't important enough to go through the trouble to hire someone.
Sure, if you are a great developer and spend all day coding and love it, these tools may just be a hindrance. But if you otherwise wouldn't do it at all they are the opposite of that.
I'm not sure what to make of these takes because so many people are using such an enormous variety of LLM tooling in such a variety of ways, people are going to get a variety of results.
Let's take the following scenario for the sake of argument: a codebase with well-defined AGENTS.md, referencing good architecture, roadmap, and product documentation, and with good test coverage, much of which was written by an LLM and lightly reviewed and edited by a human. Let's say for the sake of argument that the human is not enjoying 10x productivity despite all this scaffolding.
Is it still worthwhile to use LLM tooling? You know what, I think a lot of companies would say yes. There are way too many companies whose codebases lack testing and documentation, that are too difficult for on-boarding new engineers, and that carry too much risk if the original engineers are lost. The simple fact that LLMs, to be effective, force the adoption of proper testing and documentation is a huge win for corporate software.
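For the sake of concreteness, the scaffolding described in that scenario might look something like this hypothetical AGENTS.md skeleton; every section name and path here is illustrative, not a standard:

```markdown
# AGENTS.md

## Architecture
- Monorepo: `api/` (service code), `web/` (frontend), `docs/` (ADRs)
- See docs/architecture.md for the component diagram

## Conventions
- Follow the style guide in docs/style.md; run the linter before committing
- New logic goes behind an interface with a unit test

## Testing
- `make test` must pass before any PR
- Integration tests live in tests/integration/; keep them hermetic

## Roadmap & product context
- Current priorities: docs/roadmap.md
- Product requirements: docs/prd/
```

Notably, everything in such a file is also exactly what a new human engineer would want on day one, which is the point of the argument above.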
The problem with current GenAI is the same as with outsourcing to the lowest bidder in India or wherever. For any non-trivial project you'll get something that may appear to work, but for anything production-ready you'll most likely spend lots of time testing, verifying, cleaning up the code, and fixing things the AI didn't catch. Then there's requirement gathering, discussing with stakeholders, gathering more feedback, and so on, plus debugging when things fail in production...
I believe it's a productivity boost, but only to a small part of my job. The boost would be larger if I only had to build proof-of-concepts or hobby projects that don't need to be reliable in prod and don't require feedback and requirements from many other people.
This reminds me of something... I'm a jazz musician when not being a coder, and have studied and taught from/to a lot of players. One thing advanced improvisors notice is that the student is very frequently not a good judge – in the moment – of what is making them better. Doing long term analytics tests (as the author did) works, but knowing how well something is working while you're doing it? not so much. Very, very frequently that which feels productive isn't, and that which feels painful and slow is.
Just spit balling here, but it sure feels similar.
Same. On many days 90% of my code output by lines is Claude generated and things that took me a day now take well under an hour.
Also, a good chunk of my personal OSS projects are AI assisted. You probably can't tell from looking at them, because I have strict style guides that suppress the "AI style", and I don't really talk about how I use AI in the READMEs. Do you also expect I mention that I used Intellisense and syntax highlighting too?
> implements some logic from scratch where there certainly is more than one suitable library, making this code we now own
> is some behemoth of a PR trying to do all the things
Depending on the amount of code, I see this only as positive? Too often people pull huge libraries for 50 lines of code.
I mean, the truth should be fairly obvious to people, given that a lot of the talk around AI rings very much like the IFLScience/mainstream-media style of "science" article, which always makes some outrageous "right around the corner" claim based on some small tidbit from a paper whose abstract the author only skimmed.
Granted, _discovery_ of such things is something I'm still trying to solve at my own job, and potentially LLMs can at least be leveraged to analyse and search code(bases) rather than just write code.
It's difficult because you need team members to be able to work quite independently but knowledge of internal libraries can get so siloed.
Also, when you create a product, you can't speed up the iterative process of seeing how users want it, fixing edge cases that you only realize later, etc. These are the things that make a product good, and why there's that article about software taking 10 years to mature: https://www.joelonsoftware.com/2001/07/21/good-software-take...
Today you will learn what diminishing returns are :)
You can only utilize so many people or so much action within a business or idea.
Essentially it's throwing more stupid at a problem.
The reason there are so many layoffs is AI creating efficiency. What people don't realize is that it's not that one AI robot or GPU will replace one human at a one-to-one ratio; it will replace the amount of workload one person can do, which in turn gets rid of one human employee. It's not that your job is taken by AI outright, but it's started. How much human effort is needed is where the new supply and demand lies, and how long the job lasts. There will always be a need for more creative minds. The issue is that we are lacking them.
It's incredible how many software engineers I see walking around without jobs, looking for one making $100,000 to $200,000 a year. Meanwhile, they have no idea how much money they could save a business. Their creativity was killed by school.
They are relying on somebody to tell them what to do, and when nobody's around to tell anybody what to do, they all get stuck. What you are seeing isn't a lack of capability; it's a lack of ability to set direction or create an idea worth following.
A lot of these C-suite people also expect the remaining ones to be replaced by AI. They subscribe to the hockey-stick "AGI is around the corner" narrative.
I don't, but at least it is somewhat logical. If you truly believe that, you wouldn't necessarily want to hire more developers.
This is the answer. Programming was never the bottleneck in delivering software, whether free-range, organic, grass-fed human-generated code or AI-assisted.
AI is just a convenient excuse to lay off many rounds of over-hiring while also keeping the door open for potential investors to throw more money into the incinerator since the company is now “AI-first”.
The experience in greenfield development is very different. In the early days of a project, the LLM's opinion is about as good as that of the individuals starting it. The coding standards and other conventions have not yet been established, and buggy, half-nonsense code still leaves the project demo-able. Being able to explore 5 projects to demo status instead of 1 is a major boost.
> LLMs can get me up to speed on new APIs and libraries far faster than I can myself, a gigantic speedup. If I need to write a small bit of glue code in a language I do not know, LLMs not only save me time, but they make it so I don't have to learn something that I'll likely never use again.
I wax and wane on this one.
I've had the same feelings, but too often I've peeked behind the curtain, read the docs, and gotten familiar with the external dependencies, only to realize that whatever the LLM responded with either wasn't following convention, tried to shoehorn my problem to fit code examples found online, used features inappropriately, or took a long roundabout path to do something that can be done simply.
It can feel like magic until you look too closely at it, and I worry that it'll leave me complacent with the feeling of understanding without actually coming away with an understanding.
At least in my experience, it excels in blank canvas projects. Where you've got nothing and want something pretty basic. The tools can probably set up a fresh React project faster than me. But at least every time I've tried them on an actual work repo they get reduced to almost useless.
Which is why they generate so much hype. They are perfect for tech demos, then management wonders why they aren't seeing results in the real world.
> I think that there will be neurological fatigue occurring whereby if software engineers are not actively practicing problem-solving, discernment, and translation into computer code - those skills will atrophy...
I've found this to be the case with most (if not all) skills, even riding a bike. Sure, you don't forget how to ride it, but your ability to expertly articulate with the bike in a synergistic and tool-like way atrophies.
If that's the case with engineering, and I believe it to be, it should serve as a real warning.
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company".
If we can delegate incident response to automated LLMs too, sure, why not. Let the CEO have his way and pay the reputational price. When it doesn't work, we can revert our git repos to the day LLMs didn't write all the code.
As the rate of profit drops, value needs to be squeezed out of somewhere and that will come from the hiring/firing and compensation of labor, hence a strong bias towards that outcome.
99% of the draw of AI is cutting labor costs, and hiring goes against that.
That said, I don't believe AI productivity claims, just pointing out a factor that could theoretically contribute to your hypothetical.
FWIW this closely matches my experience. I’m pretty late to the AI hype train but my opinion changed specifically because of using combinations of models & tools that released right before the cut off date for the data here. My impression from friends is that it’s taken even longer for many companies to decide they’re OK with these tools being used at all, so I would expect a lot of hysteresis on outputs from that kind of adoption.
That said I’ve had similar misgivings about the METR study and I’m eager for there to be more aggregate study of the productivity outcomes.
> LLMs can be automated to do busy work and although they may take longer in terms of clock time than a human, the work is effectively done in the background.
What is this supposed busy work that can be done in the background unsupervised?
I think it's about time for the AI pushers to be absolutely clear about the actual specific tasks they are having success with. We're all getting a bit tired of the vagueness and hand waving.
Agree. In the hands of a seasoned dev not only does productivity improve but the quality of outputs.
If I'm working against a deadline, I feel more comfortable spending time on research and design, knowing I can spend less time on implementation. In the end, it takes the same amount of time, though hopefully with an increase in reliability, observability, and extensibility. None of these things show up in the author's faulty dataset and experiment.
My theory is that the digital revolution has mostly cancelled out its potential productivity gains by introducing productivity sinks: the technology has tended to encourage less rigorous thinking, more distraction, and more complexity. Even if you can do task T X times faster, most people are spending X * Y more time being distracted, overwhelmed, or reduced to reflexive button pushers.
The ways AI is being used now will make this a lot worse on every front.
When working on anything, I am asked: what is the smallest "hard" problem that this is solving? That is, in software, value is added by solving hard problems, not by solving easy ones. Another way to put it: hard problems are those that are not "templated", i.e., solved elsewhere and only needing to be copied.
LLMs are allowing the easy problems to be solved faster. But the real bottleneck is in solving the hard problems - and hard problems could be "hard" due to technical reasons, or business reasons or customer-adoption reasons. Hard problems are where value lies particularly when everyone has access to this tool, and everyone can equally well create or copy something using it.
In my experience, LLMs have not yet made a dent in solving the hard problems, because they don't really have a theory of how something actually works. On the other hand, they have really helped boost productivity on tasks that are templated.
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate
That's insane. Who the hell pulls a number out of their ass and declares it the new reality? When it doesn't happen, he'll pin the blame on you, but everyone else above will pin the blame on him. He's the one who will get fired.
Laying off unnecessary developers is the answer if LLMs turn out to make us all so much more productive (assuming we don't just increase the amount of software written instead). But that happens after successful implementation of LLMs into the development process, not in advance.
Starting to think I should do the inadvisable and move my investments far far away from the S&P 500 and into something that will survive the hype crash that can't be too far off now.
By "knowing what you're doing" do you mean "have enough experience to it by hand", "have experience with a specific AI tool and its limitations" or a combination?
my grand theory on AI coding tools is that they don't really save on time, but they massively save on annoyance. I can save my frustration budget for useful things instead of fiddling with syntax or compiler messages or repetitive tasks, and oftentimes this means I'll take on a task I would find too frustrating in an already frustrating world, or stay at my desk longer before needing to take a walk or ditch the office for the bar.
Strong exercise of free expression and opinion: presents evidence-based critique counter to mainstream industry narratives with confrontational language and specific demands for accountability
Observable Facts
Article presents detailed statistical evidence from multiple sources (METR study, Qodo, GitHub Archive, Statista) supporting a critical opinion counter to industry consensus.
Author uses confrontational rhetoric ('This whole thing is bullshit,' 'Shut up') and directly challenges company claims and developer assertions.
Platform provides public comment section enabling reader response and debate.
Inferences
The detailed evidence presentation demonstrates sophisticated exercise of expressive rights with substantive critique.
The confrontational framing and direct challenges are forms of robust public discourse enabled by the platform without restriction.
+0.70
Article 23Work & Equal Pay
High Advocacy
Editorial
+0.70
SETL
+0.59
Core argument of article: defends workers' right to fair wages and working conditions against exploitation through false productivity narratives used to justify layoffs and salary suppression; advocates for evidence-based assessment of work conditions
Observable Facts
Article explicitly states tech leaders are 'justify[ing] layoffs with newfound productivity narratives, and lowball[ing] developer salaries.'
Author criticizes forced tool adoption: 'I know what many of you chumps are going to say before you even say it... there are no indicators that prompting is hard to learn.'
Article references Coinbase CEO firing engineers who refuse to adopt AI tools, illustrating coercive labor practice.
Author provides personal productivity testing (coin flip methodology, 6-week trial) comparing AI vs. non-AI work speed.
Inferences
The sustained critique of wage suppression and layoffs based on unverified claims advocates for protection of fair compensation and employment security.
The demand for evidence-based management decisions implicitly asserts workers' right to fair treatment and protections against arbitrary economic decisions.
The exposure of tool coercion defends workers' right to choose working methods based on actual (not claimed) productivity.
+0.20
PreamblePreamble
Medium Advocacy
Editorial
+0.20
SETL
+0.20
Content advocates implicitly for 'freedom, justice and peace' through defense of workers against exploitative labor practices and false narratives
Observable Facts
Article frames worker dignity and fair treatment as central concerns in response to industry narratives about productivity gains.
Author invokes defense of workers' agency and decision-making autonomy as basis for argument.
Inferences
The implicit call for just treatment and truthful communication aligns with the preamble's vision of respect for human dignity.
+0.20
Article 1Freedom, Equality, Brotherhood
Medium Advocacy
Editorial
+0.20
SETL
+0.20
Defends developers' equal standing and professional dignity against devaluing narratives that suppress wages and justify unjust treatment
Observable Facts
Author emphasizes personal identity tied to programming work and value of 'shipping cool things,' establishing developer worth.
Article argues developers are unfairly devalued through salary suppression narratives without evidence.
Inferences
The defense of developer standing and critique of devaluation implicitly asserts equal professional dignity and respect.
+0.20
Article 2: Non-Discrimination
Medium Advocacy
Editorial
+0.20
SETL
+0.20
Critiques discriminatory employment practices (layoffs, wage suppression) based on unverified productivity claims
Observable Facts
Article states that tech leaders justify layoffs and salary reduction based on false productivity narratives.
Author references Coinbase CEO firing engineers who refuse to use specific AI tools, illustrating coercive employment practice.
Inferences
The critique of employment decisions based on false premises implicitly opposes discrimination rooted in misleading information.
+0.20
Article 7: Equality Before Law
Medium Advocacy
Editorial
+0.20
SETL
+0.20
Advocates for equal legal protection by demanding evidence-based decision-making and opposing arbitrary management actions
Observable Facts
Author calls for developers to 'demand they show receipts' when challenged with productivity claims, invoking evidence standard.
Inferences
The demand for evidence and accountability implicitly asserts equal protection against arbitrary employment decisions.
+0.20
Article 8: Right to Remedy
Medium Advocacy
Editorial
+0.20
SETL
+0.20
Provides workers with data and evidence to challenge false productivity claims, empowering remedy and recourse
Observable Facts
Article presents statistical evidence (industry metrics, personal testing data) that developers can 'show your manager.'
Inferences
By providing workers with evidence and argumentative tools, the content enables effective remedy against unjust claims.
+0.20
Article 21: Political Participation
Medium Advocacy
Editorial
+0.20
SETL
+0.20
Encourages workers to participate in workplace governance by challenging management claims with evidence and demanding accountability
Observable Facts
Article calls for developers to 'trust your gut,' 'show your manager these charts,' and 'demand they show receipts' when faced with productivity claims.
Author frames developer skepticism as form of legitimate workplace participation against unjustified management decisions.
Inferences
The calls for evidence-based challenge to management authority represent implicit exercise of workplace democratic rights.
+0.20
Article 22: Social Security
Medium Advocacy
Editorial
+0.20
SETL
+0.20
Critiques threats to job security: 'People are being fired because they're not adopting these tools fast enough' and 'People are sitting in jobs they don't like because they're afraid'
Observable Facts
Article states: 'People are being fired because they're not adopting these tools fast enough.'
Article notes: 'People are sitting in jobs they don't like because they're afraid if they go somewhere else it'll be worse.'
Inferences
The documentation of job insecurity caused by management pressure advocates for protection of employment security as social right.
+0.20
Article 25: Standard of Living
Medium Advocacy
Editorial
+0.20
SETL
+0.20
Addresses adequate standard of living through critique of salary suppression based on false productivity claims
Observable Facts
Article states tech leaders are 'lowball[ing] developer salaries under the assumption that AI has fundamentally changed the value equation.'
Inferences
The advocacy against salary suppression defends workers' right to adequate compensation and economic security.
+0.20
Article 26: Education
Medium Advocacy
Editorial
+0.20
SETL
+0.20
Critiques lack of adequate education and training: 'Experiment and figure it out yourself is the common advice... the official prompting guides are apparently not worth paying attention to because they don't work'
Observable Facts
Article criticizes 'AI-First' companies: 'none of these reporting provide any training on how to become a 10xer with AI coding. Experiment and figure it out yourself is the common advice.'
Inferences
The critique of inadequate training and guidance advocates for workers' right to proper education in tools affecting their employment.
+0.15
Article 4: No Slavery
Low Advocacy
Editorial
+0.15
SETL
+0.15
Critiques coercive pressure to adopt tools without training as exploitative labor practice
Observable Facts
Article describes 'AI-First coding shops' offering only 'Experiment and figure it out yourself' guidance without proper training.
Inferences
The critique of forced adoption without support touches on exploitation of worker vulnerability and lack of informed consent.
+0.15
Article 20: Assembly & Association
Low Advocacy
Editorial
+0.15
SETL
+0.15
Implicitly supports collective thinking among developers through calls for peer-to-peer discussion and shared skepticism
Observable Facts
Article encourages developers to 'show your manager these charts and ask them what they think about it,' invoking peer discussion.
Inferences
The call for worker-to-manager dialogue and collective critique of management claims implicitly supports assembly and association rights.
+0.15
Article 24: Rest & Leisure
Low Advocacy
Editorial
+0.15
SETL
+0.15
Touches on work-life balance by mentioning workers 'spending all this time trying to get good at prompting and feeling bad because they're failing'
Observable Facts
Article states: 'People are spending all this time trying to get good at prompting and feeling bad because they're failing.'
Inferences
The critique of excessive time spent on tool adoption reflects concern for worker rest and leisure rights.
+0.15
Article 28: Social & International Order
Low Advocacy
Editorial
+0.15
SETL
+0.15
Advocates for truthfulness and just social order in tech industry through demands for evidence and accountability
Observable Facts
Article repeatedly demands evidence: 'demand they show receipts or shut the fuck up.'
Inferences
The insistence on truthful representation of productivity reflects advocacy for just and honest social relations.
+0.15
Article 29: Duties to Community
Low Advocacy
Editorial
+0.15
SETL
+0.15
Suggests duties of tech leaders to be truthful and provide evidence; critiques failure to do so
Observable Facts
Article states: 'if someone — whether it's your CEO, your tech lead, or some Reddit dork — claims they're now a 10xer because of AI, that's almost assuredly untrue, demand they show receipts.'
Inferences
The accountability demand reflects belief in mutual duties of truthfulness between managers and workers.
ND
Article 3: Life, Liberty, Security
No discussion of right to life, liberty, or personal security
ND
Article 5: No Torture
No discussion of torture or cruel treatment
ND
Article 6: Legal Personhood
No discussion of legal personality
ND
Article 9: No Arbitrary Detention
No discussion of arbitrary arrest
ND
Article 10: Fair Hearing
No discussion of fair trial rights
ND
Article 11: Presumption of Innocence
No discussion of presumption of innocence
ND
Article 12: Privacy
Medium Practice
No explicit privacy content
Observable Facts
Page code includes Datadog RUM and Sentry integrations with automatic session replay at a 100% sampling rate.
Inferences
Broad tracking and session replay practices without explicit opt-in consent represent structural privacy intrusions.
ND
Article 13: Freedom of Movement
No discussion of freedom of movement
ND
Article 14: Asylum
No discussion of asylum or international protection
ND
Article 15: Nationality
No discussion of nationality
ND
Article 16: Marriage & Family
No discussion of family or marriage
ND
Article 17: Property
No discussion of property rights
ND
Article 18: Freedom of Thought
No discussion of conscience or religion
ND
Article 27: Cultural Participation
No discussion of cultural or artistic participation
ND
Article 30: No Destruction of Rights
No discussion of interpretation or permissible limitations
Structural Channel
What the site does
+0.30
Article 19: Freedom of Expression
High Advocacy
Structural
+0.30
Context Modifier
ND
SETL
+0.53
Substack platform enables unrestricted publication and reader engagement (comments, shares) without apparent censorship; freemium model allows free access to critical content
+0.20
Article 23: Work & Equal Pay
High Advocacy
Structural
+0.20
Context Modifier
ND
SETL
+0.59
Platform provides free distribution of labor advocacy; business model does not restrict worker organizing discourse
0.00
Preamble
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20
Platform enables speech but introduces no structural rights support specific to preamble principles
0.00
Article 1: Freedom, Equality, Brotherhood
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20
Platform provides neutral access; no structural support or erosion specific to equality principle
0.00
Article 2: Non-Discrimination
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20
No structural discrimination embedded in platform; neutral infrastructure
0.00
Article 4: No Slavery
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15
No structural forced labor elements
0.00
Article 7: Equality Before Law
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20
Platform provides neutral access to advocacy
0.00
Article 8: Right to Remedy
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20
Platform distributes information enabling worker remedy
0.00
Article 20: Assembly & Association
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15
Comment section enables assembly and discussion
0.00
Article 21: Political Participation
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20
Platform enables democratic discourse
0.00
Article 22: Social Security
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20
No structural job security elements
0.00
Article 24: Rest & Leisure
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15
No structural rest/leisure elements
0.00
Article 25: Standard of Living
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20
No structural economic support
0.00
Article 26: Education
Medium Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.20
Article distributed free via platform supports information access
0.00
Article 28: Social & International Order
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15
No structural social order elements
0.00
Article 29: Duties to Community
Low Advocacy
Structural
0.00
Context Modifier
ND
SETL
+0.15
No structural duties enforcement
-0.20
Article 12: Privacy
Medium Practice
Structural
-0.20
Context Modifier
ND
SETL
ND
Platform collects user data via tracking scripts (Datadog and Sentry visible in page code); session monitoring is enabled by default.
ND
Article 3: Life, Liberty, Security
Not applicable
ND
Article 5: No Torture
Not applicable
ND
Article 6: Legal Personhood
Not applicable
ND
Article 9: No Arbitrary Detention
Not applicable
ND
Article 10: Fair Hearing
Not applicable
ND
Article 11: Presumption of Innocence
Not applicable
ND
Article 13: Freedom of Movement
Not applicable
ND
Article 14: Asylum
Not applicable
ND
Article 15: Nationality
Not applicable
ND
Article 16: Marriage & Family
Not applicable
ND
Article 17: Property
Not applicable
ND
Article 18: Freedom of Thought
Not applicable
ND
Article 27: Cultural Participation
Not applicable
ND
Article 30: No Destruction of Rights
Not applicable
Supplementary Signals
Epistemic Quality
0.63
Propaganda Flags
4 techniques detected
loaded language
'This whole thing is bullshit'; 'Shut up'; 'some Reddit dork'; company slogans presented sarcastically
appeal to fear
'People are being fired because they're not adopting these tools fast enough'; 'if they go somewhere else it'll be worse'; 'if you don't adopt it early, you'll be left behind'
strawman
Author presents weak counter-arguments ('Well, if you just learned how to prompt properly...') and then dismisses them without serious engagement
causal oversimplification
'No shovelware surge = AI coding doesn't work' ignores alternative explanations (quality improvements, internal tools, different deployment patterns)
Solution Orientation
No data
Emotional Tone
No data
Stakeholder Voice
No data
Temporal Framing
No data
Geographic Scope
No data
Complexity
No data
Transparency
No data
Event Timeline
20 events
2026-02-26 12:20
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 12:18
rate_limit
OpenRouter rate limited (429) model=llama-3.3-70b
--
2026-02-26 12:17
rate_limit
OpenRouter rate limited (429) model=llama-3.3-70b
--
2026-02-26 12:16
rate_limit
OpenRouter rate limited (429) model=llama-3.3-70b
--
2026-02-26 10:13
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:11
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:11
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:10
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:08
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:06
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:03
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:03
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:02
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:02
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:02
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:02
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:01
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:01
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
--
2026-02-26 10:00
dlq
Dead-lettered after 1 attempts: Where's the shovelware? Why AI coding claims don't add up
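The timeline above reflects a common retry/dead-letter pattern: items that hit upstream rate limits (HTTP 429) are retried, and items that still fail after the attempt budget are dead-lettered. A minimal sketch of that pattern, under assumptions (the queue class, handler names, and attempt budget below are hypothetical; the actual pipeline is not shown):

```python
from dataclasses import dataclass, field

@dataclass
class Queue:
    """A minimal dead-letter queue: just a list of failed messages."""
    items: list = field(default_factory=list)

class RateLimited(Exception):
    """Raised when the upstream API answers HTTP 429."""

def process_with_dlq(payload, handler, dlq, max_attempts=1):
    """Run handler up to max_attempts times; dead-letter on exhaustion."""
    for _ in range(max_attempts):
        try:
            return handler(payload)
        except RateLimited:
            continue  # a real pipeline would back off before retrying
    dlq.items.append(f"Dead-lettered after {max_attempts} attempts: {payload}")
    return None

def always_429(payload):
    """A handler that is permanently rate limited."""
    raise RateLimited()

dlq = Queue()
process_with_dlq("Where's the shovelware?", always_429, dlq, max_attempts=1)
print(dlq.items[0])  # Dead-lettered after 1 attempts: Where's the shovelware?
```

With an attempt budget of 1, a single 429 sends the item straight to the dead-letter queue, which matches the rapid dlq entries in the timeline.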