81 points by donutshop 7 days ago | 100 comments on HN
Mild positive · Moderate agreement (3 models)
Editorial · v3.7 · 2026-03-15 22:37:56
Summary: Labor Rights & Meaningful Work Advocates
This blog post advocates for recognizing that software development productivity is fundamentally about intellectual work, human collaboration, and long-term system sustainability—not code generation volume. The author challenges mainstream LLM productivity narratives by grounding critique in software engineering principles, emphasizing rights to meaningful work, fair conditions, community participation, and worker accountability for production systems. The piece demonstrates strong commitment to free expression, education, and worker agency in technological decisions.
Rights Tensions (3 pairs)
Art 23 ↔ Art 19 — Right to meaningful work and human judgment conflicts with pressure toward automated code generation; content resolves this by asserting human agency and explicit choice in technology adoption.
Art 22 ↔ Art 19 — Social security for maintenance workers and operational stability conflicts with rapid code generation pushing costs forward; content resolves by advocating sustainable pace and reduced code volume.
Art 12 ↔ Art 23 — Privacy and data handling understanding (requiring human comprehension of code) conflicts with developer ability to perform meaningful work when systems become unmaintainable; content resolves by emphasizing human understanding requirement.
Bold claim that writing code was never the bottleneck. It may not be the only bottleneck, but we conveniently move the goalposts now that there is a more convenient mechanism and our profession is under threat.
The collaboration aspect is what many AI enthusiasts miss. As humans, our success is dependent on our ability to collaborate with others. You may believe that AI could replace many individual software engineers, but if it does so at the expense of harming collaboration, it’s a massive loss. AI tools are simply not good at collaborating. When you add many humans to a project, the result becomes greater than the sum of its parts. When you add many AI tools to a project, it quickly becomes a muddled mess.
In practical terms, "productivity" is any metric that people with power can manipulate (cheating the numbers, changing narratives, etc.) to steer the behavior of others toward their own interests.
ALL OF IT is meaningless. It's a pointless discussion.
I think there's some goldilocks speed limit for using these tools relative to your skillset. When you're building, you forget that you're also learning - which is why I actually favour some AI code editors that aren't as powerful because it gets me to stop and think.
A well considered article, despite the author categorizing it as a rant. I appreciate the appendix quotations, as well as the acknowledgement that they are appeals to authority.
Whilst the author clearly has a belief that falls down on one side of the debate, I hope folks can engage with the "Should we abandon everything we know" question, which I think is the crux of things. Evidence that AI-driven-development is a valuable paradigm shift is thin on the ground, and we've done paradigm shifts before which did not really work out, despite massive support for them at the time. (Object-Oriented-Everything, Scrum, etc.)
> Humans and LLMs both share a fundamental limitation. Humans have a working memory, and LLMs have a context limit.
But there’s a more important difference: I can’t spin up 20 decent human programmers from my terminal.
The argument that "code was never the bottleneck" is genuinely appealing, but it hasn’t matched my experience at all. I’m getting through dramatically more work now. This is true for my colleagues too.
My non-technical niece recently built a pretty solid niche app with AI tools. That would have been inconceivable a few years ago.
honestly the thing that trips me up is when codegen makes me feel productive but I haven't actually validated anything. like I'll have claude write a whole data pipeline in 20 minutes and then spend 2 hours debugging edge cases it didn't think about because it doesn't know our data
the speed is real but it mostly just moves where I spend my time. less typing, more reading and testing. which is... fine? but it's not the 10x thing people keep claiming
2. If God were to descend and give you a very good, reality-tested spec:
3. Would you be done faster? Of course, because as every AI doomer says, writing code was never the bottleneck!!1!
4. So the only bottleneck is getting to the spec.
5. Guess what AI can help you with as well, because you can iterate out multiple versions with little mental effort and no emotional sunk cost investment?
I have to be honest. I’ve written a lot of pro-AI / dark-software articles, and I think I’m due an update, because it worked great until it didn’t.
I could write a lot about what I’ve tried and learnt, but so far this article is a very based view and matches my experience.
I definitely suffered under the unnecessary complexity and at moments wished I’d never used AI, and even with Opus 4.6 I could feel how it was confused and couldn’t really understand business objectives. It became way faster to jump into the code, clean it up, and fix it myself. I’m not sure yet where the line is or where it will be.
So much of our industry has spent the last two decades honing itself into a temple built around the idea of "leet code", from the interview process to things like Advent of Code.
Solving brain teasers and knowing your algorithms cold in an interview was always a terrible idea. And the sort of engineers it invited to the table, and the kinds of thinking it propagated, were bad for our industry as a whole.
LLMs make this sort of knowledge moot.
The complaints about LLMs lack any information about the domains being worked in, the means of integration (deep in your IDE vs. cut and paste into vim), and what you're asking it to do (in a very literal sense). These are the critical factors that remain unaired in these sorts of laments.
It's just hubris. The question not being asked is "Why are you getting better results than me? Am I doing something wrong?"
I recently started using AI for personal projects, and I find it works really well for 'spike' type tasks, where what you're trying to do is grow your knowledge about a particular domain. It's less good at discovering the correct way of doing things once you've decided on a path forward, but still more useful than combing through API docs and manpages yourself.
It might not actually deliver working things all that much faster than I could, but I don't feel mentally drained by the process either. I used to spend a lot of time reading architecture docs in order to understand available solutions, now I can usually get a sense for what I need to know just from asking ChatGPT how certain things might be done using X tool.
In the last few days, I've stood up syncthing, tailscale with a headscale control plane, and started making working indicators and strategies in PineScript, TradingView's automated trading platform. Things I had no energy for or would have been weeklong projects take hours or a day or so. AI's strengths synergize really well with how humans want to think.
I just paste an error message in, and ChatGPT figures out what I'm trying to do from context, then gives me not just a possible resolution, but also why the error is happening. The latter is just as useful as the former. It's wrong a lot, but it's easy to suss out.
I continue to jump into these discussions because I feel like these upvoted posts completely miss what’s happening…
- guardrails are required to generate useful results from GenAI. This should include clear instructions on design patterns, testing depth, and iterative assessments.
- architecture decision records are one useful way to prevent GenAI from being overly positive.
- very large portions of code can be completely regenerated quickly when scope and requirements change. (skip debugging - just regenerate the whole thing with updated criteria)
- GenAI can write thorough functional and behavioral unit tests. This is no longer a weakness.
- You must suffer the questions and approvals. At no time can you let agents run for extended periods on progressive sets of work. You must watch what is generated. One thing that concerns me about the new 1M-token context in Claude Code is that many will double down on agent freedom. You can’t. You must watch the results and examine functionality regularly.
- No one should care about actual code ever again. It’s ephemeral. The role of software engineering is now molding features and requirements into functional results. Choosing Rust, C#, Java, or Typescript might matter depending on the domain, but then you stop caring and focus on measuring success.
My experience is rolled up in https://devarch.ai/ and I know I get productive and testable results using it everyday on multiple projects.
I feel there is a set of codebases in which LLMs aren't showing the 2-10x lift in productivity.
There is also a set of codebases in which LLMs are one-shotting the most correct code and even finding edgecases that would've been hard to find in human reviews.
At a surface level, it seems obvious that legacy codebases tend to fall in the first category and more greenfield work falls in the second category.
Perhaps this signals an area of study where we make codebases more LLM-friendly. It needs more research and a catchy name.
Also, certain things that we worry about as software artisans (abstractions, reducing repeated code, naming conventions, argument ordering, ...) are not a concern for LLMs, as long as LLMs are consistent in how they write code.
For example, one was taught that it is bad to have multiple "foo()" implementations. In the LLM world, it isn't _that_ bad. You can instruct the LLM to "add feature x and fix all the affected tests" (or even better, "add feature x to all foo()"), and if feature x relies on "foo()", it fixes every foo() method. This is a big deal.
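A toy sketch of the pattern (function and field names are hypothetical, not from the comment):

```python
# Two near-duplicate foo() implementations that a human maintainer
# would normally consolidate behind a single abstraction.
def foo_orders(rows):
    # Keep only active order records.
    return [r for r in rows if r.get("active")]

def foo_users(records):
    # Keep only active user records.
    return [r for r in records if r.get("active")]

# Under an instruction like "add feature x to all foo()", an LLM can
# apply the same change to every duplicate consistently, making the
# duplication cheaper to live with than it would be under purely
# human maintenance.
```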
It's so difficult to quantify productivity over an entire field, especially when it's so vast. Chris Lattner recently concluded this about LLM tooling [0]:
> AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.
This matches my experience: there is a lot of code that we probably should not need to write and rewrite anymore but still do, because this field has largely failed at deriving complete and reusable solutions to trivial problems. There is a massive coordination problem that has fragmented software across the stack, and LLMs provide one way of addressing it by generating some of the glue and otherwise trivial but expensive and unproductive interop code required.
But the thing about productivity is that it's not one thing and cannot be reduced to an anecdote about a side-project, or a story about how a single company is introducing (or mandating) AI tooling, or any single thing. Being able to generate a bunch of code of varying quality and reliability is undeniably useful, but there are simply too many factors involved to make broad sweeping claims about an entire industry based on a tool that is essentially autocomplete on crack. Thus it's not surprising that recent studies have not validated the current hype cycle.
I agree that if you try to use the LLM as a wholesale outsourcing of your thought process the results don’t scale. That’s not the only way to use them, though.
There's plenty of evidence of this line of thinking even from before the turn of the Millennium. Mythical Man Month, No Silver Bullet, Code Complete, they all gesture at this point.
It is. But I don't think it's AI that threatens it. It's susceptible to hype people who, unfortunately, have the power over people's jobs. C-level management who don't know anything better than parroting what others in the industry are saying. How is that "all engineers will be replaced in 6 months" going?
For very small projects, code may be the main bottleneck. Just to write the code is what takes most of the time. Adding code faster can accelerate development.
For larger projects, design, integration, testing, feature discovery, architecture, bug fixing, etc. takes most of the time. Adding code faster may slow down development and create conflicts between teams.
Discussing without a common context makes no sense in this situation.
So, depending on your industry and the size of the projects that you have worked on one thing or the other may be true.
I look at it backwards: a few humans improve a project. But once you get to sufficient sizes, principal-agent problems dominate. What is good for a division and what is good for the company disagree. What is good for a developer who needs a big project for their promotion package is not what the company needs. A company with a headcount of 700 is more limber and better aligned than when it's 3,000 or 30,000. It's amazing how little alignment there ever is when you get to the 300k range.
AI, if anything, is amazing at collaborating. It's not perfectly aligned, but you sure can get it to tell you when your idea is unsound, all while having lessened principal-agent issues. Anything we can do to minimize the number of people that need to align towards a goal, the more effectively we can build, precisely due to the difficulties of marshalling large numbers of people. If a team of 4 can do the same as a team of 10, you should always pick the team of 4, even if they are more expensive put together than the 10.
I guess that what people debate here is what “decent” means. From my experience, these LLMs spit out dog shit code, so 20 agents equal 20x more dog shit.
> When you add many humans to a project, the result becomes greater than the sum of its parts. When you add many AI tools to a project, it quickly becomes a muddled mess.
I have absolutely been on projects where there were too many cooks in the kitchen, and adding more people to the team only led to additional chaos, confusion, and complexity. Ever been in a meeting where a designer, head of marketing, and the CTO are all giving feedback on what size font a button should be? I certainly have, and it's absurd.
One of my worst experiences arose from having a completely incompetent PM. Absolutely no technical knowledge; couldn't even figure out how to copy and paste a URL if his life depended on it. He eventually had to be removed from a major project I was on, and I was asked to take over PM duties while also doing my dev work. I was actually happy to do so, because I was already spending hours babysitting him; now I could just get the same tasks done without the political BS.
Could adding many AI tools to a project become problematic? Maybe. But let's not pretend throwing more humans at a project is going to lead to some synergistic promised land.
I actually consider that the claim is not that bold, and in fact has been common in our industry for most of the short time it has been around. I included a few articles and studies with time breakdowns of developer activity that I think help to illustrate this.
If an activity (getting code into source files) used to take up <50% of the time of programmers, then removing that bottleneck cannot even double the throughput of the process. This is not taking into account non-programmer roles involved in software development. This is akin to Amdahl's law when we talk about the benefits of parallelism.
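The bound can be sketched numerically; the fractions below are illustrative, not figures from the article:

```python
def overall_speedup(fraction, factor=float("inf")):
    """Amdahl's law: speedup of the whole process when only a
    `fraction` of it is accelerated by `factor`."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# If typing code is 50% of total effort and becomes instantaneous,
# throughput at most doubles.
print(overall_speedup(0.5))              # 2.0

# If it is 35% of effort, the ceiling is ~1.54x even with an
# infinitely fast code generator.
print(round(overall_speedup(0.35), 2))   # 1.54
```

The same arithmetic says nothing about the quality of the remaining 50%+ of the work; it only caps how much accelerating one activity can help.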
I made no argument with regard to threat to the profession, and I make none here.
I didn't set out to teach you anything, change your behavior, or give you practical takeaways, so it's a rant (: Emotions can be expressed with citations.
I am fully on board with gen AI representing a paradigm shift in software development. I tried to be careful not to take a stance on other debates in the larger conversation. I just saw too many people talking about how much code they're generating as proof statements when discussing LLMs. I think that, specifically---i.e., using LOC generated as the basis of any meaningful argument about effectiveness or productivity---is a silly thing to do. There are plenty of other things we should discuss besides LOC.
> The complaints about LLM's that lack any information about the domains being worked in, the means of integration (deep in your IDE vs cut and paste into vim) and what your asking it to do (in a very literal sense) are the critical factors that remain "un aired" in these sorts of laments.
I'm not sure if this is a direct response to the article or a general point. The article includes an appendix about my use of LLMs and the domains I have used them in.
The full PDF is available for download. It's mostly a series of essays, so you can pick and choose and read nonlinearly. It's worth thinking about beyond nihilistic takes.
The post is about using LOC as a metric when making any sort of point about AI. Nowhere do I suggest someone shouldn't use it, nor that they should expect negative results if they opt to.
No one should care about actual code ever again. It’s ephemeral.
Caveat: it still works best in a codebase that is already good. So while any one line of code is ephemeral, how is the overall codebase trending? Towards a bramble, or towards a bonsai?
If the software is small and not mission critical, it doesn’t matter if it becomes a bramble, but not all software is like that.
> - No one should care about actual code ever again. It’s ephemeral. The role of software engineering is now molding features and requirements into functional results. Choosing Rust, C#, Java, or Typescript might matter depending on the domain, but then you stop caring and focus on measuring success.
I think this has always been the case. "Bad programmers worry about the code. Good programmers worry about data structures and their relationships." Perhaps you mean that they shouldn't worry about structures & relationships either but I think that is a fools errand. Although to be fair neither of those need to be codified in the code itself, but ignore those at your own peril...
Man, if this were true we’d see a crazy, massive explosion of quality products being written and launched. While we see some use, I don’t perceive an acceleration. In fact, I see a lot of trivial bugs being deployed to prod.
And then it turns out God wrote the spec in code because that’s what any spec sufficient to produce the same program from 2 different teams/LLMs would be.
> No one should care about actual code ever again. It’s ephemeral.
> very large portions of code can be completely regenerated quickly when scope and requirements change.
This is complete and utter nonsense coming from someone who isn't actually sticking around maintaining a product long enough in this manner to see the end result of this.
All of this advice sounds like it comes from experience instead of theoretical underpinning or reasoning from first principles. But this type of coding is barely a year old, so there's no way you could have enough experience to make these proclamations.
Based on what I can talk about from decades of experience and study:
No natural language specification or test suite is complete enough to allow you to regenerate very large swaths of code without changing thousands of observable behaviors that will be surfaced to users as churn, jank, and broken workflows. The code is the spec. Any spec detailed enough to allow 2 different teams (or 2 different models or prompts) to produce semantically equivalent output is going to be functionally equivalent to code. We as an industry have learned this lesson multiple times.
I'd bet $1,000 that there is no non-trivial commercial software in existence where you could randomly change 5% of the implementation while still keeping to the spec and it wouldn't result in a flood of bug reports.
The advantage of prompting in a natural language is that the AI fills in the gaps for you. It does this by making thousands of small decisions when implementing your prompt. That's fine for one-offs, and it's fine if you take the time to understand what those decisions are. You can't just let the LLM change all of those decisions on a whim, which is the natural result of generating large swaths of code, ignoring it, and pretending it's ephemeral.
My career predates the leetcode phenomenon, and I always found it mystifying. My hot take is that it’s what happens when you’re hiring what are essentially human compilers: they can spit out boilerplate solutions at high speed, and that’s what leetcode is testing for.
For someone like that, LLMs are much closer to literally replacing what they do, which seems to explain a lot of the complaints. They’re also not used to working at a higher level, so effective LLM use doesn’t come naturally to them.
Content directly advocates for free expression and information sharing. Publishes detailed technical argument contrary to mainstream LLM enthusiasm. Shares knowledge through appendices, references, and peer recommendations.
FW Ratio: 57%
Observable Facts
Article publishes contrarian view on productivity measures despite likely industry pressure toward LLM adoption.
Content provides extensive appendix with quotes from respected authorities offering diverse perspectives on code as a metric.
Author includes 'You should read all of these things' section directing readers to foundational computer science materials, facilitating information access.
Article provides table of author's LLM usage patterns and detailed breakdown of how LLMs are used in current work, enabling informed assessment.
Inferences
Publishing detailed critique of LLM productivity claims without anonymity or apologetics reflects strong commitment to free expression.
Extensive documentation and citation patterns demonstrate commitment to enabling readers to verify claims and form own opinions.
Generous sharing of personal experience and references supports collective knowledge development in technical community.
Content strongly advocates for freedom of thought and expression in software development. Argues developers must explicitly consider their own positions on LLM adoption rather than accepting mainstream narrative. Critiques implicit pressures toward technological conformity.
FW Ratio: 57%
Observable Facts
Article explicitly states: 'I do not believe the question is absurd to ask. I think it is absurd to implicitly answer the question without saying you are doing so. I believe everyone should explicitly consider it.'
Content provides detailed counter-narrative to LLM productivity claims while acknowledging author's direct experience with the technology.
Article emphasizes 'I encourage you to check out the appendix of anecdotes and quotes for many takes on this,' inviting reader engagement with diverse viewpoints.
Author discloses: 'I no longer have any relationship with OpenAI, nor will I again,' establishing editorial independence.
Inferences
Call for explicit rather than implicit reasoning about technology adoption reflects commitment to freedom of thought against ideological conformity.
Detailed technical argumentation with citations demonstrates commitment to persuasion through reasoning rather than coercion.
Editorial independence from major AI companies enables free expression of critical views without institutional pressure.
Content strongly advocates for right to education and access to knowledge. Extensive citations, references, and sharing of foundational materials. Teaches critical thinking about productivity metrics and software engineering principles.
FW Ratio: 57%
Observable Facts
Article includes 'Appendix one' with extensive quotes from authority figures and foundational computer science figures, explicitly teaching reader about alternative perspectives.
Content provides 'You should read all of these things' section directing readers to: 'On the cruelty of really teaching computer science,' 'Rethinking Productivity in Software Engineering,' 'Structure and Interpretation of Computer Programs,' enabling further learning.
Article includes extensive explanation of author's own LLM usage patterns in appendix, demonstrating knowledge sharing.
Content teaches methodology: explains why lines of code is poor metric, provides historical context, quotes authorities, and reasons through implications.
Inferences
Comprehensive citation and reference pattern reflects commitment to reader education and enabling informed judgment.
Sharing of personal methodology and detailed reasoning demonstrates commitment to teaching critical thinking rather than mere assertion.
Free access to analysis and reference materials supports right to education and knowledge access for software professionals.
Content affirms right to recognition as a person; critiques LLM-centric narratives that diminish human programmer agency, reasoning, and recognition. Emphasizes that responsibility and credit remain with human practitioners.
FW Ratio: 60%
Observable Facts
Article insists developers remain personally responsible for all code they contribute, stating 'I am personally responsible for every single line of code that could affect that.'
Content argues that customer support requires human understanding and capability, not automated chatbot responses.
Article emphasizes that human judgment in questions, requirements formation, and design is the 'hard part,' not code generation.
Inferences
Assertion of personal responsibility and accountability frames programmers as autonomous agents whose identity and recognition matter in their work.
Critique of treating LLM output as a substitute for human expertise reflects commitment to recognizing human contributors as distinct persons with judgment.
Content advocates for social security and welfare through emphasis on sustainable development practices. Critiques LLM-driven development as creating maintenance burden that shifts costs to future workers and operations teams.
FW Ratio: 57%
Observable Facts
Article emphasizes that 'maintenance time comprises the majority of development time in software projects,' shifting costs forward to future workers.
Content notes 'I am on the line for production up-time (and I am), then I am personally responsible for every single line of code,' describing worker vulnerability to system failures.
Article discusses on-call responsibilities and downtime costs incurred when peers do not properly review code, describing shared welfare burden.
Content advocates for 'less code is more' to reduce maintenance burden and cognitive load on future maintainers and operations teams.
Inferences
Emphasis on maintenance burden and on-call responsibility reflects concern for welfare and working conditions of technologists who inherit production systems.
Critique of rapid code generation without care for long-term sustainability reflects concern for social security across development lifecycle.
Recognition that technical choices affect worker welfare and sustainability of software operations demonstrates commitment to social welfare principles.
Content advocates for participation in cultural life of software community. Emphasizes shared understanding, code as medium for human expression, and collective knowledge. Critiques automation that diminishes cultural participation.
FW Ratio: 57%
Observable Facts
Article quotes SICP: 'programs must be written for people to read, and only incidentally for machines to execute,' framing code as cultural medium for human expression.
Content describes programming as 'an exercise in representing abstract ideas,' emphasizing intellectual and cultural dimensions.
Article thanks multiple contributors by name: 'Big thanks to my test readers, Johnny Winter, Gilbert Quevauvilliers, Eugene Meidinger, Bernat Agulló, Daniil Maslyuk, Daniel Marsh-Patrick, and Alex Barbeau,' demonstrating participation in technical community culture.
Content emphasizes reading code as collective practice: 'code is read much more often than it is written,' affirming code as shared cultural artifact.
Inferences
Framing of code as human expression and shared cultural medium reflects commitment to participation in technical culture and collective meaning-making.
Appreciation of contributors and emphasis on peer review reflects commitment to shared cultural participation in software development.
Critique of LLM automation that bypasses human expression reflects concern that rapid code generation may diminish cultural participation in programming.
Content emphasizes equal reasoning capacity of humans and systems, argues against automation that undermines human judgment in software decisions. Advocates for human responsibility and deliberate choice in development practices.
FW Ratio: 60%
Observable Facts
Article asserts 'Humans and LLMs both share a fundamental limitation' and discusses contexts where both require similar constraint techniques.
Content explicitly states that developers remain personally responsible for code, regardless of whether LLMs were used to generate it.
Article notes human judgment and understanding are required for customer support and maintaining product reliability.
Inferences
Recognition of both human and machine limitations as parallel reflects commitment to treating humans as free reasoning agents, not replaceable units.
Assertion of personal responsibility for all code created establishes that human dignity includes accountability for collective work.
Content advocates for equal treatment in responsibility and code review. Argues that all developers, regardless of tool use, must maintain equal standards. Emphasizes peer review as fundamental practice.
FW Ratio: 60%
Observable Facts
Article states 'Good software development practice demands that we peer review every line of code before shipping it,' regardless of generation speed.
Content asserts that 'only when someone from your inference provider takes responsibility for your on-call shift do they get to own code review,' establishing human equality in accountability.
Article criticizes notion that 'The LLM said it was good' as justification, establishing that human judgment cannot be delegated away.
Inferences
Equal application of code review standards across all code reflects principle that no developer receives special exemption from collective standards.
Assertion that inference providers cannot substitute for human responsibility establishes that humans retain equal standing and duty in collaborative work.
Content affirms the right to seek asylum and protection, implicitly, through its framing of developer autonomy and the freedom to choose development practices without coercion toward LLM dependency. Advocates for retention of human control over technological direction.
FW Ratio: 60%
Observable Facts
Article explicitly requests that readers 'explicitly consider' whether LLM adoption makes sense rather than accepting implicit framing: 'you must answer it, though, and know that you are doing so.'
Content argues against pressure to abandon established software engineering practices: 'Should we abandon everything we know?' is framed as absurd to answer implicitly.
Author notes moving away from OpenAI while maintaining independence to publish critical analysis.
Inferences
Emphasis on explicit choice and questioning reflects concern that developers may be pressured toward LLM adoption without deliberate consideration.
Maintenance of critical voice despite potential industry pressure suggests commitment to asylum from ideological conformity in technology sector.
Content advocates for right to work with fair conditions and protections. Emphasizes programmer autonomy, meaningful work, and protection against deskilling through automation. Advocates for work that requires human judgment.
FW Ratio: 57%
Observable Facts
Article emphasizes that programming is 'an exercise in representing abstract ideas and managing complexity,' not mere code production, affirming meaningfulness of work.
Content advocates that developers 'must answer' questions about technology adoption 'and know that you are doing so,' protecting agency in work decisions.
Article describes work as requiring human judgment: 'A well formulated question is often already halfway answered. Understanding the problem domain well enough to know what questions need answers is the hard part.'
Content discusses author's access to frontier models as 'luck' and organizational commitment to quality practices (Joel's test), framing work conditions as variable and significant.
Inferences
Emphasis on complexity, abstraction, and judgment in programming work reflects concern for meaningful labor against deskilling pressures.
Advocacy for explicit choice in technology adoption protects worker agency and autonomy in development decisions.
Recognition of variable organizational quality reflects concern that acceleration pressures may undermine fair work conditions.
Content advocates for standard of living through emphasis on product quality, user welfare, and sustainable development. Critiques practices that compromise system reliability and user experience.
FW Ratio: 60%
Observable Facts
Article emphasizes that 'Customers paying good money deserve to get support from a human when they need it,' prioritizing user access to capable help.
Content discusses product maintenance as majority of development effort: 'maintenance time comprises the majority of development time in software projects,' affecting product longevity and value.
Article advocates that developers 'be able to make assertions about the product; simple things such as what it does, how it does it, what they can expect to work,' protecting user expectations.
Inferences
Emphasis on customer support and product understanding reflects commitment to maintaining adequate standard of product quality and user welfare.
Advocacy for maintainability and predictability reflects concern for long-term product value and user satisfaction.
Content advocates for duties and responsibilities alongside rights; repeatedly emphasizes programmer accountability, peer responsibility, and community obligation.
FW Ratio: 60%
Observable Facts
Article states: 'if I am on the line for production up-time (and I am), then I am personally responsible for every single line of code that could affect that. It does not matter if I used an LLM to help or not; I am responsible for my contributions. And you are responsible for yours.'
Content emphasizes duty to peer review: 'Good software development practice demands that we peer review every line of code before shipping it.'
Article asserts: 'I will not be happy' and describes impact on others when developers do not uphold review standards, expressing community accountability.
Inferences
Strong emphasis on personal and community responsibility reflects commitment to duties alongside discussion of working conditions and rights.
Accountability for code impact on systems and colleagues reflects principle that rights entail corresponding responsibilities.
Content frames programming as intellectual work requiring human reasoning and collaboration, values dignity of human labor in software development, implicitly affirms human agency in work and decision-making.
FW Ratio: 60%
Observable Facts
Article opens by questioning whether lines of code should be celebrated as productivity measures.
Content quotes SICP's assertion that 'programs must be written for people to read, and only incidentally for machines to execute.'
Article emphasizes that programming is fundamentally about representing abstract ideas and managing complexity, not just producing code.
Inferences
Framing of programming as intellectual exploration rather than mere production aligns with human dignity principles underlying the Preamble.
Critique of reducing programmer work to quantifiable output metrics reflects concern for meaningful labor rather than dehumanizing metrics.
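The SICP point quoted in this section, that programs are written for people and only incidentally for machines, can be illustrated with a small hypothetical before/after (neither snippet is from the article; names and the averaging scenario are invented for illustration). Both functions compute the same result; only the second communicates intent to a reader.

```python
# Both functions compute the mean of the positive values in a list.
# The first is written for the machine; the second is written "for
# people to read" in the SICP sense. Names are illustrative only.

def f(x):
    return sum([i for i in x if i > 0]) / max(len([i for i in x if i > 0]), 1)

def mean_of_positive_values(values):
    positives = [v for v in values if v > 0]
    if not positives:
        return 0
    return sum(positives) / len(positives)

print(f([1, -2, 3]))                        # 2.0
print(mean_of_positive_values([1, -2, 3]))  # 2.0
```

The behavior is identical; the difference is entirely in how much the reader must reconstruct, which is the "managing complexity" concern the section describes.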
Content implicitly affirms fair hearing and due process in development practices; emphasizes transparent code review, explicit standards, and peer accountability. Critiques automation that bypasses human judgment in decisions affecting system behavior.
FW Ratio: 60%
Observable Facts
Article emphasizes the importance of code review where every line receives human examination before deployment.
Content states developers must be able to 'make assertions about the product; simple things such as what it does, how it does it, what they can expect to work.'
Article advocates for explicit consideration of whether to 'abandon everything we know' about programming rather than implicitly accepting LLM framing.
Inferences
Emphasis on peer review and transparent standards reflects commitment to fair process in determining code quality and fitness.
Call for explicit rather than implicit decision-making about LLM adoption reflects principle that significant technical choices deserve deliberate, transparent consideration.
Content affirms right to nationality and belonging to community; emphasizes programmer community norms and responsibility to collective software ecosystem. Critiques LLM practices that undermine community sustainability.
FW Ratio: 60%
Observable Facts
Article repeatedly references 'we' in software development context: 'We read each other's code continuously,' 'we must consider whether LLMs can help with those other components.'
Content emphasizes 'collaboration' as central to software development: 'We collaborate with one another on more than just getting code into source files.'
Article critiques development practices that isolate individual productivity from community maintenance: 'if I wake up to downtime because you did not worry about reviewing...'
Inferences
Framing of programming as collective practice reflects commitment to community membership and shared responsibility.
Critique of LLM-driven individualism suggests concern that accelerated personal productivity may erode community cohesion and shared standards.
Content advocates for social and international order supporting realization of rights; emphasizes software as infrastructure affecting multiple stakeholders, not just individual developers.
FW Ratio: 60%
Observable Facts
Article emphasizes customer relationships and dependencies: 'Customers pay good money for products, and the work of providing products and incorporating feedback is collaboration.'
Content describes developer responsibility extending beyond individual work: 'I am on the line for production up-time (and I am), then I am personally responsible for every single line of code.'
Article discusses broader ecosystem impact: 'LLM-driven solutions that never even needed to be a software project in the first place,' suggesting concern for societal impact of unnecessary software.
Inferences
Emphasis on customer dependency and support responsibilities reflects understanding that software affects broader stakeholder ecosystem.
Recognition of societal impact of software development decisions suggests commitment to considering broader social order in technical choices.
Content emphasizes right to meaningful work and security; critiques LLM productivity metrics that prioritize speed over correctness, stability, and sustainable practices. Argues that well-functioning software (security and reliability) requires human care.
FW Ratio: 60%
Observable Facts
Article states 'It is hard to mess things up with less source code; you must be careful and meticulous' in context of maintenance and reliability.
Content notes that customers deserve human support when systems fail, prioritizing product stability over rapid feature generation.
Article emphasizes personal responsibility for code affecting production uptime: 'if I wake up to downtime because you did not worry about reviewing what your LLM generated, I will not be happy.'
Inferences
Emphasis on code review, testing, and human oversight reflects concern for security and reliability as rights issues affecting users.
Recognition that cutting corners in software quality to increase velocity undermines system security and stability.
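The review-and-oversight discipline described in this section can be sketched with a minimal example (the helper function and its scenario are hypothetical, not drawn from the article): even a trivially small test forces a human to state what generated code is supposed to do before it ships, which is the accountability the article insists cannot be delegated to an LLM.

```python
# Minimal sketch of human oversight of generated code: a reviewer, not
# the code generator, writes down the expected behavior as executable
# assertions before shipping. The helper below is illustrative only.

def normalize_username(raw: str) -> str:
    """Trim surrounding whitespace and lowercase a username."""
    return raw.strip().lower()

def test_normalize_username():
    # The human reviewer decides what "correct" means here.
    assert normalize_username("  Alice ") == "alice"
    assert normalize_username("BOB") == "bob"
    assert normalize_username("") == ""

test_normalize_username()
print("review assertions passed")
```

The point is not the tooling but the sequencing: human-stated expectations precede deployment, regardless of who or what wrote the implementation.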
Content tangentially addresses freedom of movement and residence through critique of how LLM-driven development may create brittle, unmaintainable systems that constrain users' ability to migrate or modify software.
FW Ratio: 50%
Observable Facts
Article notes that high code volume and rapid iteration 'locks in too much too soon,' making software difficult to refactor or replace.
Inferences
Systems locked in by LLM-generated complexity may constrain user agency in system evolution and replacement.
Content advocates for peaceful assembly and association in software development communities. Emphasizes community standards, peer review, and collective responsibility against isolation of individual productivity.
FW Ratio: 60%
Observable Facts
Article repeatedly emphasizes code review as collective practice requiring peer engagement: 'Good software development practice demands that we peer review every line of code.'
Content thanks multiple test readers by name, demonstrating collaborative editorial process.
Article emphasizes 'collaboration' as fundamental to programming work, not optional or secondary.
Inferences
Emphasis on peer review and collective standards reflects commitment to association and community standards in technical work.
Acknowledgment of multiple contributors to article demonstrates commitment to collective knowledge production.
Content advocates for rest, leisure, and reasonable working hours through critique of burnout-inducing acceleration. Emphasizes deliberate pacing of work and sustainability.
FW Ratio: 60%
Observable Facts
Article critiques 'LLMs entice us with code too quickly. We are easily led,' suggesting concern about pressured pace of work.
Content emphasizes that 'It is easier to iterate at the design phase' and advocates for careful deliberation rather than rushed implementation.
Article discusses on-call burden: 'if I wake up to downtime...I will not be happy,' describing impact of rushed code on worker peace and rest.
Inferences
Critique of accelerated pace reflects concern for reasonable working hours and sustainable work practices.
Emphasis on deliberation and design phase reflects value for pace that allows human wellbeing.
Content implicitly addresses property rights; argues against uncritical appropriation of LLM outputs and advocates for using existing libraries and established patterns rather than generating custom code, respecting established intellectual property.
FW Ratio: 50%
Observable Facts
Article critiques tendency to 'forget to search for existing solutions' and observes 'distinct lack of sufficient push-back to use established packages and libraries.'
Inferences
Advocacy for reusing established libraries rather than generating custom implementations reflects respect for existing intellectual property and work.
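The library-reuse point above can be sketched with a small example (the CSV-parsing scenario is illustrative, not from the article): a hand-rolled parser mishandles edge cases that an established standard-library module already covers, which is the "push-back to use established packages" the article describes.

```python
import csv
import io

# Sketch of preferring established libraries over generated custom code:
# a naive string split breaks quoted fields apart, while the standard
# library's csv module handles them correctly. Data is illustrative.

line = 'id,"Doe, Jane",42'

naive = line.split(",")                       # splits inside the quotes
robust = next(csv.reader(io.StringIO(line)))  # respects CSV quoting

print(naive)   # ['id', '"Doe', ' Jane"', '42']
print(robust)  # ['id', 'Doe, Jane', '42']
```

The custom version is shorter to generate but wrong; the library call encodes edge-case knowledge that did not have to be rewritten or reviewed.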
Content tangentially addresses participation in governance through emphasis on explicit choice and deliberate consideration of technological direction. Advocates against implicit adoption of practices without community deliberation.
FW Ratio: 50%
Observable Facts
Article calls for explicit consideration of LLM adoption: 'You must answer it, though, and know that you are doing so.'
Inferences
Emphasis on conscious choice in technology adoption reflects principle that communities should govern their practices through deliberation rather than default.
Content does not directly address non-discrimination, though implicit in the critique is a concern that LLM productivity claims are applied uniformly across different development contexts without acknowledging structural differences.
FW Ratio: 50%
Observable Facts
Article mentions experience working in organizations that 'pass Joel's test' (a reference to best practices in software development environments).
Inferences
Implicit recognition that work environment quality varies and affects actual development practices, though not framed as non-discrimination issue.
Content raises concerns about loss of privacy in rapid LLM-driven development; customers exposed to systems developed without full human understanding. Concerns about personal data handling when developers cannot fully explain system behavior.
FW Ratio: 60%
Observable Facts
Article states that if code is written by LLM without human understanding, 'this support channel turns into chatbot support by another name,' compromising ability to explain system behavior to users.
Content notes that LLM code often contains 'misunderstanding dependencies and invariants,' which could impact how systems handle user data or system state.
Article emphasizes that developers must understand what code does to support customers: 'what it does, how it does it, what they can expect to work.'
Inferences
Lack of developer understanding of LLM-generated code undermines ability to maintain user privacy by explaining data flows and system behavior.
Emphasis on human comprehension of code implications suggests concern that unexplained algorithms could violate user privacy expectations.
No privacy policy or tracking disclosure observed on accessible pages.
Terms of Service
—
No terms of service accessible from navigation.
Identity & Mission
Mission
+0.05
Article 19
Domain appears designed for independent technical commentary and free expression of ideas about software development. This supports editorial freedom without institutional constraint.
Editorial Code
—
No editorial code or standards statement observed.
Ownership
—
Individual-authored blog; clear authorship model supports accountability.
Access & Distribution
Access Model
+0.10
Article 19 Article 26
Content appears freely accessible without paywall or registration, supporting open access to information and ideas.
Ad/Tracking
—
No advertising or tracking infrastructure visible on page.
Blog platform operates as free speech medium without apparent censorship or editorial gatekeeping. Content formatted to invite reader engagement and further research through linked references.
Blog platform provides direct outlet for dissenting technical opinion, with author maintaining independence from major AI companies and establishing freedom to publish critical analysis.
Blog platform provides free access to technical education and critical analysis. Author freely shares personal experience, reasoning, and references to foundational texts enabling reader learning.
Blog participates in technical community discourse, with author engaging in and inviting community participation through references and gratitude to reviewers.
Blog platform provides space for refuge from mainstream LLM advocacy narratives, allowing expression of dissenting technical views without institutional pressure.
Blog platform allows software professionals to articulate working conditions concerns and maintain voice in technological change affecting their labor.
Content frames programming as intellectual work requiring human reasoning and collaboration, values dignity of human labor in software development, implicitly affirms human agency in work and decision-making.
Content emphasizes equal reasoning capacity of humans and systems, argues against automation that undermines human judgment in software decisions. Advocates for human responsibility and deliberate choice in development practices.
Content does not directly address non-discrimination, though implicit in the critique is a concern that LLM productivity claims are applied uniformly across different development contexts without acknowledging structural differences.
Content emphasizes right to meaningful work and security; critiques LLM productivity metrics that prioritize speed over correctness, stability, and sustainable practices. Argues that well-functioning software (security and reliability) requires human care.
Content affirms right to recognition as a person; critiques LLM-centric narratives that diminish human programmer agency, reasoning, and recognition. Emphasizes that responsibility and credit remain with human practitioners.
Content advocates for equal treatment in responsibility and code review. Argues that all developers, regardless of tool use, must maintain equal standards. Emphasizes peer review as fundamental practice.
Content implicitly affirms fair hearing and due process in development practices; emphasizes transparent code review, explicit standards, and peer accountability. Critiques automation that bypasses human judgment in decisions affecting system behavior.
Content raises concerns about loss of privacy in rapid LLM-driven development; customers exposed to systems developed without full human understanding. Concerns about personal data handling when developers cannot fully explain system behavior.
Content tangentially addresses freedom of movement and residence through critique of how LLM-driven development may create brittle, unmaintainable systems that constrain users' ability to migrate or modify software.
Content implicitly addresses property rights; argues against uncritical appropriation of LLM outputs and advocates for using existing libraries and established patterns rather than generating custom code, respecting established intellectual property.
Content advocates for peaceful assembly and association in software development communities. Emphasizes community standards, peer review, and collective responsibility against isolation of individual productivity.
Content tangentially addresses participation in governance through emphasis on explicit choice and deliberate consideration of technological direction. Advocates against implicit adoption of practices without community deliberation.
Content advocates for rest, leisure, and reasonable working hours through critique of burnout-inducing acceleration. Emphasizes deliberate pacing of work and sustainability.
Content advocates for standard of living through emphasis on product quality, user welfare, and sustainable development. Critiques practices that compromise system reliability and user experience.
Content advocates for duties and responsibilities alongside rights; repeatedly emphasizes programmer accountability, peer responsibility, and community obligation.
Extensive use of quotes from Dijkstra, Ken Thompson, Bill Gates, and Linus Torvalds to support the claim that lines of code are a poor metric. Although the appendix acknowledges these as 'appeals to authority,' they are nonetheless used to support the main arguments.
Loaded Language
Phrases such as 'LLMs entice us with code too quickly. We are easily led' and 'false belief that lines of code mean anything' frame the criticized practices in negative terms.