HN Discussion
20 top-level comments
Writing code has been cheap for a while now. Writing good software is still expensive. It's going to take everybody a while to figure that out (just like with outsourcing).

---

I don't agree that the code is cheap. It doesn't require a pipeline of people to be trained, and that is huge, but it's not cheap. Tokens are expensive. We don't know what the actual cost is yet. We have startups, who aren't turning a profit, buying up all the capacity of the supply chain. There are so many impacts here that we don't have the data on.

---

I'm going to shill my own writing here [1], but I think it addresses this post in a different way. Because we can now write code so much faster, everything downstream from that is just not ready for it. Right now we might have to slow down, but in the medium and long term we need to figure out how to build systems in a way that can keep up with this increased influx of code.

> The challenge is to develop new personal and organizational habits that respond to the affordances and opportunities of agentic engineering.

I don't think it's just the habits that need to change; it's everything. From how accountability works, to how code needs to be structured, to how languages should work. If we want to keep shipping at this speed, no stone can be left unturned.

[1]: https://lucumr.pocoo.org/2026/2/13/the-final-bottleneck/

---

Code generation is cheap in the same way talk is cheap. Every human can string words together, but there's a world of difference between words that raise $100M and words that get you slapped in the face. The raw material was always cheap. The skill is turning it into something useful. Agentic engineering is just the latest version of that: the new skill is mastering the craft of directing cheap inputs toward valuable outcomes.

---

"Code is cheap" is the same as saying "buying on credit is easy". Code is a liability, not an asset.

---

It's like the allegory of the retired consultant's $5,000 invoice (hitting the thing with a hammer: $5; knowing where to hit it: $4,995). Yeah, coding is cheaper now, but knowing what to code has always been the more expensive piece. I think AI will eventually be able to help there too, but it's not as far along on that vector yet.

---

> Code has always been expensive. Producing a few hundred lines of clean, tested code takes most software developers a full day or more. Many of our engineering habits, at both the macro and micro level, are built around this core constraint.
> ...
> Writing good code remains significantly more expensive

I think this is a bad argument. Code was expensive because you were trying to write the expensive good code in the first place. When you drop your standards, writing generated code is quick, easy, and cheap. Unless you're willing to change your standard, getting it back to "good code" is still an equivalent effort. There are better ways to frame the argument for agentic coding; this is just a really bad one to kick it off.

---

The cost of code never lived in the typing; it lived in the intent, the constraints, and the reasoning that shaped it. LLMs make the typing cheap, but they don't make the reasoning cheap. So the economics shift, but the bottleneck doesn't disappear.

---

I basically fully agree with this. I'm not sure how to handle the ramifications of it in my day-to-day work yet, but one habit I've been forming: even though the cost of writing code is immensely cheap, the cost of reviewing and validating that it works in certain code bases (like the millions-of-lines monorepo I work in at my job) is extremely high. I try to think through, and improve, our testability so that a few-hundred-line change that modifies the DB really can be a couple of hours of work.

Also, I do want to note that these little "here is how I see the world of SWE given current model capabilities and tooling" posts are MUCH appreciated, given how closely you follow the landscape. When a major hype wave is happening and I feel like I'm drowning on Twitter, I tend to wonder, "What would Simon say about this?"

---

Every modern (and not so modern) software development method hinges on one thing: requirements are not known, and even if known, they'll change over time. From this you get the goal of "good" code, which is easy-to-change code. Do current LLM-based agents generate code that is easy to change? My gut feeling is no at the moment. Until they do, I'd argue code generated by agents is only good for prototypes. Once you can ask your agent to change a feature and be 100% sure it won't break other features, then you won't care what the code looks like.

---

Here's an easy-to-understand example. I've been playing EvE Online, and it has an API with which you can query the game for information on its items and market (as well as several other unrelated things). It seems like a prime candidate for using AI to quickly generate code. You create the base project and give it the data structures and calls, and it quickly spits out a solution. Everything is great so far.

Then you want to implement some market trading, so you need to calculate opportunities from the market orders vs. their buy/sell prices vs. unit price vs. orders per day, etc. You add that to the AI spec and it easily creates a working solution for you. Unfortunately, once you run it, it takes about 24 hours to update, making it near worthless. The code it created was very cheap, but also extremely problematic. It made no consideration for future usage, so everything from the data layer to the frontend has issues that you're going to be fighting against. Sure, you can refine the prompts to tell it to start modifying code, but soon you're going to be sitting with more dead code than actual useful lines, and it will trip up along the way with so many more issues that you will have to fix. In the end, it turns out that code wasn't cheap at all, and you needed to spend just as much time as you would have with "expensive code". Even worse, the end product is still nearly as terrible as the starting product, so none of that investment gave any appreciable results.

---

Partially why I'm surprised there isn't more focus on coding harnesses that lean towards strong typing / testing / quasi-formal-verification paradigms. If you could funnel the output through something like that, the ability to generate vast amounts of code would be a lot more commercially useful.

---

> Sponsored by: Teleport — Secure, Govern, and Operate AI at Engineering Scale

Gee, what a surprise.

---

> Code has always been expensive. Producing a few hundred lines of clean, tested code takes most software developers a full day or more. Many of our engineering habits, at both the macro and micro level, are built around this core constraint.

Wasn't writing code always cheap? This reads more like a straw-man argument to me. What is clean code? Tested code? Should each execution path of a function be tested with every possible input? I think writing tests is important, but you can overdo it. Testing code on every possible platform of course takes much time and money. Another cost factor is organizational overhead: if adding a new feature needs to go through each layer of the organization signing off before a user can actually see it, that's of course more costly than the alternative of just pushing to production with all its faults. There is a big difference between short-term costs and long-term ones. I think LLMs reduce the short-term cost immensely but may increase the long-term costs. It will take some really long studies to show the actual impact.

---

Each line of code is a liability. I think it's funny that we're all measuring lines of code now and smiling. It was (and is) expensive because engineers are trying to manage the liability exposure of their employers. Agents give us a fire hose of tech debt that anyone can point at production. I don't think the tool itself is bad. But I do think people need to reconsider claims like this and be more careful about building systems where an unaccountable program can rewrite half your code base poorly and push it to production without any guard rails.

---

Not sure "code has always been expensive" is the right framing. Typing out a few hundred lines of code was never the real bottleneck. What was expensive was everything around it: making it correct, making it maintainable (often underestimated), coordinating across teams, and supporting it long term. You can also overshoot: testing every possible path, validating across every platform, or routing every change through layers of organizational approval can multiply costs quickly. At some point, process (not code) becomes the dominant expense.

What LLMs clearly reduce is the short-term cost of producing working code. That part is dramatically cheaper. The long-term effect is less clear. If we generate more code, faster, does that reduce cost or just increase the surface area we need to maintain, test, secure, and reason about later? Historically, most of software's cost has lived in maintenance and coordination, not in keystrokes. It will take real longitudinal data to see whether LLMs meaningfully change that, or just shift where the cost shows up.

---

The key is what we consider good code. Simon's list is excellent, but I'd push back on this point:

> it does only what's needed, in a way that both humans and machines can understand now and maintain in the future

We need to start thinking about what good code is for agents, not just for humans. For a lot of the code I'm writing, I'm not even "vibe coding" anymore. I'm having an agent vibe code for me, managing a bunch of other agents that do the actual coding. I don't really want to look at the code, just as I wouldn't want to look at the output of a C compiler the way my dad did in the late '80s. Over the last few decades we've evolved a taste for what good code looks like. I don't think that taste is fully transferable to the machines that are going to take over the actual writing and maintaining of the code. We probably want to optimize for them.

---

Code has never been expensive. Ridiculous asks are expensive. Not understanding the limitations of computer systems is expensive. The main problem is, and always will be, communication. Engineers in general are quick to say "that won't work as you described" because they can see the steps it takes to get there. Sales guys (CEOs) live in a completely different world, and they "hear" "I won't do that" from technical types. It's the ultimate impedance mismatch and the subject of countless seminars. AI writing code at least reduces the cost of the inevitable failures, but it doesn't solve the root problem. Successful businesses will continue to be those whose CTO/CEO relationship is a true partnership.

---

There's a lot of misconception about the intrinsic economic value of 'writing code' in these conversations. In software, all the economic value is in the information encoded in the code: the instructions on precisely what to do to deliver said value, typically painstakingly discovered over months or years of iteration. Which is exactly why people pay for it when you've done it well: they cannot and will not rediscover all of that for themselves.

Writing code, per se, is ultimately nothing more than mapping that information. How well that's done is a separate question from whether the information is good in the first place, but the information being good is always the dominant and deciding factor in whether the software has value. So obviously there is a lot of value in the mapping (that is, the writing of the code) being done well and, all else being equal, faster. But putting that cart before the horse and saying that speeding this up (to the extent it's even true, a very deep and separate question) has some driving impact on the economics of software is, I think, really not the right way to look at it. You don't get better information by being able to map the information more quickly. The quality of the information is entirely independent of the mapping, and if the information is the thing with the economic value, you see that faster mapping does not really change the equation much.

A clarifying example from a parallel universe: the amusing take, seen a lot lately, that because generative AI can produce things like slides, consultancies will be disrupted. This is naive precisely because the slides in and of themselves have no value separate from the thing clients are actually paying for: the thinking behind the content of the slides. Having the ability to produce slides faster gets you nothing without the thinking. So it is in software too.

---

I'd like to add an obvious point that is often overlooked. LLMs take on a huge portion of the work related to handling context, navigating documentation, and structuring thoughts. Today, it's incredibly easy to start and develop almost any project. In the past, it was just as easy to get overwhelmed by the idea of needing a two-year course in Python (or any other field) and end up doing nothing. In that sense, LLMs help people overcome the initial barrier, a strong emotional hurdle, and make it much easier to engage in the process from the very beginning.
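The EvE Online comment above mentions calculating trading opportunities from market orders vs. their buy/sell prices vs. orders per day. A minimal sketch of that kind of margin calculation, with entirely hypothetical names, prices, and volumes (this is not the commenter's code, nor the real EVE data model):

```python
# Hypothetical station-trading margin check: profit per unit is the gap
# between the highest standing buy order and the lowest standing sell order.

def best_margin(buy_prices, sell_prices, daily_volume):
    """Return (per_unit_spread, rough_daily_profit) for one item."""
    if not buy_prices or not sell_prices:
        return 0.0, 0.0
    spread = min(sell_prices) - max(buy_prices)
    if spread <= 0:
        return 0.0, 0.0  # no profitable gap between buyers and sellers
    return spread, spread * daily_volume

# Hypothetical order book for a single item (prices per unit).
spread, profit = best_margin(
    buy_prices=[100.0, 98.5],
    sell_prices=[110.0, 112.0],
    daily_volume=40,
)
print(spread, profit)  # 10.0 400.0
```

The arithmetic itself is trivial; as the commenter's 24-hours-to-update experience suggests, the expensive part is everything around it (data fetching, caching, and keeping the update loop fast).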