Ask HN: Is Programming as a Profession Cooked?

I have been mostly anti-AI. I experimented a bit with aider and free models, but my results were inconsistent and nothing to worry about.

However, I recently purchased the Max plan from Anthropic and have been vibing with Claude Code since then. And wow, the results are very good. With a good enough prompt and a planning step, it could generate full features in a project with 20k LOC, with very few modifications needed from me after review.

I have heard even more success stories from friends who give Claude 3-4 different features to develop in parallel.

On top of that, everyone seems to produce side projects at an astronomical rate, both among my friends and here on HN, where fully complete projects that would take months to develop seem to appear after a few hours with Claude Code.

So, my question is: is programming as a profession cooked? Are most of us going to be replaced with a “supervisor” who runs coding agents all day?

11 points | by fnoef 7 hours ago

14 comments

  • dakiol 1 hour ago
    The moment my manager or some other non-technical stakeholders start to use AI to push software to production and it works fine, that's the moment I'll know for sure I'm out of business. I don't think that's really gonna happen soon, not because of the tech, but because non-technical stakeholders need at least one more generation to change their mindset. It doesn't really matter if we reach AGI; your manager (or their manager) is too busy to be hands-on with anything and will most likely delegate to some IC. That IC is me.
  • codingdave 2 hours ago
    My ancient background in Lotus Notes dev offers a different perspective than most. Yep, it is a hated platform now, but back in its heyday, it had similar business impact to what we are seeing from LLMs now. Non-tech folk could whip out basic CRUD apps in a few minutes without knowing how to code. And they did. Those apps worked well for trivial functions, when not scaled beyond a small userbase. Yet when people got one right that solved a real problem in an effective way, the userbase started to scale, the warts in the apps became apparent, the original creator could not keep up with the changes needed, and those of us who were professionals would get called in to re-create the whole thing to a professional standard.

    I really see the same thing at our current level of AI. People are whipping out basic apps that work for small problems that are solved by small solutions. And it works. But without professionals intervening and correcting all the small problems along the way, it doesn't scale. Professional software engineers still need to exist to be sure that the solutions being created are scalable.

    Will we spend as much time typing out specific lines of code? Probably not. But will the jobs still be there? Absolutely. Perhaps even with more variety because we can focus more on the actual problems being solved. We will do more take-over work of apps that people got started but cannot finish. We'll refactor apps that got coded into corners, and spend more time talking directly to customers to understand what we are really trying to accomplish. It will be different work, but it will be there.

  • zelda420 31 minutes ago
    20k loc?!

    Most of the projects I’ve worked on in my career have been tens to hundreds of millions of lines of code!

  • drooby 6 hours ago
    It's been about a decade since many thought full self-driving cars were "just a couple of years away".

    Reality is that FSD was/is a "few decades away".

    Same for programming. We can take our hands off the steering wheel for longer stretches of time, this is true, but if you have production apps with real users who spend real money, then falling asleep at the wheel is far too risky.

    Programmers will become the guardians and sentinels of the codebase, and their programming knowledge and debugging skills will still be necessary when the AI corners itself into thorny situations, or is unable to properly test the product.

    The profession is changing, no doubt about it. But its obsolescence is probably decades away.

    • savorypiano 3 hours ago
      The last 1% in FSD is an asymptotic challenge. The last 1% in a CRUD app codebase just requires a few engineers.
    • fnoef 6 hours ago
      Self-driving cars are a bad example, because we are talking about a heavily regulated industry, with fatal consequences of malpractice, and a tool (the car) that is not easily available to the average person. I'm pretty sure that if the cost of cars were comparable to what a software engineer pays for Claude Code, governments would relax the laws, we as a society would accept a few (tens of) thousands of casualties, and self-driving cars would already be here.

      You talk about programmers becoming guardians, but I see two issues with this: (1) you don't need ten guardians, you need 1-2 who know your codebase; and (2) a "guardian" is someone who was a junior and turned into a senior; if juniors are no longer needed, in X years there will be no guardians to replace the existing ones.

      • arter45 4 hours ago
        Yes, it is an extreme example, but if your application(s) makes your company millions of dollars or euros, even if you are in a business that is not heavily regulated [1], mistakes or unavailability can cost a lot of money. Even if your company is not that big, mistakes in a crucial application everyone uses can cost time, money, even expose the company to legal trouble. "Self driving" coding in these situations is not ideal.

        [1] Even if your domain is not one traditionally considered heavily regulated (military, banking, ...), there is a surprising amount of "soft law" and "hard law" in everything from privacy to accounting and much more.

      • al_borland 1 hour ago
        Software in the transportation industry, medical field, and elsewhere is quite literally life and death.
      • gniv 6 hours ago
        A lot of the software produced in big corps is mission-critical. Self-driving cars are an extreme example but I think the same principle applies to banking, infrastructure, even things like maps, since they are used by billions.
  • bicx 3 hours ago
    Let go of any specific expectations of how you spend your days as an engineer. It’s likely going to be less and less coding by hand, a lot more code review, more taste-making (what code and features _should_ or _should not_ exist in a well-built system), and a lot more opportunity to deepen adjacent skills. Personally, as a startup-oriented engineer, I am expanding my impact beyond traditional engineering and into broader product design, deeper UX experimentation, and adaptive architecture. I’m expanding my skills a bit into what used to be only the domain of PMs and designers. I also expect that PMs and designers will begin expanding their boundaries as well.
  • rupinderdev 2 hours ago
    Programming is dramatically accelerated with AI, making code implementation cheap and fast. However, software engineering still depends on us for problem definition, system design, debugging, etc. The profession isn't disappearing; developers just spend less time typing and more time thinking and deciding.
  • ben_w 6 hours ago
    Coding, or software engineering? Because the answer to each is different.

    As engineers, we can be the supervisor, doing code review, managing things at a higher level. Instead of choosing which libraries do the work for us, we choose which LLM writes that code, we make sure the tests are all good, and we insist it fixes the failures it glosses over.

    As coders… well, right now it's only "mostly" taking over that, because there are still cases where the AI has no idea what it's doing, where it can* get the syntax right but the result is still useless. One example of this I've been trying recently is having an LLM do music generation**, both with "give me a python script to make a midi file" and Strudel (https://strudel.cc), and at this task it sucks much much worse than GPT-2 did with dungeon text adventures.
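
    For concreteness, the sort of script I'm prompting for looks roughly like this - a minimal sketch using the mido library (the prompt doesn't name a library, so that choice is illustrative):

        # Minimal sketch: write a one-track MIDI file playing middle C.
        # Assumes `pip install mido`; library choice is illustrative.
        from mido import Message, MidiFile, MidiTrack

        mid = MidiFile()          # default 480 ticks per beat
        track = MidiTrack()
        mid.tracks.append(track)

        track.append(Message('program_change', program=0, time=0))         # piano
        track.append(Message('note_on', note=60, velocity=64, time=0))     # middle C
        track.append(Message('note_off', note=60, velocity=64, time=480))  # release after one beat

        mid.save('example.mid')

    Getting an LLM to emit a file like this is the easy part; getting it to emit one that sounds like the music you asked for is where it falls apart.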

    I'm always on the lookout for the failure modes, because those failure modes are going to be my employment opportunities going forwards.

    Right now, if you're a coder who knows something else besides just coding, I think you can do useful work with the intersection that a lot of other coders without that side-interest would fail at even with LLM assistance. On the other hand, if your only side-interests are other forms of code, e.g. you want to make a game engine and a DSL for that game engine, but you're not fussed about writing any games with either, then you probably won't do well.

    * "can", not "will always" like we're used to in other domains.

    ** "why not use Suno?" I imagine you asking. Where Suno works well I like it, but it also has limits. Ask it for something outside its training domain… I've tried getting a 90 second sequence of animal noises with no instruments, it made something 140 seconds long consisting of only instruments and no animal noises.

    • arter45 4 hours ago
      >I'm always on the lookout for the failure modes, because those failure modes are going to be my employment opportunities going forwards.

      Exactly! I don't have a lot of experience with coding via LLMs, but lately I've been dabbling with that outside of my job precisely to find these failure modes... and they actually exist :)

  • mmphosis 2 hours ago
    Programming was "cooked" before the current Artificial Inference hype. I am not saying reducing the tedium is not beneficial. Where is the innovation, attribution and intelligence?
  • aristofun 6 hours ago
    Programming as in “monkey typing the same boilerplate crap over and over again”, or punching holes in cards if you prefer a 60-year-old analogy - yes.

    Programming as in “software engineering” - no. Because it isn’t about choosing the next most probable word or pixel. At all.

    • arter45 5 hours ago
      Exactly. Also, you need someone with actual knowledge of both the domain/environment (including regulations) and its implications. You could keep asking your favorite LLM "what if...?" and maybe it will get that right every time, but someone has to come up with those questions.
  • everlier 6 hours ago
    Was it cooked when they invented high-level languages and presumably every clerk got the ability to write software? Surely it eliminated all need for people specialising in writing it, since it became so simple.

    There's always a market for things that people are too lazy to do on their own.

    • fnoef 6 hours ago
      Never in my career have I seen coding become so accessible. You used to need to know the language in order to write code; now you just need to know English.
  • gniv 6 hours ago
    I was reminded of "software is eating the world", which was basically the idea that everything needs software. It's still true. In that light, programming is not cooked. We can do things much faster, yes, but there are so many things to do that it will still take a ton of (let's call them) qualified people to build them all.
  • sigbottle 3 hours ago
    There are two things to separate here.

    One is the practical and societal consequences, iteratively, over the next few decades. Fine, this is an important discussion. If this is what you're discussing, I have no objections - automation taking a significant portion of jobs, including software engineering, is a huge worry.

    The other thing is this almost schadenfreude of intelligence. The argument goes something like, if AGI is a superset of all our intellectual, physical, and mental capabilities, what point is there of humans? Not from an economic perspective, but literally, a "why do humans exist" perspective? It would be "rational" to defer all of your thinking to a hyperintelligent AGI. Obviously.

    The latter sentiment I see a decent bit on Hacker News. You see it encoded in psychoanalytic comments like, "Humans have had the special privilege of being intelligent for so long that they can't fathom that something else is more intelligent than them."

    For me, the only actionable conclusion I can see from a philosophy like this is to Lie Down and Rot. You are not allowed to use your thinking, because a rational superagent has simply thought about it more objectively and harder than you.

    I don't know. That kind of thinking - whether encountered intuitively when I was in my teens, or while learning about government and ethics (Rational Utopianism, etc.) - has always ticked me off. Incidentally, I've disliked every single person who's thought that way unequivocally.

    Of course, if you phrase it like this, you'll get called irrational and quickly get compared to not-so-nice things. I don't care. Compare me all you want to unsavory figures; this kind of psychoanalytic gaslighting statement is never conducive to "good human living".

    Don't care if the rebuttal analogy is "well, you're a toddler throwing a tantrum, while the AGI simply moves on". You can't let ideologies like the second get to you.

  • AlexeyBrin 6 hours ago
    The honest answer is that nobody really knows.
    • aristofun 6 hours ago
      The real answer is that everybody can see it didn't happen and can't happen with current LLMs.
  • dubesar 4 hours ago
    Not really. Complex stuff still requires a human in the loop. Also, yes, the number of folks required in the industry will go down for sure. One thing to note is that folks who are too dependent on AI might face consequences when things fail in production (this happened to me).