Ask HN: Are there software engineering areas that are safe from LLMs invasion?

10 points by toxinu 2 days ago

Are there any software engineering areas that are safe from companies forcing you to use AI editors? Like low-level architectures, electronics, crypto, AI, etc.

Maybe other related or not-so-distant areas like SRE. How is SRE these days? Can you still work the way you want to? Are you being forced to switch as well?

al_borland a day ago

From what I’ve seen, LLMs are good at making stuff that has already been made and posted to GitHub a thousand times before. At my job we’re constantly asked to do things that really haven’t been done before, at least not by people sharing source code, so the LLMs suck at most of it.

LLMs make for great tech demos, but when it comes to writing production code that actually does something new and useful, they haven’t impressed me at all.

  • chistev a day ago

    Ain't nothing new under the sun.

    • al_borland a day ago

      That’s why I added the caveat about it being open source. I’m sure what I’m doing has been solved many times by many companies, but it’s not information they share publicly, just like my code isn’t shared publicly. There is also the issue of integrating with existing legacy systems, which may include bespoke internal tools.

      Maybe the thing has been done in general, but not in a way that’s useful for me. That’s why it looks good in tech demos. If I ask AI to write what I need, it will give me an answer, but it won’t actually work and integrate the way I need for production. The last time I tried, it gave me 70 lines of code; the real end result was thousands. The AI version would look cool in a demo, though.

austin-cheney a day ago

* Consulting. Businesses repeat their mistakes with such great dedication that it sometimes takes outside help to steer the ship right, to great animosity from the people writing code.

* Accessibility. Accessibility isn’t a huge challenge unless you’re in a business with a pattern of largely ignoring it. Then it can be a huge challenge to fix. AI won’t be enough, and it will likely require outside help.

* Speed. If you want faster-executing software, you need to measure things. AI will be learning from existing code that likely wasn’t well measured.
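The measurement point above can be sketched with Python's stdlib `timeit`; the two string-concatenation variants being compared are illustrative assumptions, not something from this thread:

```python
# Minimal benchmarking sketch: measure before optimizing.
# Both functions and the workload are hypothetical examples.
import timeit

def join_concat(items):
    # Idiomatic: single join over the whole list.
    return ",".join(items)

def loop_concat(items):
    # Naive: repeated string concatenation in a loop.
    out = ""
    for s in items:
        out += s + ","
    return out[:-1]

items = [str(i) for i in range(10_000)]
t_join = timeit.timeit(lambda: join_concat(items), number=100)
t_loop = timeit.timeit(lambda: loop_concat(items), number=100)
print(f"join: {t_join:.4f}s  loop: {t_loop:.4f}s")
```

The point is not which variant wins, but that the numbers come from your own measurement rather than from an LLM's prior about typical code.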

gkoos a day ago

There are quite a few, although LLMs are slowly creeping in:

1. Everything with less data to train on:
   - Compiler / language toolchain development
   - Specialized embedded robotics (industrial robotics, drones)
   - Scientific / high-performance computing

2. Low tolerance for LLM-induced errors:
   - Network protocols / telecom software
   - Medical software
   - Aerospace, automotive

3. Performance-critical code:
   - Game engine / graphics engine development (probably an area where we'll see them soon)
   - Kernels, drivers, microcontrollers

etc. Not all is lost yet.

cognix_dev a day ago

Many excellent LLMs are being created. I feel this era is similar to the early automotive era: we are currently in a period of engine-performance competition, competing on power and speed. However, I believe this era will eventually transition to the next phase.

  • flanbybleue69 21 hours ago

    I am good with the current power and speed. Let's jump straight to the smart era.

    Also, my main issue isn't really AI not being good enough. If a company is fine getting sh*t code then let's go full AI, but I love my job. I love solving issues, coding, working with new paradigms, trying solutions, failing, improving, etc. I don't want to be a prompt expert and be asked to review AI-generated code all day long.

    Of course, it is a very personal opinion, but I think it is still shared by a decent bunch of people.

muzani a day ago

Just get really good at something, in the top 10% where you would be writing books and disagreeing with reddit.

AI is predictive. Most people will fall to a comfort zone where AI tells them what to do. But you should become an expert and be one of the few who are telling it what to do.

  • flanbybleue69 21 hours ago

    Managers and CTOs don't care about you being an expert. They just push you to use what they saw 100 times on LinkedIn, like using Cursor to improve code delivery time by 60%.

    Every monthly CTO meeting is about them pushing software engineers to use Cursor.

sdotdev 21 hours ago

Embedded systems in infrastructure should be safe, as they not only need to be highly specific but are also safety-critical and dangerous to get wrong. But you never know.

  • gaws 2 hours ago

    Give LLMs five to ten more years, and they'll dominate embedded systems and other low-level programming.

fiftyacorn a day ago

Legacy systems - there are legacy systems that are like houses of cards, where you have to move forward very carefully. These areas might have code/languages that are older, so the LLM won't have as much training data to learn from.

Businesses often rely on these systems, and they rely on processes to protect them, so they are reluctant to adopt AI.

drrob a day ago

Defence. We don't use any LLMs, and couldn't even if we wanted to.

To be fair the code they produce is dogshit, so it isn't a problem.

  • flanbybleue69 21 hours ago

    That might be a good candidate, right.

    I am baffled by how every company is jumping into LLMs without considering anything about their own privacy, when 10 years ago just using GitHub with a private repository could have been an issue.

    > To be fair the code they produce is dogshit, so it isn't a problem.

    That's not a problem for managers and CTOs who are just being brainwashed by marketing and LinkedIn posts saying all their engineers should use Cursor.

    • drrob 8 hours ago

      True. There's a bubble that will burst with LLM stuff, I am sure of it.

absaroui a day ago

Fintech, banking...

  • muzani a day ago

    Plenty of LLMs here. Probably more than in other sectors, and Stripe is the poster boy for OpenAI.

    Fintech has a ton of regulations; everything is layered over and over with tests. There's a form of extreme engineering where fintech runs tests in production, meaning the systems in place are robust enough to handle bad code and junk data.