HACKER Q&A
📣 toxinu

Are there software engineering areas that are safe from LLM invasion?


Are there any software engineering areas that are safe from companies forcing you to use AI editors to work? Like low-level architecture, electronics, crypto, AI, etc.

Maybe other related or adjacent areas like SRE. How is SRE these days? Can you still work the way you want to work? Are you being forced to switch as well?


  👤 al_borland Accepted Answer ✓
From what I’ve seen, LLMs are good at making stuff that has already been made and posted to GitHub a thousand times before. At my job we’re constantly asked to do things that really haven’t been done before, at least not by people sharing source code, so the LLMs suck at most of it.

LLMs make for great tech demos, but when it comes to writing code for production that actually does something new and useful, it hasn’t impressed me at all.


👤 fiftyacorn
Legacy systems - there are legacy systems that are like houses of cards, where you have to move forward very carefully. These areas might have older code/languages, so the LLM won't have as much training data to learn from.

Businesses often rely on these systems - and they rely on the processes around them for protection, so they are reluctant to adopt AI.


👤 austin-cheney
* Consulting. Businesses are so fond of repeating mistakes with great dedication that sometimes it takes outside help to steer the ship right, to the great animosity of the people writing code.

* Accessibility. Accessibility isn't a huge challenge unless you're in a business with a pattern of largely ignoring it. Then it can be a huge challenge to fix. AI won't be enough, and it will likely require outside help.

* Speed. If you want faster-executing software, you need to measure things. AI will be learning from existing code that likely wasn't well measured.
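
To illustrate the measurement point: a minimal sketch (not from the original post) of comparing two implementations with Python's stdlib `timeit`, rather than trusting intuition or generated code about which is faster:

```python
import timeit

# Two candidate implementations of the same task: summing squares.
def loop_version(n=10_000):
    total = 0
    for i in range(n):
        total += i * i
    return total

def builtin_version(n=10_000):
    return sum(i * i for i in range(n))

# Measure on your own workload before deciding which is "faster" --
# an LLM trained on unmeasured code can only guess.
loop_time = timeit.timeit(loop_version, number=200)
builtin_time = timeit.timeit(builtin_version, number=200)
print(f"loop:    {loop_time:.4f}s")
print(f"builtin: {builtin_time:.4f}s")
```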