What do you think?
There's a lot more to a programming job than just code.
During this demo, a man presented a prototype of the modern computer, complete with a mouse, a keyboard, and programs for drawing and editing text.
The kind of software engineer who just churns existing code for no reason might be replaced. Or exposed. Or perhaps he'll use GPT-4 to produce even more churn and get promoted.
Who knows. This industry has horrible practices and rewards some of the worst people.
An ML/LLM does not fundamentally understand the code. It has to be trained on contextual information about what the code is doing. There are already researchers experimenting with "poisoning" ML models.
For the industry, it just means that a lot of people who _cannot_ code (i.e., they lack the skill, training, or experience to do it) will "start coding" and launch products. Then, when it breaks, they'll have to rely on someone who actually understands why the edge-case bug happened.
ML/LLMs are awesome foot-cannons. They have their place, but like a firearm, their 90% use case will be to cause harm to other people (directly or indirectly).
Not a dev, but I assume there is a hard quality barrier, i.e., does the code work. I'm in digital marketing/SEO, and there is a much lower barrier for 'good enough' content. Couple this with a much higher drive for quantity of content across web content, social content, and email content. If you can produce enough of it cheaply, it works; look at spam emails as an example.
I am curious whether this will lead to a circular-logic training situation for AI in the future, where it begins to train itself on content previous versions have published. Or copyright craziness where it has to filter AI content out of its training materials.
Image generators seem to be on the downward slope of the hype curve. You see their creations in low-budget blogs or game prototypes. But the ones that were generated quickly stick out, and the ones produced through careful iteration are expensive unless your time and your GPUs' time are worthless.
LLMs could change civilization for the better or worse in any number of ways. They could also turn out to be the next big flop or just another tool if they persistently hover around the "80-90% good enough" threshold.
Personally, I still wouldn't bet a dollar on the question. Computers are interesting and useful, so I'll keep working with them and let the chips fall where they may.
I'm sure as heck not going to rush into spending $200k on a graduate degree in something AI...
I was just talking with a friend yesterday about how this tech is coming sooner or later, in a few years at most ... I didn't think it was coming less than 24 hours from then.
It is going to change all knowledge work a lot, but this is still a demo. It will take time for the best uses and integrations to get built.
Assuming OpenAI and Microsoft don't screw it up (and/or viable open-source competition emerges).
Exciting times!
From now on, there are going to be fewer engineers, and seniors are a cost center waiting to be reduced from a team of 5 down to 1 or 2.
It also makes me feel anxious about where this is all going. I'm having questions about whatever I thought was unique about me, or about humans over computers. And like all of us I have a lot of identity and sense of self-worth tied up in my relationship with computers. All of that feels mixed up and confused now. My confidence in how the future will unfold is very low. (I'm also unemployed, which in some ways feels auspicious at this moment, but also creates a high base-level anxiety.)
So far I feel like GPT rewards expertise. That is, your confidence and your breadth and depth of knowledge are accentuated by GPT, not diminished. But will that stay true? Things I think I understand keep changing every couple of months. And what will these changes mean for us collectively? I really don't know, and that's uncomfortable.