Meanwhile, the most influential people in tech, including Nvidia's CEO, Sam Altman, Hinton, Musk, and many others, believe that AI will very soon be able to do everything a coder can do today. Why does it matter whether it's in 2 years or 5 years? Either way it's much earlier than your retirement date. Nobody is planning for this.
I believe this is a case of "normalcy bias" where crowds refuse to see reality because it's too disturbing.
They all have a serious financial interest in that being the case, so that's entirely unsurprising and not terribly persuasive.
> I believe this is a case of "normalcy bias" where crowds refuse to see reality because it's too disturbing.
I believe that nobody can actually know how all this will play out right now, so what you're seeing isn't one group denying reality and another group accepting it. You're seeing two groups speculating about the future and expressing different opinions.
One, the FAANGs have been captured by MBA types, not computer scientists, so they do not have the background to carefully gauge what is and is not technically possible. Given that the C-suite is invested, are you, as a middle manager, going to speak up? When you have people, even Musk, claiming that X will be possible in Y years, I discount it. Even Hinton isn't close to the tools here.
Two, there are areas where these tools seem to have genuine uses. Boilerplate email generators, musicians, graphic designers, and visual effects people should watch their backs. Professors who merely assign problems from a textbook other than the assigned one are likely also in trouble. Maybe things like logic programming or unit tests, not sure, but those seem harder to mess up.
Three, we are seeing what happens when a statistical engine, not a logic engine, runs amok. If you add irrelevant information or change the order of elements, AFAIK, these tools cannot correctly incorporate the change (a concrete sketch of this failure mode follows at the end of this comment). So I think that their usefulness as a teaching tool is also overstated. If we ever get an engine that can explain its choices, well, that is also a difference.
Lastly, tech is really looking for a genuinely transformational technology. They arguably haven't had a real hit since cloud computing. Maybe since the iPhone. Their last few attempts have run into the difficulty that the universe may be harder to model inside a silicon box than is worth it (self-driving cars, cryptocurrency, video game streaming), and if they have to go from never-ending growth companies to large S&P 500 companies that have to compete... well... things will be different. Especially compensation for medium-talent software engineers.
However: DeepMind seems like it is 50 years in the future on all of this, so if someone there says I'm wrong about any of this, listen to them and not me.
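To make the irrelevant-information point in Three concrete, here is a minimal sketch of the kind of distractor probe used in robustness studies: ask the same arithmetic question with and without a logically irrelevant detail and compare the answers. The model name and the use of the `openai` Python client are illustrative assumptions, not claims about any particular product.

```python
# Sketch of an irrelevant-detail probe (assumes the `openai` package
# and an OPENAI_API_KEY in the environment; model name is illustrative).
from openai import OpenAI

client = OpenAI()

BASE = ("Liam picked 44 kiwis on Friday and 58 kiwis on Saturday. "
        "How many kiwis did Liam pick in total?")

# Same question plus a logically irrelevant clause; the correct
# answer (102) is unchanged, so a logic engine would be unaffected.
DISTRACTED = ("Liam picked 44 kiwis on Friday and 58 kiwis on Saturday, "
              "five of which were a bit smaller than average. "
              "How many kiwis did Liam pick in total?")

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice, purely for illustration
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # reduce sampling noise between runs
    )
    return resp.choices[0].message.content

for label, prompt in [("base", BASE), ("distracted", DISTRACTED)]:
    print(f"{label}: {ask(prompt)}")
```

If the two answers diverge, that's the statistical-engine behavior described above; a system doing actual logical inference would simply ignore the extra clause.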
"AI can do X" for all X is an unrealistic hope.
What's absent is anything remotely like inductive reasoning and thought.
The others I cannot speak to. Many of them are charlatans who exploit what they do not actually understand for speculative gain.
So, two people with a major vested interest, someone who has a past record of making ludicrously over-optimistic claims about this stuff (yes, Geoffrey, we still need radiologists), and, well, do I really need to address Musk?
I mean, if you're going to make an argument from authority, you can probably do better than this.
People have been claiming that we won’t need programmers anymore any day now for, at this point, about 65 years (entertainingly, this started with COBOL, the theory being that managers could just use COBOL to tell the computer what to do).
In other words, consider your own authority bias before worrying too much about the normalcy bias of others, especially when there have been so-called AI winters before and may well be again.
https://www.reddit.com/r/cscareerquestions/s/dOLxoZCgjl
Denial
If you’re asking it to do something novel, good luck.
The kinds of things AI can do were already offshored long ago. So nope, not worried.