HACKER Q&A
📣 arisAlexis

Why are so many people in tech in denial about AI?


I have regular conversations with fellow developers, and their mindset goes like this: X can't be done today, so AI won't be able to do X tomorrow either, for some biased reason Y.

Meanwhile, the most important people in tech, including Nvidia's CEO, Sam, Hinton, Musk, and many others, believe that AI will very soon be able to do everything a coder can do today. Why does it matter whether that's in 2 years or 5 years? It's much earlier than your retirement date. Nobody is planning for this.

I believe this is a case of "normalcy bias" where crowds refuse to see reality because it's too disturbing.


  👤 JohnFen Accepted Answer ✓
> the most important people in tech including Nvidia CEO, Sam, Hinton, Musk and many others believe that AI will very soon be able to do everything a coder can do today.

They all have a serious financial interest in that being the case, so that's entirely unsurprising and not terribly persuasive.

> I believe this is a case of "normalcy bias" where crowds refuse to see reality because it's too disturbing.

I believe that nobody can actually know how all this will play out right now, so what you're seeing isn't one group denying reality and another group accepting it. You're seeing two groups speculating about the future and expressing different opinions.


👤 willhslade
From my perspective, there are a few things hitting at the same time.

One, the FAANGs have been captured by MBA types, not computer scientists, so they do not have the background to carefully gauge what is and is not technically possible. Given that the C-suite is invested, are you, as a middle manager, going to speak up? When you have people, even Musk, claiming that X will be possible in Y years, I discount it. Even Hinton isn't close to the tools here.

Two, there are areas where these tools seem to have genuine uses. Boilerplate email generators, musicians, graphic designers, and visual effects people should watch their backs. Professors who merely assign problems from a textbook other than the assigned one are likely also in trouble. Maybe things like logic programming or unit tests; not sure, but those seem harder to mess up.

Three, we are seeing what happens when a statistical engine, not a logic engine, runs amok. If you add irrelevant information or change the order of elements, AFAIK these tools cannot incorporate the change. So I think their usefulness as a teaching tool is also overstated. If we ever get an engine that can explain its choices, well, that will be a different matter.

Lastly, tech is really looking for a genuinely transformational technology. They arguably haven't had a real hit since cloud computing, maybe since the iPhone. Their last few attempts have run into the difficulty that the universe may be harder to model inside a silicon box than is worth it (self-driving cars, cryptocurrency, video game streaming), and if they have to go from never-ending-growth companies to large S&P 500 companies that have to compete... well... things will be different. Especially compensation for medium-talent software engineers.

However: DeepMind seems like they are 50 years in the future on all of this, so if someone there says I'm wrong about any of this, listen to them and not me.


👤 DamonHD
Some of us have loooooong experience with AI (I've shipped product with AI features, and also have a 30-year-old AI degree) but also with marketing hype and outright tech-based fraud (e.g. NFTs). There is some useful stuff going on, but outrageous self-serving wishful thinking from people with a track record of difficulty with reality makes it very hard to believe any of their promises and predictions.

👤 h2odragon
Because we've seen the hype cycle run before: with AI, with VR, where unrealistic promises are made and widely believed for no real reason, and no one cares when they're never realized.

"AI can do X" for all X is an unrealistic hope.


👤 ggm
Hinton chooses his language extremely carefully. He avoids using words that imply more than what the present systems actually do. It's maths. Remarkable maths, but still maths.

What's absent is anything remotely like inductive reasoning and thought.

The others I cannot speak to. Many of them are charlatans who exploit what they do not actually understand for speculative gain.


👤 al2o3cr
Counter-question: why are so many people in tech blindly guzzling marketing fluff from professional bullshit-artists who have a financial incentive to lie?

👤 Eumenes
Most people in tech aren't in AI, not even adjacently. They're building/maintaining web apps and legacy systems, or corporate IT infrastructure, or Salesforce instances, or Wordpress sites. These fields lag behind the cool kids like OpenAI/Anthropic. They may be sold some AI solution, but they don't see it as a threat, since they've been sold off-the-shelf software for decades. It'll take time.

👤 rsynnott
> In the meanwhile the most important people in tech including Nvidia CEO, Sam, Hinton, Musk

So, two people with a major vested interest, someone who has a past record of making ludicrously over-optimistic claims about this stuff (yes, Geoffrey, we still need radiologists), and, well, do I really need to address Musk?

I mean, if you’re going to make an argument from authority, you can probably do better than this.

People have been claiming that we won’t need programmers anymore any day now for, at this point, about 65 years (entertainingly, this started with COBOL, the theory being that managers could just use COBOL to tell the computer what to do).


👤 yawpitch
I’m much more disturbed by the amount of faith purported NIs are willing to put in the predictions of other purported NIs about the capabilities of hypothetical so-called AIs, when actually extant so-called AIs evince nothing like those capabilities, than I am by the growth curve of so-called AIs.

In other words, consider your own authority bias before worrying too much about the normalcy bias of others, especially when there have been so-called AI winters before and may well be again.


👤 arisAlexis
Hilarious posterior after my post got flagged

https://www.reddit.com/r/cscareerquestions/s/dOLxoZCgjl

Denial


👤 QuiEgo
AI has to be trained - if you’re asking it to grind leetcode or take a standardized test or make the millionth version of a shop webpage, it will rock it.

If you’re asking it to do something novel, good luck.

The kinds of things AI can do were already offshored long ago. So nope, not worried.


👤 solardev
"It is difficult to get a man to understand something when his salary depends on his not understanding it." -Upton Sinclair (https://www.oxfordreference.com/display/10.1093/acref/978019...)