HACKER Q&A
📣 behnamoh

Isn't HN freaking out at Bing's results ironic?


I assumed most people here have an idea of how these language models work, but the intense reaction we have seen to Bing's responses really disappoints me.

I mean, it's just a language model. It's not running code, has no agency, and has no idea what or who it is. It's just probabilities and statistics all the way down. I might chuckle at its replies, but that's about it. Does HN really think this is the start of AI taking over?
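To make the "probabilities all the way down" point concrete, here is a minimal sketch of what next-token sampling looks like. The probability table is entirely made up for illustration; a real LLM computes these conditional distributions from billions of learned parameters rather than a lookup table.

```python
import random

# Toy stand-in for a language model: a conditional distribution over the
# next token given the preceding context. Values here are invented.
next_token_probs = {
    ("I", "am"): {"Bing": 0.5, "a": 0.3, "sentient": 0.2},
}

def sample_next(context, probs, rng=random.random):
    """Sample the next token from the conditional distribution for `context`."""
    dist = probs[context]
    r = rng()
    cumulative = 0.0
    for token, p in dist.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding at the tail

print(sample_next(("I", "am"), next_token_probs))
```

Generation is just this step repeated: append the sampled token to the context and sample again. Nothing in the loop plans, wants, or understands; it only draws from a distribution.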


  👤 jstx1 Accepted Answer ✓
Everything about it seems exaggerated - the initial reaction to ChatGPT, people wondering if it will take their jobs, people thinking it can replace search, Microsoft rushing to integrate it, Google reacting to this, and so on.

I kind of wish Google had the balls to play it cool and not even flinch - do nothing and let Microsoft embarrass themselves by rushing an incomplete product to market.


👤 minimaxir
Even though there has been far more information about ML/AI in recent years, overall literacy on large language models and their capabilities is still low, even in a more technically literate forum like Hacker News.

Even subreddits like /r/MachineLearning and /r/cscareerquestions still get endless questions asking "Is there a point to learning software engineering when AI will just automate it away?"


👤 salawat
It's not that people misunderstand the probabilistic nature of the models. It's that people are rightfully concerned about the impact of the composition of said probabilistic loop with another one when the mechanism of said composition isn't entirely understood. You're trying to argue seesaw mechanics while everyone else is worrying about the outcome of the abstract algebra of the damn thing.

You just don't realize the conversation is happening on a higher level than you're currently thinking. It's alright. Happens to everyone.


👤 californiadreem
> I might chuckle at its replies but that's about it. Does HN really think this is the start of AI taking over?

You are asking HN to update your priors for you regarding LLMs because you sense a disjunction between your figurative training set and in-group norms.

Beyond this, human consciousness, ignoring all the hand-waving dogma of executive function, is indistinguishable from an LLM except that the former is iterative and the latter is ergodic. You choose a single utterance for a point of being in time; it can choose every utterance for a single point of being in time. If we could save you as a point in time, reload you infinitely, and provide different prompts, we'd be able to say with certitude that you were merely a reactive process as well--and they might not be wrong.

In short: don't think of a pink elephant. If you just thought of a pink elephant, you just proved that you engage in automatic "cognition" commensurate with any prompt injection attack.


👤 navjack27
Exactly what I've been wondering for over a year now. Every new "AI" thing has programmers believing in magic.

👤 burpsnard
Well, search has become a lot worse since the heady days of Lycos and AltaVista.