HACKER Q&A
📣 amichail

Does ChatGPT's success reflect poorly on humanity?


If a statistical knowledge-based system can do this well on so many topics, then maybe much of what humanity works on and thinks about doesn't require much intelligence?


  👤 RicoElectrico Accepted Answer ✓
If anything, I'm surprised it improved on GPT-3 and Codex (Copilot) so quickly. Am I surprised that it achieved what it did? Not really. After all, 50% of people have an IQ below 100, and professionals in many fields just get by with pattern matching. It's easy to perplex an average academic teacher or a doctor with an atypical question or situation, even if the answer follows from first principles.

👤 inphovore
Or this is the distillation and resynthesis of available humanity.

This isn't statistical word salad; it's statistics applied in inter-relational contexts.

This mulching of statistical knowledge drawn from the entire training set makes for an impressive tool.

We shouldn't take it too seriously as a judgment on humanity, though.

The way it should be taken seriously is as a tool for directed work (fix my code, manage my digital resources, perform remote work).

Good luck!


👤 PaulHoule
It reflects negatively on humanity that some people read ChatGPT's wrong answers and can't seem to tell they are wrong.

I see it as an A.I. safety problem in that its hypnotic abilities could be used to:

(1) promote bogus crypto tokens,

(2) scam people on dating sites,

(3) outperform anybody who got bullied in school on the Turing test, and

(4) write replies to Elon Musk’s tweets that really make him squee.