HACKER Q&A
📣 rethoughter

Will reliance on GPT result in a local maximum of human innovation?


Given that GPT is trained on a snapshot of human knowledge, and that humans increasingly use it to replace knowledge work, does that mean the velocity of new knowledge added to GPT will approach zero? I have always felt that GPT tends to err towards current best practices, and struggles when explicitly asked to innovate. Imagine if it had existed in the '90s and we all wrote elaborate OOP hierarchies and CORBA services because that was all the programming knowledge GPT was trained on. Of course, probably >90% of work contains no new ideas, but even so, many people come up with clever little hacks that can spread to others surprisingly well.


  👤 chatmasta Accepted Answer ✓
Did search engines result in a local maximum of human ability to organize information? LLMs are the next logical step, except instead of organizing information, they synthesize it. I expect this will lead to increased creative output from humans who will be unburdened of the task of synthesizing information, in the same way search engines unburdened us from the need to organize, recall and locate information.

It may make some people "dumber" in the same way that Google has eliminated the need to commit information to long-term memory. And yet you still remember things, or at least, you remember how to look them up (personally, I often find myself remembering what to search for to find an exact answer, rather than the answer itself). Search engines gave our memory space to remember more abstract, higher level concepts. Similarly, LLMs will free our brain to synthesize higher levels of information.


👤 mikewarot
Innovation <> Knowledge <> Wisdom

Innovation is creating combinations of existing knowledge and finding new knowledge in the process. The role of GPT will be to reduce the cost of exploring the space of existing knowledge and to help evaluate these combinations.

Each time you try some new combination, it's like buying a lottery ticket. The more tickets you have, the better your cumulative odds of "winning".
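The lottery analogy can be made concrete with a toy calculation. Assuming (purely for illustration) that each new combination is an independent trial with some small success probability p, the chance of at least one "win" in n tries is 1 - (1 - p)^n:

```python
# Toy model of the lottery analogy: each new combination of ideas is
# an independent trial with a small chance p of paying off.
# Probability of at least one "win" after n tries: 1 - (1 - p) ** n.

def odds_of_at_least_one_win(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Cheaper exploration means more tickets, and the cumulative odds climb:
for n in (10, 100, 1000):
    print(n, round(odds_of_at_least_one_win(0.001, n), 3))
```

Even with a 0.1% chance per attempt, a thousand attempts give roughly even odds of a hit, which is the point: anything that lowers the cost per attempt raises the cumulative odds.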

I think innovation is clearly going to keep on going. Even if Moore's law tops out and we can't get new chips in 5 years due to some calamity, we'll just keep optimizing and making things better with the tools and techniques we've already got.


👤 AnimalMuppet
I worry about GPT's training set becoming diluted. It's trained with a snapshot of human... not knowledge, exactly. Human text, which contains much knowledge but also some mistakes, some propaganda, and some lies.

Then GPT produces a huge amount of new output, which reproduces human knowledge, mistakes, propaganda, and lies, and adds new mistakes of its own. And from that point on, you have a choice to make. Do you use GPT output to train GPTs, or not?

If yes, you risk a GPT echo chamber, where GPTs are increasingly trained on GPT output, and human-written text becomes a smaller and smaller fraction of the input. (And some of the GPT input becomes second- or third- generation GPT output.) I have a hard time seeing that increasing the correctness of the output.

If you exclude GPT output from the training data, though, how do you make that work? Do you freeze the training data to what existed in 2020, say? That cuts GPT off from all new knowledge. But if you don't do that, you have to be able to reliably filter out GPT-generated text. How are you going to do that?
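The dilution worry can be sketched with a back-of-the-envelope model. All the numbers here are illustrative assumptions: suppose each training round adds a fixed amount of new human text but a larger amount of model-generated text, and we track what fraction of the corpus remains human-written:

```python
# Toy dilution model (all quantities are illustrative assumptions):
# each year the training corpus gains `human_new` units of human text
# and `gpt_new` units of model-generated text. Track the human fraction.

def human_fraction_over_time(human_start: float, human_new: float,
                             gpt_new: float, years: int) -> list[float]:
    human, total = human_start, human_start
    fractions = []
    for _ in range(years):
        human += human_new
        total += human_new + gpt_new
        fractions.append(human / total)
    return fractions

# Start with 100 units of human text; add 5 human + 20 generated per year.
print([round(f, 2) for f in human_fraction_over_time(100, 5, 20, 5)])
```

Under these made-up rates the human share falls steadily year over year, which is the echo-chamber concern in miniature: unless generated text can be filtered out, the human signal becomes an ever-smaller fraction of the input.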

It's a problem. I don't see a clear solution.


👤 markus_zhang
It might. What I can foresee is that children might be attracted to its immediate usefulness and simply stop making the effort to think through homework and such. And this may decrease the average intellectual abilities of a whole generation.

Potential counter-argument 1: We have had calculators, we have had computers, and intellectual ability does not seem to have declined. Why single out AI?

Answer to the above counter-argument: AI is far more than calculators and previous computer automation. I can see that, maybe in just a few years, GPT will be able to do homework and assignments of every kind in a split second, if unrestricted (it definitely cannot do chem labs, for example). It's more powerful than its grandfathers. In addition, we cannot ignore the potentially unlimited intellectual products AI can produce, such as anime and novels; whatever the quality, their sheer number poses an issue. Think about TikTok and other similar apps, but empowered by AI.

Potential counter-argument 2: human society only relies on a few exceptional individuals; most people don't contribute to the advancement of science and technology.

Answer to the above: first, exceptional people die, so we need replacements. We don't know how to grow them; exceptional individuals may come from any background, so we need diversity. If one generation spends much less time on intellectual activities, it will impact the next. Second, we still need average people contributing to science and tech, because exceptional people can't do everything.


👤 sudo_navendu
I think humans will be able to innovate much faster by building on top of the existing knowledge that tools like GPT give access to. It might look different from what we have experienced so far, but there is very little chance that humans stop innovating.

👤 satvikpendem
Why can't it be that we use it for boilerplate knowledge and that we will continue to innovate on our own? Or how about having it recombine knowledge itself based on the vast corpus of data it has already consumed?