It may make some people "dumber" in the same way that Google eliminated the need to commit information to long-term memory. And yet you still remember things, or at least you remember how to look them up (personally, I often find myself remembering what to search for to find an exact answer, rather than the answer itself). Search engines gave our memory room for more abstract, higher-level concepts. Similarly, LLMs will free our brains to synthesize higher levels of information.

Innovation is combining existing knowledge and finding new knowledge in the process. The role of GPT will be to reduce the cost of exploring the space of existing knowledge and to help evaluate those combinations.
Each time you try some new combination, it's like buying a lottery ticket. The more tickets you have, the better your cumulative odds of "winning".
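To make the lottery math concrete, here's a minimal sketch (the per-ticket success probability p is a made-up number for illustration; the real odds of any given combination panning out are unknowable):

```python
# Toy model: each new combination of ideas is an independent trial with
# a small, fixed chance of "winning". The probability p is an assumed
# value for illustration only.
p = 0.001  # chance that any single combination pans out

for n in (10, 100, 1000, 10000):
    # P(at least one win in n tries) = 1 - P(every try loses)
    cumulative = 1 - (1 - p) ** n
    print(f"{n:>6} tickets -> {cumulative:.1%} chance of at least one win")
```

With p fixed, the cumulative probability climbs from about 1% at 10 tickets to about 63% at 1,000. If GPT lowers the cost of each ticket, the same budget buys far more of them, which is exactly what reducing the cost of exploration means.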
I think innovation is clearly going to keep going. Even if Moore's law tops out and we can't get new chips in five years due to some calamity, we'll just keep optimizing and improving things with the tools and techniques we already have.
Then GPT produces a huge amount of new output, which reproduces human knowledge, mistakes, propaganda, and lies, and adds new mistakes of its own. And from that point on, you have a choice to make. Do you use GPT output to train GPTs, or not?

If yes, you risk a GPT echo chamber, where GPTs are increasingly trained on GPT output, and human-written text becomes a smaller and smaller fraction of the input. (And some of the GPT input becomes second- or third-generation GPT output.) I have a hard time seeing that increasing the correctness of the output.
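As a rough illustration of how fast that dilution could happen, here's a toy simulation (the assumption that GPT output grows the corpus by 50% per training generation while human output stays fixed is entirely made up; the geometric shrinkage is the only point):

```python
# Toy echo-chamber model: the pool of human-written text is held fixed,
# while each generation adds GPT-generated text equal to 50% of the
# current corpus. Both numbers are arbitrary assumptions.
human = 100.0    # units of human-written text (grows slowly; treat as fixed)
synthetic = 0.0  # units of GPT-generated text

for generation in range(1, 6):
    synthetic += 0.5 * (human + synthetic)  # fresh GPT output this round
    share = human / (human + synthetic)
    print(f"generation {generation}: human text is {share:.0%} of the corpus")
```

Under these made-up numbers the human share drops from 67% to about 13% in five generations, and later generations are partly trained on second- and third-hand GPT output, which is the echo chamber in miniature.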
If you exclude GPT output from the training data, though, how do you make that work? Do you freeze the training data to what existed in 2020, say? That cuts GPT off from all new knowledge. But if you don't do that, you have to be able to reliably filter out GPT-generated text. How are you going to do that?
It's a problem. I don't see a clear solution.
Potential counterargument 1: we've had calculators, we've had computers, and intellectual ability doesn't seem to have declined. Why single out AI?

Answer to the above: AI is far more than calculators and earlier computer automation. Within a few years, GPT may be able to do homework and assignments of every kind in a split second, if left unrestricted (it definitely can't do chemistry labs, for example). It's more powerful than its predecessors. In addition, we can't ignore the potentially unlimited intellectual products AI can churn out, such as anime and novels; whatever their quality, their sheer number poses an issue. Think of TikTok and similar platforms, but empowered by AI.

Potential counterargument 2: human society relies on only a few exceptional individuals; most people don't contribute to the advancement of science and technology.

Answer to the above: first, exceptional people die, so we need replacements, and we don't know how to cultivate them. Exceptional individuals can come from any background, so we need that diversity, and if one generation spends far less time on intellectual activities, it will affect the next. Second, we still need average people contributing to science and technology, because exceptional people can't do everything.