On one side, most LLMs seem superhuman on easy, entry-level programming tasks but increasingly falter as complexity rises. This suggests that those best shielded against AI will be the niche specialists capable of solving very hard, specific problems.
On the other hand, AI tends to perform better on narrowly specified problems, while the more general, abstract problems humans excel at remain harder for it to crack. This suggests a generalist approach will offer humans the greater competitive advantage.
Which makes more sense? And of course, if AGI were to arrive in the meantime, either aim would be irrelevant.
The Mediocre Human Glue: generalists in the age of LLMs https://indiscipline.github.io/post/the-mediocre-human-glue/
As future LLMs train on piles of content generated by previous iterations, their abilities will revert to the mean, making them no more competent than a mediocre human.
Instead of worrying about what to “be” or what might happen with AI in the future, concentrate on developing valuable skills you can put to use now. Experience and domain knowledge will continue to count for something.
Both make sense. Which path do you want to pursue?