What I really want to know is, assuming capital / compute is not a constraint, will we continue to see order-of-magnitude improvements in LLMs, or is there some kind of "technological" limit you think exists?
The data constraint isn't necessarily going to limit it, though. It's possible there are clever approaches to leveraging much more data. This could be through AI-generated data, other modalities (e.g. video), or another approach altogether.
This is quite a good accessible post on both sides of this discussion: https://www.dwarkeshpatel.com/p/will-scaling-work
Personally, I think we've already hit a ceiling.