Google, OpenAI, Anthropic, Mistral, Cohere and many other companies are building their own LLMs that have started to challenge GPT-4, and I think this will continue even after GPT-5.
Also, given the huge cost of data collection, model training, compute and everything else involved, I think building LLMs from scratch is a bad deal for smaller startups. Better to build a platform that leverages AI to deliver value to users or businesses.
What is feasible is fine-tuning an existing LLM.
I suspect this will become popular. One can imagine open source platforms with built-in plugin registries for easily installable agents. Each agent may come with its own knowledge base and embedding vectors or keywords for routing queries to it. It may even be a quantized, fine-tuned LoRA adapter that can be loaded on the fly.
By focusing on "today" problems, they risk being leapfrogged by the next thing that solves more known and unknown problems and grants new abilities. That could spell instant death. The AI-builders are going to chase the fine-tune dollar anyway, and maybe we won't see any more jaw-dropping improvements from here on out, but we will see what happens.
For the record, enshittification should not be a thing for LLMs though...