Is that technically possible?
Assuming one of the big issues in training an LLM is the computing cost, could this be an approach to prevent advanced LLMs from being created only by the biggest companies, which would encourage even more capital concentration and leave high-end AIs in their sole control?
Another problem is accessing the immense amount of data required to train the model, but building such datasets openly seems more feasible, if some actors have not already made them available.
But the recent 'leak' from Google indicated that LoRA (low-rank adaptation) of model weights is enabling a de facto distributed effort. Individuals can perform cheap low-rank fine-tuning updates on a model and combine their efforts additively, effectively performing full-rank updates. So maybe it will be possible!
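To make the additive-merging idea concrete, here is a minimal sketch in plain NumPy (not any particular LoRA library); the dimensions, ranks, and number of contributors are illustrative assumptions, not anything from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 4                      # weight dimension and per-contributor rank (assumed values)

W = rng.standard_normal((d, d))    # frozen base weight matrix

def low_rank_delta(rank: int) -> np.ndarray:
    """One contributor's cheap LoRA-style update: delta = B @ A, rank <= `rank`."""
    B = rng.standard_normal((d, rank))
    A = rng.standard_normal((rank, d))
    return B @ A

# Each contributor fine-tunes independently and only needs to share a small delta
# (the factors B and A are d*r numbers each, versus d*d for a full update).
deltas = [low_rank_delta(r) for _ in range(8)]

# Merging is just addition; the sum of eight rank-4 deltas can have rank up to 32,
# so the combined effort approaches a fuller-rank update of W.
W_merged = W + sum(deltas)

print("rank of a single delta:", np.linalg.matrix_rank(deltas[0]))
print("rank of the summed delta:", np.linalg.matrix_rank(sum(deltas)))
```

The point of the sketch is only that the merged rank grows with the number of independent contributions, which is why summing many cheap low-rank updates can approximate a full-rank update of the base weights.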