For image-based networks, it's pretty common to multiply the effective training data by cropping, rotating, and adding noise to pictures, then feeding the variants in multiple times. The larger the network, the less useful this is: it will quickly start to overfit and memorize the content of inputs it sees multiple times. And I'm not aware of anyone doing something like that for large language models.
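For context, here's a minimal sketch of the kind of augmentation pipeline being described. The library (torchvision), the crop size, rotation range, and noise level are all my assumptions, not something from the original comment:

```python
import torch
from torchvision import transforms

def add_gaussian_noise(img, std=0.05):
    # Add small per-pixel Gaussian noise to a float tensor in [0, 1].
    # (std=0.05 is an illustrative value, not a recommendation.)
    return (img + torch.randn_like(img) * std).clamp(0.0, 1.0)

# One common recipe: random crop + small rotation + noise.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),       # random crop, resized to a fixed size
    transforms.RandomRotation(degrees=15),   # small random rotation
    transforms.ToTensor(),                   # PIL image -> float tensor in [0, 1]
    transforms.Lambda(add_gaussian_noise),   # hypothetical noise step defined above
])

# Each epoch the same source image passes through `augment` and comes out
# slightly different, which is what multiplies the effective training data.
```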