The thing is, as far as I can tell, most cases of generative models reproducing something 1:1 involve content that very likely occurred many times in the training data, like stock images or passages from well-known newspapers.
Technically, image generation models are already trained as denoisers, so you can't effectively thwart them without compromising the perceptual quality of the protected media. The "protection" schemes presented so far rely on quirks of particular architectures that aren't guaranteed to carry over to newer releases.
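To make that concrete, here's a rough sketch of the usual counter-move: run the "protected" image back through a diffusion model's own denoiser (img2img at low strength), which tends to wash out the adversarial perturbation while keeping the image mostly intact. The checkpoint, strength, and prompt below are illustrative assumptions, not any particular tool's recipe:

    # Sketch: "purify" a perturbed image by letting a diffusion model's denoiser
    # re-synthesize only the fine, high-frequency detail where the perturbation lives.
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # example checkpoint; any img2img-capable model works
        torch_dtype=torch.float16,
    ).to("cuda")

    protected = Image.open("protected.png").convert("RGB").resize((512, 512))

    purified = pipe(
        prompt="a photo",                   # weak, generic prompt; hypothetical choice
        image=protected,
        strength=0.1,                       # low strength = little noise added, coarse content preserved
        guidance_scale=1.0,
    ).images[0]

    purified.save("purified.png")

Whether the result is "good enough" depends on how much perceptual quality the protection scheme was willing to spend in the first place, which is exactly the trade-off mentioned above.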
I find it interesting that seemingly the same people willing to embrace the "when buying isn't owning, copying isn't stealing" approach to intellectual property are worried about protecting their IP from AI.
Right now "AI" is pretty much "Big Tech / Big Media" bad guys, sure. Anyone suspect that's the way it stays? "Model training is too expensive" so no one will have home built or shared, sharded, federated public model building?
Certainly not, if there's no legal way to train without massive involvement in the universe of IP law.
Will "publication" come to mean more than "performance" and include some compulsory licensing scheme? Will we need to pin flags to our lapels so that everyone can know the legal status of our every utterance?