There are few people working on the AI safety problems that are happening right now, the small present-day harms.
It seems hard to get good at many things without doing them in practice, no matter how much theory you know. Maybe AI safety is one of those things.
Also, path dependence seems like a big deal: the better we make the present, the better the future is likely to be.
Overall, seems hard to get good at things without contact with reality.
Someone could start an organization that does the boring, one-off work of helping anyone who has suffered harm as a result of AI (and probably any software, since AI will only amplify the effects of software):
* Deepfake assistance - helping people get their deepfakes off the web
* Hack recovery - helping people and businesses recover from cyber attacks
* Job retraining - helping anyone who lost their job as a result of automation
Hopefully the org would be a non-profit funded by some of the big AI players, and all of the recovery work would be published openly (e.g. as videos) where possible, so that it's somewhat scalable. Eventually you add automation, so you have software that helps all of these people more automatically.
Side note: When society faces challenges within our zone of frustration, we get better. When we face challenges that overwhelm us, we can get ruined. Many small challenges = good. A few big challenges = bad. So for people working on foundation models: consider doing more frequent but smaller releases, so that people and society can adapt in smaller batches rather than having to make fewer, bigger adaptations.
Tell me what you like and tell me what you think isn't going to work (and ideally, tell me an idea for solving that challenge, too).
As jstx said[0], this isn't exactly "new".
----------
We generally call this spreading FUD.
Misinformation was a thing long before AI (really, ML) took off in the last few years.
We struggle with identity, trust, and verifiability on the internet. ML might help bad actors, but it might help detect fakes just as much.