OpenAI, for instance, withholds GPT-3 from the public, presumably to protect us from something dangerous. In reality, they are protecting their bank account from the costs of running an inefficient model, and preventing people from playing with it and discovering it really isn't that good.
There is that rationalist cult that claims to be atheistic but really worships the Eschaton from Iron Sunrise. They talk about A.I. ethics as an issue to boost their own self-importance; they think they are saving the world the same way Scientologists do.
2. only specific types of people will be interested in those positions
3. disagreements are likely to arise given the nature of the work and the problems these companies work on, far more likely than in other positions
4. once someone is terminated, you're likely to hear about it because of the company's high profile, because it's about AI and ethics, and perhaps also because these jobs attract the kind of person who is more likely to go public about their own termination
It's a bunch of selection effects stacked on top of each other.
Failure could be determined easily: when Skynet starts eliminating humanity, you can be pretty sure your ethicists have failed, right? So as long as that is not happening, they're doing a good job?
The second thing was those religious fucks in Game of Thrones [spoiler alert] who start small, get invited to court, gain control, and end up getting blown up.