(2) OpenAI and its principals (particularly Altman) deliberately and excessively promote AI-safety-related fears, yet OpenAI is increasingly opaque about what it is actually doing about AI safety (notionally, and ironically, as a safety measure). This angers two overlapping groups: everyone who thinks the specific AI-safety concerns OpenAI is pushing are bunk and are detrimental to useful progress and/or distracting from the real and significant concerns that should be addressed with AI; and everyone who has AI-safety concerns (whether the same as OpenAI’s or different) and sees transparency and verifiability as essential to confidence that those concerns are being addressed. It particularly angers people in the overlap, like those centered on “AI ethics”.
(3) Generalized anger at the actual and anticipated near-future disruptions from AI, with OpenAI as a highly visible actor profiting from those disruptions.