Those other topics are so depressing.
The consumer cannot be trusted, so the producers bear the legal (and social) obligations.
Another reason is purely financial: it now costs so much to compete that some late entrants want safety rails to level the playing field. Without being pejorative, things like rules on IPR going into the training set, and on access to the models, becoming enshrined in law or governance seem sensible.
In robotics, there has been concern about the inevitable deployment in warfare for decades. We have already seen existing non-autonomous devices move from a human deciding to pull the trigger, to semi- or fully autonomous systems which pre-line up the target and THEN ask for permission to pull the trigger. That leads naturally to questions about how the human-in-the-loop can know the target acquisition is appropriate; there are huge assumptions of "this is true and correct and timely" baked in.
Lastly, the bogus results: AI systems producing plausible but entirely fictitious outputs, which then get used in legal citations, in medical decision making, and in assessment, on the ill-founded belief that AI systems produce outputs which reflect reality.
As a consumer of IT systems, I care about this stuff. It would worry me if the AI experts DID NOT want to see some... boundaries on deployment.
Is this fear of Skynet? No. Nothing in AGI has to come to fruition for these risks to exist.