HACKER Q&A
📣 amichail

Why do many AI experts care so much about AI safety?


Are there hidden agendas or do they really care that much about safety?


  👤 eesmith Accepted Answer ✓
Because it's more fun to talk about the fanciful worlds of AGI, where you can't be proven wrong, than about issues of mass copyright violations, mechanized wholesale discrimination, the horrific consumption of power and water, the underpaid overseas workers getting PTSD to train the AI systems, the production of microtargeted surveillance-based advertising, the ease of AI-based impersonation and artificial influence generation, and the centralization of computing and society into a handful of companies.

Those other topics are so depressing.


👤 overu589
Remember the 14-year-old who committed suicide over the chatbot? Or those people who will eat glue on pizza because some LLM told them to? Or those lawyers (and now medical professionals) who use notably incorrect generated content?

The consumer cannot be trusted, so the producers bear the legal (and social) obligations.


👤 ggm
I think it will be for different reasons. Some people, because their prior lived experience with expert systems (for instance) is of massive misapplication of outcomes to societal detriment: non-ML, pre-ML, non-AI and pre-AI methods which systematically refused women candidates for med school because the model was built on the bias inherent in the applicant pool before design. Automated government debt recovery systems which had bad heuristics and led to suicides by people accused of owing more than their lifetime earnings back to the government. So there is a sense that AI, if left unfettered, can do this? That implies a need for safety rails.

Another reason is purely financial: it now costs so much to compete that some late entrants want safety rails to level the playing field. Without being pejorative, things like rules on IPR going into the training set, and on access to the models, becoming enshrined in law or governance seem sensible.

In robotics, there has been concern about the inevitable deployment in warfare for decades. We have already seen existing non-autonomous devices move up from a human deciding to pull the trigger, to semi- or fully autonomous systems which pre-line-up the target and THEN ask for permission to pull the trigger, which leads naturally to questions about how the human-in-the-loop can know the target acquisition is appropriate. There are huge assumptions about "this is true and correct and timely" in this.

Lastly, the bogus results: the AI which produce plausible but entirely fictitious outcomes, which then get used in legal citations, in medical decision making, and in assessment, on the back of ill-founded beliefs that AI systems produce outputs which reflect reality.

As a consumer of IT systems, I care about this stuff. It would worry me if the AI experts DID NOT want to see some .. boundaries on deployment.

Is this fear of Skynet? No. Nothing has to come to fruition in AGI for these risks to exist.