HACKER Q&A
📣 quintenmoser

To what extent is “ethical AI” virtue signaling?


There is a growing movement toward "ethical" or "human-centered" AI and technology. The movement gained more traction in 2020 due to things such as The Social Dilemma on Netflix and the incident(s) with Timnit Gebru.

Will the bad decisions these systems make ever be addressed by anything that doesn't fall under the "inspire action" and "inform people" umbrella? Is responsible technology policy possible? Is this a social movement, or is there a real technical/business reason to push for this? Or is this mostly virtue signaling?


  👤 iujjkfjdkkdkf Accepted Answer ✓
There is definitely some validity to it, even if it just boils down to "understand how your ML models make their decisions and make sure you're cool with that". I think the challenge is that a lot of the "ethical AI" agenda seems to be getting set by people who understand neither ML nor ethics, and who end up focusing on superficial issues that are less about ML and more about "now that you've written your decision criteria down, you may not like what you find". Maybe that's what you're seeing: what could be a more objective science gets wrapped in a social movement, so it ends up looking a lot less legitimate.
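The "write your decision criteria down and see if you're cool with it" point can be made concrete with a quick audit of a model's outputs. Here's a toy sketch in Python: the data, group labels, and the demographic-parity metric are just one illustrative choice, not anything from the thread.

```python
# Toy audit: compare a model's approval rates across groups.
# The decisions list below is made-up data for illustration only.

def demographic_parity_gap(decisions):
    """decisions: list of (group, approved) pairs.
    Returns (gap, rates): the largest difference in approval rate
    between any two groups, plus the per-group rates themselves."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs: (sensitive_group, was_approved)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

gap, rates = demographic_parity_gap(decisions)
print(rates)           # {'A': 0.75, 'B': 0.25}
print(f"gap = {gap}")  # gap = 0.5
```

A big gap doesn't prove the model is wrong, but it surfaces exactly the kind of "written-down decision criteria" the answer is talking about, so someone can decide whether they're actually cool with it.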

👤 cryptResonator
The ethics is that same as it has always been. It is decisions that the people make regarding how to use or deploy the technology. This has zero to do with AI or ML. There is no bias in the math or programming languages used to do ML. Sometimes there is an attempt to say that "AI" is biased because of the "colonialization of math", or some race/tribal based arguments. I think this is very treacherous because by conflating social problems with math/engineering problems neither will be handled effectively. On the other hand, it is certainly possible to use math and/or ML technology to study and try to improve ethical behavior.