How would you assemble an ethics board for an AI project?
Depends on the project. For most projects I would personally seek out the pragmatists in the field, along with people who have a healthy skepticism of AI as a whole. Too many AI ethics boards today whinge and wring their hands over hypothetical scenarios that aren't researched and don't pose a demonstrable harm. I would much rather identify problems that are a poor fit for AI in the first place (e.g., risking human lives in a self-driving car) and avoid those than pay a bunch of yes-men to feed me lines about safety metrics and diversity targets.