If a company invented a self-driving car that killed 1,000 people a year, it would never be allowed on the street. Even 100 a year seems high. But it would actually save tens of thousands of lives.
Why are we so much stricter on self-driving car technology than on human drivers? Why can't we simply choose the option that saves more lives?
When someone does harm with their car, they hold civil (and sometimes criminal) responsibility for their own acts, and they usually cannot do harm more than once in a short timeframe.
In the case of an autonomous AI, the company making the vehicle software will be liable, and the same problem is very likely to show up again within a short time period.
That puts companies of this kind technically on the verge of bankruptcy, because they are a prime target for class-action lawsuits.
Part of the reason is probably that we're still being given captchas asking us to identify traffic lights, buses, and zebra crossings.
It could also be that it's killing people because of a specific bug. A car might be blind to, say, someone wearing a black and white shirt, or maybe to a green car, and 95% of the deaths could come from that.