Genuine question - if traditional software makes a mistake, it's usually deterministic, debuggable, fixable, and blame can be assigned.
What's the deal with these autonomous AI agents? Say one is analyzing customs paperwork to schedule shipments from overseas, and it fails to let a shipment in because it misclassified the paperwork, or worse, lets it in when holding it on the shore under certain conditions leads to heavy financial penalties?
Who's responsible? The AI prompt automation engineer? The underlying platform? Or the company providing the model?
If the answer is that each output of such a model should be double-checked by a human going through all that paperwork, then what's the point of having the automation in the first place?
EDIT: typos
That is the underlying issue here. The liability for an erroneous AI output is the same as for errors committed by any other agent of the company: the company is on the hook, though it may have recourse against any third party who trained or operates the model. For domains such as law, where the penalty for hallucinations is extremely severe and, conversely, the penalty for missing things is likely to be losing the litigation, this means treating AI like any other LPO: a duly qualified attorney takes responsibility for the work, which means a bunch of associates doing varying levels of review.
Is there still a place for AI in all this? Absolutely. But as long as hallucinations are a basic feature of the architecture, you're going to spend as much human time as you otherwise would on anything of critical importance, and you'll need a lot of deterministic business rules to constrain the results of everything else.
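To make "deterministic business rules" concrete for the shipment example upthread, here is a minimal sketch; the class, codes, and thresholds are all hypothetical, not taken from any real customs system. The idea is that the model's classification is only acted on automatically when it clears hard-coded checks, and everything else is routed to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class ShipmentClassification:
    """Hypothetical output from a customs-paperwork classifier."""
    hs_code: str           # harmonized system code proposed by the model
    confidence: float      # model's self-reported confidence, 0..1
    declared_value: float  # value extracted from the paperwork

# Deterministic guardrails: anything the rules can't positively clear
# goes to a human instead of being acted on automatically.
VALID_HS_PREFIXES = {"8471", "8517", "9018"}  # illustrative allow-list
CONFIDENCE_FLOOR = 0.9
MAX_AUTO_VALUE = 50_000.0

def route(result: ShipmentClassification) -> str:
    """Return 'auto_approve' only when every deterministic check passes."""
    if result.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    if result.hs_code[:4] not in VALID_HS_PREFIXES:
        return "human_review"
    if result.declared_value > MAX_AUTO_VALUE:
        return "human_review"
    return "auto_approve"

# Example: a low-confidence classification never clears automatically.
print(route(ShipmentClassification("8471.30", 0.72, 12_000.0)))  # human_review
```

The point isn't that these particular rules are right; it's that the rules, not the model, are the last thing standing between the output and the action, so the failure modes stay debuggable.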
The company that decided to let ML models make critical decisions of that sort. Not whoever built the base model (if it's another party), or the engineers.
E.g. see the all-caps section here: https://opensource.org/license/mit
For commercial products, read the EULA.
If they all claim no responsibility, then it’s on you.
If you're allergic to peanuts, and they put a clear sign that says this contains peanuts, then you are responsible for eating the food with peanuts.
But when these middleman agents try to obscure the source, they leave out the part about unreliability, or simply don't read it. In those cases, they're most likely responsible.
In some cases, I see it like informed consent, where the thing could kill you but you were going to die anyway and the odds are better with the dangerous thing.
As a 60-year-old administrator whose greatest strength is Perl, I believe this is my best option for the future.
It won't be much different when a bridge falls. And yet it won't fall: with the power of 800 engineering teams and 0 engineering managers, we'll see bridges that stand until the sun itself implodes.
Meanwhile we need not worry about the natural disasters of the world. After all, a game of discus in the sun can be more fun than the clearest, most windless afternoon; its balance is delicate as fresh nougat.
Takes me back. I sip lemonade. All is ripe.