The AI replied to the effect that:
“Yes, so-and-so had a scandal. They even had complaints filed with the Better Business Bureau and the California Bureau of Consumer Protection.”
The first part was accurate and sourced. The second part was invented out of whole cloth. The AI admitted as much when I pressed it.
But isn’t that a false statement of fact, and therefore legally actionable? And it’s Microsoft saying it.
People thought the same about general internet postings, and then Section 230 saved the companies. Governments seem a lot less likely to “help” the companies today, though.
It’ll likely be decided the way copyright is being decided: with messy, impactful trials. If the prompter owns the copyright, then you, the searcher, may end up responsible for the AI’s statements (probably not, though).
Microsoft is a huge target and filing a suit while this has a lot of buzz is a big PR opportunity, so those court cases will start coming soon enough.
Clearly, Microsoft felt confident they could navigate whatever legal headaches arise, and must believe that seizing this opportunity for Bing to outmaneuver Google is worth whatever that might cost. What we can be sure of is that they didn’t go in blind.
// Prints one random letter (or space) per line, forever — an infinite-monkey generator.
while (1) console.log('abcdefghijklmnopqrstuvwxyz '[Math.floor(Math.random() * 27)]);
Given enough time, it will produce all the same defamatory statements Bing's algorithm does. Am I liable for those because I gave you this tool?
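For anyone who wants to run the gag without an infinite loop, here's a finite sketch of the same idea (purely illustrative; `monkeyText` is a made-up helper name):

```javascript
// Finite take on the infinite-monkey one-liner: sample n random
// characters from the 26 lowercase letters plus a space.
const ALPHABET = 'abcdefghijklmnopqrstuvwxyz ';

function monkeyText(n) {
  let out = '';
  for (let i = 0; i < n; i++) {
    // Math.random() < 1, so the index is always within the alphabet.
    out += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return out;
}

console.log(monkeyText(80)); // 80 characters of random gibberish
```

Every string over that alphabet, defamatory ones included, has nonzero probability of coming out; it just takes astronomically many samples.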
Companies are never held liable.
Has anybody had success in getting ChatGPT to admit that true things it said were wrong?
Looks like a naive statement to me, not defamatory.
Fuck AI, though. Ask an Amish farmer how to update your antiquated jQuery to Angular, and just realize how much happier than you he is.