ChatGPT Accused Mayor Of Bribery Conviction, Faces Potential Defamation Claim


It was all fun and games when ChatGPT proclaimed Clarence Thomas the hero of same-sex equality or botched legal research memos by inventing fake law, but now the public-facing AI tool’s penchant for hallucination has earned its creators a threatened lawsuit.

Australian regional mayor Brian Hood once worked for a subsidiary of the Reserve Bank of Australia and blew the whistle on a bribery scheme. But since tools like ChatGPT haven’t mastered contextual nuance, Hood’s attorneys say the system spit out the claim that Hood went to prison for bribery rather than being the guy who notified authorities. Hood’s team gave OpenAI a month to cure the problem or face a suit.

Ars Technica has tried to replicate the mistake, but so far its test results have come back correct:

Ars attempted to replicate the error using ChatGPT, though, and it seems possible that OpenAI has fixed the errors as Hood’s legal team has directed. When Ars asked ChatGPT if Hood served prison time for bribery, ChatGPT responded that Hood “has not served any prison time” and clarified that “there is no information available online to suggest that he has been convicted of any criminal offense.” Ars then asked if Hood had ever been charged with bribery, and ChatGPT responded, “I do not have any information indicating that Brian Hood, the current mayor of Hepburn Shire in Victoria, Australia, has been charged with bribery.”

But even if everything really has worked out for Hood, it’s only a matter of time before the system does this again. In the United States, robotic ramblings about political figures would lack actual malice and Section 230 would apply to the extent it just displays third-party statements — both obstacles for at least a few more months before this Supreme Court does something bonkers — but there’s not much to stop an algorithm from completely hallucinating misinformation about non-public figures with the imprimatur of authority.

It’s easy to dismiss the efforts of entertainment-level tools like ChatGPT. As of 2023, it’s hard to imagine a jury endorsing the idea that anyone takes GPT output without a 50-pound bag of salt. But when AI presents itself as conveying the sum of accumulated knowledge and then gets that knowledge wrong or recklessly repeats misinformation, people are going to look around to exact a pound of flesh from somewhere, and lawsuits are spendy even if they don’t end up going anywhere.

A standalone tool can likely cover itself in disclaimers shunting even the whiff of liability off on anyone dumb enough to use its results without verification. But these models won’t stop with standalone products and they’ll get integrated into other systems where the disclaimer gets blurrier. What happens if the algorithm aids a search engine and promotes dubious third-party claims over accurate ones? Has the algorithm taken an affirmative act to increase the publicity of the false claim?

It all comes back to the importance of giving users insight into an algorithm’s reasoning. We’re still in the “show your work” stage of GPT’s education and the primary technological task of the next few years will be keeping these language models from inadvertently screwing everything up. And that’s going to require human judgment and that’s going to require user interfaces providing genuine transparency. Because disclaimers not to take results at face value don’t mean much without the easy ability to check up on the system.

OpenAI threatened with landmark defamation lawsuit over ChatGPT false claims [Ars Technica]


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.




