It Started With Paintings And Funny Little Essays. Now ChatGPT Could Be A Fraud And Civil Rights Risk.


Well, that didn’t take long to go from parlor trick to prison time. A couple of months ago, the discourse surrounding AI developments was about a 6 percent mix of “hey, this is theft” and 9 percent “ooh, here’s what I would look like if I were a princess!” I know that there’s a 2 percent gap. You aren’t here for math though. You are here for law. And the FTC is trying to do damage control on the prospect that laws could be broken using rapidly developing and easily accessible AI. From Reuters:

Leaders of the U.S. Federal Trade Commission said on Tuesday the agency would pursue companies who misuse artificial intelligence to violate laws against discrimination or be deceptive.

The sudden popularity of Microsoft-backed OpenAI’s ChatGPT this year has prompted calls for regulation amid concerns around the world about the possible use of the innovation for wrongdoing even as companies are seeking ways to use it to enhance efficiency.

I know this sounds generic, but trust me, it becomes worth it very quickly.

In a congressional hearing, FTC Chair Lina Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya were asked about concerns that recent innovation in artificial intelligence, which can be used to produce high quality deep fakes, could be used to make more effective scams or otherwise violate laws.

Bedoya said companies using algorithms or artificial intelligence were not allowed to violate civil rights laws or break rules against unfair and deceptive acts. “It’s not okay to say that your algorithm is a black box” and you can’t explain it, he said.

It is hard to put a limit on what ChatGPT and other AI programs could offer actors with fraudulent intent. Because these deep fakes are deep. Remember when a bunch of people were fooled by the AI-generated Pope Jacket picture? AI-generated art is working its way into sound as well. And as interesting as it is to see AI mimic Drake and The Weeknd’s lyrical content and flows…

The heat will be on when fraudsters can use AI to mimic your voice and potentially fool friends, family, and colleagues out of their hard-earned dollars. In a chat about the challenges facing digital forensics in the coming years, FTI managing director Jerry Bui recently told ATL that today’s deep fake artists only need about 3 minutes of audio to recreate a convincing fake voice call. That hot take you uploaded to Twitter and YouTube about the inevitability of Doge going to the moon? That panel you spoke on that you didn’t know was being recorded? Yeah, all ripe for the taking.

Khan agreed the newest versions of AI could be used to turbocharge fraud and scams, and said any such wrongdoing “should put them on the hook for FTC action.”

Assuming the US doesn’t go the way of Italy and ban ChatGPT et al., we’re gonna have a hell of a problem on our hands sometime soon.

US FTC Leaders Will Target AI That Violates Civil Rights Or Is Deceptive [Reuters]


Chris Williams became a social media manager and assistant editor for Above the Law in June 2021. Prior to joining the staff, he moonlighted as a minor Memelord™ in the Facebook group Law School Memes for Edgy T14s.  He endured Missouri long enough to graduate from Washington University in St. Louis School of Law. He is a former boatbuilder who cannot swim, a published author on critical race theory, philosophy, and humor, and has a love for cycling that occasionally annoys his peers. You can reach him by email at [email protected] and by tweet at @WritesForRent.

