ChatGPT: A Menace To Privacy


Ed. note: This is the latest installment in a series of posts on motherhood in the legal profession, in partnership with our friends at MothersEsquire. Welcome Ayesha Haq to our pages. Click here if you’d like to donate to MothersEsquire.

Like many other lawyers, I was extremely excited to test the very first version of ChatGPT when it came out. A chatbot that converses and answers all your questions in a human-like manner? What's not to like! Artificial intelligence (AI) was, and is, the future and will revolutionize the way we work. And while my own use of AI was quite limited, people around me were taking far more advantage of the tool.

It started with asking basic questions about laws and regulations, or generic questions about legal ops. But then I noticed that lawyers and nonlawyers alike had started using it to proofread contracts and even draft full-blown legal agreements. That, I think, is where we took it too far.

How ChatGPT Works

ChatGPT uses machine learning to enhance and improve itself; essentially, it learns from the conversations it has with its users. Anything a user enters into the chatbot is automatically retained by the tool, which means any information entered into it is stored indefinitely in a database somewhere. OpenAI, in its terms of use, states the following:

You may provide input to the Services (“Input”), and receive output generated and returned by the Services based on the Input (“Output”). Input and Output are collectively “Content.” As between the parties and to the extent permitted by applicable law, you own all Input.

The above statement is a little misleading. If the user owns all input, then that input should not be stored in a database without the user's permission. Yet the terms of use are silent on how the input will be stored and how a user can exercise rights over the input they supposedly own. Merely stating that the user owns the input is not enough.
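To make the mechanics concrete, here is a minimal sketch in Python of what happens whenever a prompt is submitted, assuming OpenAI's publicly documented chat completions endpoint is used rather than the web interface (the function name, model choice, and sample prompt are illustrative, not taken from this article): the full text of the prompt travels to OpenAI's servers before any answer comes back, and from that moment its retention is governed by OpenAI's policies, not the user's.

    import os
    import requests

    def ask_chatgpt(prompt: str) -> str:
        # The full prompt text is part of this outbound HTTPS request to OpenAI.
        response = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-3.5-turbo",  # model name is illustrative
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    # Everything pasted into the prompt -- client names, deal terms, personal data --
    # leaves your machine here and is thereafter subject to OpenAI's retention practices.
    print(ask_chatgpt("Summarize the indemnification clause below: ..."))

The point of the sketch is simply that there is no local-only mode: submitting a prompt is, by design, a disclosure to a third party.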

What Is The Big Deal? 

This should stand out as a big red flag for privacy professionals. Privacy laws around the globe exist to protect an individual's personal data or personally identifiable information. As mentioned earlier, legal professionals are using this tool to proofread and even draft contracts. In doing so, they are exposing sensitive information and, at times, inadvertently entering personal information into the system as well. That information remains in OpenAI's possession to use for whatever purpose it chooses. Well, isn't that illegal? Technically yes, but OpenAI nowhere claims to be compliant with the GDPR, CCPA/CPRA, or any other privacy standard for that matter. So the user assumes the risk when using the chatbot, giving OpenAI an easy escape. Here are a few ways ChatGPT could violate standard privacy regulations:

  1. It does not state a legal basis for processing the personal information it receives.
  2. Users are not given a mechanism to exercise their “right to be forgotten” or “right to amend” personal information.
  3. Personal information is stored indefinitely, with no insight into how that data is secured and protected.
  4. ChatGPT gathers information from unknown sources on the internet. If a user has any digital footprint, chances are ChatGPT knows a great deal about that user depending on what is available on the internet. This knowledge may be false, and the user has no recourse to correct, amend, or even delete the false information.

What Are Regulators Doing About It? 

So far there hasn’t been much traction when it comes to regulating AI like ChatGPT. Recently, however, Italy’s data protection authority has put OpenAI on notice that privacy laws do apply to artificial intelligence and “has ordered OpenAI to stop processing people’s data locally with immediate effect.” In particular, the Italian DPA asserts that OpenAI is in direct violation of the General Data Protection Regulation (GDPR). This is likely the first of many such notices. It won’t be long before regulators across the globe catch up to the pitfalls of AI and start heavily regulating its use.

What Should We Do?

The aim of this article is not to discourage you from using ChatGPT. It is a great tool if used correctly and can certainly improve workflows across industries and professions. But as with all technology, caution is advised. Here are some measures we can take when using ChatGPT:

  1. Never enter personal information into ChatGPT. Always redact before exposing a legal document for review (a minimal redaction sketch follows this list).
  2. Never enter information about a client or customer. Always create vague scenarios unrelated to the client prior to asking ChatGPT for assistance.
  3. Keep your questions broad and generic so that they cannot be tied back to any particular individual.
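On the redaction point, even a simple script run before anything is pasted into the chatbot can catch the obvious identifiers. The Python sketch below is illustrative only: the regex patterns and the client-name list are assumptions made for the example, it is nowhere near a complete PII scrubber, and a human should still review whatever text ultimately leaves the building.

    import re

    # Illustrative patterns for common identifiers; a real workflow needs far more.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    # Hypothetical list of party names that must never appear in a prompt.
    CLIENT_NAMES = ["Acme Holdings LLC", "Jane Doe"]

    def redact(text: str) -> str:
        # Replace matched identifiers with labeled placeholders before submission.
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
        for name in CLIENT_NAMES:
            text = re.sub(re.escape(name), "[REDACTED PARTY]", text, flags=re.IGNORECASE)
        return text

    clause = "Acme Holdings LLC (contact: jane.doe@acme.com, 513-555-0137) shall indemnify..."
    print(redact(clause))
    # -> "[REDACTED PARTY] (contact: [REDACTED EMAIL], [REDACTED PHONE]) shall indemnify..."

Pattern-based redaction only catches what the patterns anticipate, which is why the second and third measures above still matter: keep the scenario vague and the question generic even after redacting.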

There is hope that lawmaking bodies will understand the ramifications of AI and pass new regulations, or apply existing ones, to ensure the privacy of citizens around the world. But until then, it is our duty to exercise caution while using these tools to our benefit. As legal professionals especially, we need to be mindful of our confidentiality obligations and ensure that we do not inadvertently breach attorney-client privilege as a result. Compliance with privacy regulations is mandatory for anyone handling personal information, regardless of whether the operation is in the EU, the UK, or the United States. Noncompliance is expensive and undermines the trust and integrity of the services one provides. In today’s day and age, data is one’s biggest asset, and as lawyers we must do our due diligence to secure and protect it to the best of our ability.


Ayesha Haq is an associate in-house counsel for EZ Web Enterprises, Inc., where she advises on matters pertaining to data privacy, particularly regulations such as the CCPA, HIPAA, and GDPR. Her practice at EZ Web includes, but is not limited to, contract management, corporate law, employment law, and trademark law. In addition to being a practicing attorney, she is a CQI IRCA Certified Information Security Management System Lead Auditor, which qualifies her to manage the information security side of the business at EZ Web. She graduated with a Juris Doctor from the University of Cincinnati College of Law in 2019. She received her honors bachelor’s degree from the University of Toronto in 2016.

