If you ask ChatGPT what it does with the personal data shared in a conversation, this is its answer: “As a language model developed by OpenAI, I do not have the ability to process, store or use users' personal information, unless provided to me in the course of an individual conversation.” However, OpenAI, the company behind ChatGPT, can use that information in certain cases, according to its privacy policy.
Only specific types of data are covered, and only in certain cases: OpenAI account data, such as the user's name or payment card information; personal information the user exchanges with ChatGPT or the company; user information from interactions with OpenAI's accounts on social networks such as Instagram, Facebook, Medium, Twitter, YouTube and LinkedIn; or data the user provides in the company's surveys or events. With this information, the company can improve its products and services, create new developments, carry out research, communicate directly with users, comply with its legal obligations, and prevent fraud, misuse of the service and criminal activity.
This delicate issue does not affect only the new generative AI. Sending an email to a friend via Gmail, or sharing photos or documents in cloud services such as OneDrive, are everyday acts that authorize the providers of those services to share information with third parties. Companies such as OpenAI, Microsoft and Google may disclose information to service providers to meet their business needs, as their privacy policies indicate.
However, with some exceptions, companies cannot use personal data for other purposes. Ricard Martínez, professor of constitutional law at the University of Valencia, points out that this is strictly prohibited by the General Data Protection Regulation (GDPR): “They expose themselves to a high regulatory risk. The company could be sanctioned with a fine equivalent to 4% of its global annual turnover. In these cases, the data can only be used for public interest purposes admitted by the regulations, such as archiving or historical, statistical or scientific research, or if a compatibility assessment is passed.”
Generative artificial intelligence such as ChatGPT is fed by a large volume of data, some of it personal, and generates original content from that information. In Spain, these tools receive 377 million visits a year, according to one study. They analyze the information collected, respond to user queries and improve their service, even though the tool “does not understand the documents it is fed,” warns Borja Adsuara, a lawyer specializing in digital law.
Recommendation: be very discreet with chatbots
The Spanish Data Protection Agency (AEPD) advises users not to accept chatbots that request registration data that is not necessary; that ask for consent without defining what the data will be processed for, or without allowing it to be withdrawn at any time; or that transfer data to countries that do not offer sufficient guarantees. It also recommends “limiting the personal data that is exposed, not giving personal data of third parties if there are doubts that the processing will go beyond the domestic sphere, and bearing in mind that there is no guarantee that the information provided by the chatbot is correct.” The consequences, it warns, can be “emotional damage, misinformation or deception.”
Experts agree on the same advice: do not share personal information with the artificial intelligence tool. Even ChatGPT itself warns: “Please note that if you share personal, sensitive or confidential information during the conversation, you should exercise caution. It is recommended not to provide sensitive information through online platforms, even in conversations with language models like me.”
Delete personal data
If, despite these recommendations, personal data has already been shared with an artificial intelligence, you can try to delete it. The OpenAI website offers a form to request removal, but the bad news is that the company warns that “submitting a request does not guarantee that information about you will be removed from ChatGPT results.” The form must be completed with the real data of the interested party, who must “swear” in writing to the veracity of what is stated. The information on the form may also be cross-checked with other sources to verify it.
Microsoft also offers a privacy panel for accessing and deleting personal data.
As for legal remedies, Martínez explains that the user “can exercise the right to erasure if they believe that their personal data has been processed unlawfully or is incorrect or inadequate. They can unsubscribe and withdraw their consent, which is free and not subject to conditions, and the company is obliged to delete all their information.” The specialist also emphasizes that there is a right to portability: “More and more applications allow users to download their entire history and take it with them in a compatible format. The regulation also recommends the anonymization of personal data.”
Anonymization, according to the AEPD, consists of converting personal data into data that cannot be used to identify any person. In its guide on data processing in artificial intelligence (AI), the agency explains that anonymization is one of the techniques for minimizing data use, ensuring that only the data necessary for a given purpose is used.
New artificial intelligence law
Once the new European law on artificial intelligence enters into force, companies that manage personal data will have to take three key points into account, as the consulting firm Entelgy explained to this newspaper: they will have to disclose how the algorithm works and what content it generates in a European registry; although it is not mandatory, they are advised to establish human oversight mechanisms; and finally, large language models (LLMs) will have to incorporate security systems, and their developers will be obliged to be transparent about the copyrighted material they use.