New developments and technological advances always raise new questions about data protection. The most recent development is the rise of artificial intelligence, fueled by ChatGPT, a chatbot that is already used by millions of people and has become firmly anchored in many people’s everyday lives. There are also useful applications of ChatGPT for companies. But what about data protection when using AI?
ChatGPT and the problems with data protection
We are by now very familiar with the General Data Protection Regulation (GDPR). We keep mentioning it here because it sets out the requirements for handling personal data in compliance with data protection rules. The problem: when it comes to personal data, ChatGPT is something of a black box. It is not exactly clear what the AI actually does with the data it receives from users. ChatGPT was developed and published by OpenAI. Although OpenAI has published data protection guidelines for its AI, the processing of the data is not fully transparent. GPT-4 is currently the latest version, but in terms of data protection it does not differ from its predecessor. OpenAI therefore still owes us some answers. One example: if I give the AI my address, does it store that address anywhere? ChatGPT draws its answers from freely available texts on the Internet and from the input of people and companies it is trained on, so it cannot be ruled out that personal data is processed along the way. If that is the case, the AI can disclose this data on request, because ChatGPT, as an artificial intelligence, has no moral conscience and no sense of “right” and “wrong”. It answers our questions; that is its only task, and it fulfills it. In Italy, this led to a drastic step that could also leave its mark in other countries such as Germany.
Ban on ChatGPT in Italy
The Italian data protection authority temporarily blocked ChatGPT. The step was taken for lack of a legal basis: until then, no law governed the collection of user data from conversations with an AI, and users had no way to obtain information about it. When a data breach exposed both the contents of conversations with ChatGPT and payment information, the authority blocked use of the chatbot in Italy for the first time. It gave OpenAI a deadline to implement certain measures before ChatGPT would be permitted in Italy again. OpenAI implemented these measures. Among them:
- Notes on data processing for artificial intelligence
- Introduction of a legal basis that requires user consent
- Arrangements for the deletion of data upon request
In addition, an age check is planned, and OpenAI will run an information campaign to educate the Italian public about how data is processed to train the AI. All of this has potential implications for the use of chatbots in Germany as well. German data protection officers are likewise skeptical about the security of data when using ChatGPT and AI in general, so a data protection review of ChatGPT is planned. This could bring further requirements that OpenAI must fulfill. However, nobody is considering a ban on ChatGPT in Germany, and the government has ruled one out. That is a good thing: the first companies are already using ChatGPT in their processes and structures, and a ban would cause real difficulties. Instead of threatening a ban, Germany is therefore trying to find solutions that guarantee data protection. This still poses certain problems, because information that is firmly anchored in data protection and copyright law is missing. For example, ChatGPT does not cite its data sources, so users do not know where its information comes from. This is a problem, especially for any further use of the output. It is also not yet clear how the data processing algorithms work, or whether data is passed on to third parties for commercial reasons.
Conclusion
ChatGPT and artificial intelligence as a whole are only just becoming part of our everyday lives, so there are still some uncertainties regarding the laws, rights and guidelines for their use, both for users and for developers. As it is still rather unclear how exactly artificial intelligence processes data, it is generally advisable to keep personal data to yourself. After all, neither the AI nor the users need to know everything.
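One practical way to follow this advice is to strip obvious personal data from a prompt before it ever leaves your machine. The sketch below is a minimal, illustrative example, not official OpenAI tooling: the `redact` helper and its regex patterns are assumptions, and real-world redaction would need far more robust detection (for example, named-entity recognition for addresses and names).

```python
import re

# Illustrative patterns for two common kinds of personal data.
# These are deliberately simple and will miss many real cases.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace likely personal data with placeholders
    before the prompt is sent to a chatbot."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact me at jane.doe@example.com or +49 30 1234567."))
# → Contact me at [EMAIL] or [PHONE].
```

Running the redaction locally, before any API call, means the personal data never reaches the provider at all, which sidesteps the open questions about how it would be stored or reused.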