The government of Italy has banned ChatGPT, the AI chatbot developed by OpenAI, saying that it lacks an age verification system and that its collection and processing of user data is in violation of the country’s privacy laws.
The order, made by Italy’s Data Protection Authority, states that ChatGPT users aren’t given any information about the collection and use of their data, and that there’s no “legal underpinning” for that data collection, which it says is used to train ChatGPT. It also says that testing has shown that information provided by ChatGPT “does not always match factual circumstances”—which is true: chatbots are prone to bullshitting—and that while the terms of service limit its use to people over the age of 13, there’s no age verification system in place.
The regulatory body also made note of a “data breach” that occurred on March 20, which it said affected “users’ conversations and information on payments by subscribers.” OpenAI acknowledged the issue on March 24, saying it took the system offline “due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history.”
“It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time,” OpenAI said.
Italy is the first country in the West to ban ChatGPT, although as the BBC notes, it’s already blocked in other countries including China, Russia, Iran, and North Korea. Italy may share some of the same reasons for blocking it as those countries, as it is currently governed by a coalition of right-wing and far-right parties—the sort who might take issue with public access to some “factual” datapoints that paint them in an unflattering light. But Italy is far from alone in having concerns about the rapid growth of ChatGPT and other AI applications.
A group of artificial intelligence experts, industry leaders, and Elon Musk recently published an open letter calling for a six-month pause on training AIs more powerful than GPT-4, which has been dismissed in some quarters as at least partially a publicity stunt. But other agencies are taking more concrete steps: The New York City Department of Education, for instance, said in January that it would restrict access to the software from school networks and devices, and Getty Images has banned the upload and sale of any AI-generated images. The European Consumer Organization has also called for an investigation into ChatGPT technology, and Ireland’s Data Protection Commission told the BBC that it’s reaching out to the Italian regulator for more information on its reasons for the ban, presumably with an eye toward formulating its own policies and restrictions.
But while there’s obvious (and understandable) nervousness about the explosive development of AI and its potential to wreak havoc in all sorts of unpredictable ways, that nervousness doesn’t seem likely to slow development of the software, at least in the short term. In January, Microsoft announced plans to invest $10 billion in OpenAI, and Google announced its own ChatGPT-like chatbot, called Bard, in February. Regulation is absolutely called for and further bans are almost certainly coming, but AI development, for good or ill, is here to stay.