OpenAI complies with data protection regulations as the EU AI Act moves forward
OpenAI has met the Italian Garante's requirements, ending Italy's nearly month-long ChatGPT ban. The company made several improvements to its services, including clarifying how personal data is used, to comply with European data protection laws.
The resolution comes as the European Union moves closer to enacting the Artificial Intelligence Act (AI Act), which aims to regulate AI technology and could affect generative AI tools in the future.
OpenAI meets the Garante's requirements
According to a statement from the Italian Garante, OpenAI resolved the regulator's concerns, ending the nearly month-long ChatGPT ban in Italy. The Garante tweeted:
“#GarantePrivacy recognizes the steps taken by #OpenAI to reconcile technological advances with respect for individuals’ rights and hopes the company will continue its efforts to comply with European data protection legislation.”
To comply with the Garante's requests, OpenAI clarified how it collects and uses personal data to train its models, gave European users a way to object to the processing of their personal data, and added an age-verification step for sign-ups in Italy.
Although OpenAI has resolved this complaint, this is not the only legal hurdle facing AI companies in the EU.
The AI Act moves closer to becoming law
Before ChatGPT gained 100 million users in two months, the European Commission had proposed the EU Artificial Intelligence Act to regulate the development of AI.
This week, almost two years later, MEPs reportedly agreed to move the EU AI Act to the next stage of the legislative process. Lawmakers are expected to work out the details ahead of a vote in the coming months.
The Future of Life Institute (FLI) publishes a biweekly newsletter tracking the latest developments in the EU AI Act and related press coverage.
A recent FLI open letter calling on all AI labs to pause AI development for six months has received over 27,000 signatures. Notable signatories include Elon Musk, Steve Wozniak, and Yoshua Bengio.
How Could the AI Act Affect Generative AI?
Under the EU AI Act, AI systems would be classified by risk level. Tools that could affect people's safety and rights, such as biometric technology, would have to comply with stricter regulations and government oversight.
Generative AI tools would also need to disclose the use of copyrighted material in their training data. Given the pending lawsuits over the open-source code and copyrighted art used to train GitHub Copilot, Stable Diffusion, and others, this would be a particularly significant development.
As with most new legislation, AI companies will incur compliance costs to ensure their tools meet regulatory requirements. Larger companies can absorb those costs, or pass them on to users, more easily than smaller ones, potentially resulting in less innovation from entrepreneurs and underfunded startups.
Featured image: 3rdtimeluckystudio/Shutterstock