Technology

Europe pushes for new regulation on ChatGPT and Advanced AI

April 18, 2023

Lawmakers across Europe are pushing to give regulators more power over fast-advancing technologies such as the one behind ChatGPT, considered one of the hottest areas in artificial intelligence today. The effort is among the biggest pushes in the West so far to curb the technology.

A group of influential EU lawmakers says the rapid pace of AI development in recent months makes a new set of rules necessary. In an open letter they plan to publish on Monday, they argue for rules targeted at powerful, general-purpose artificial intelligence tools.

The lawmakers, who are helping to craft a new draft of what the bloc calls its AI Act, say they are committed to adding provisions intended to guide the development of powerful artificial intelligence in a way that is safe, trustworthy, and human-centered. According to Trade Algo's review of the letter, it is aimed at “steering the development of very powerful artificial intelligence in a way that is safe and trustworthy.”

Rapid advances in artificial intelligence demand significant political attention, the lawmakers say. They argue that the EU's pending legislation, which policy makers hope to pass into law by the end of this year, could provide a blueprint for similar regulations across a variety of regulatory cultures and environments.

The statement from members of the EU Parliament also adds momentum to calls by some researchers and technologists for regulators to pause or slow the development of very powerful AI tools so that oversight can catch up.

Just last month, a group of scientists and tech executives including Elon Musk signed an open letter organized by the Future of Life Institute calling for a six-month moratorium on training the next generation of artificial intelligence tools, to give regulators and the industry time to establish safety standards. In a separate open letter last week, a group of AI ethics researchers urged the European Union to include provisions addressing the potential risks associated with “general purpose artificial intelligence.”

Dragos Tudorache, a Romanian member of the European Parliament, is leading the body's work on the Artificial Intelligence Act together with Italian member Brando Benifei. According to Tudorache, the group drafted Monday's letter in response to the Future of Life Institute's letter, which, in addition to calling for a pause in training advanced AI models, urged regulators to step up work with AI developers to address the possibility of a loss of control of our civilization.

In Monday's letter, the EU lawmakers said they agreed with some of the concerns raised by the Institute, though they disagreed with its more alarmist statements. Regulation, they argued, can help humanity reap the benefits of artificial intelligence while avoiding more challenging scenarios in the future.

"By working together, we can change history in a positive way," they said. 

As the debate over the EU's artificial intelligence bill nears its conclusion, more regulators around the world are weighing in. Chinese internet regulators earlier this month proposed rules that would bring ChatGPT-like systems under regulatory control.

In the U.S., the Biden administration has reportedly begun examining whether such tools should be checked to ensure they are safe. The U.K. recently proposed that regulators supervise the development of artificial intelligence with an emphasis on safety, transparency and fairness. Last month, Italy temporarily banned ChatGPT over privacy violations.

The EU Parliament has not yet confirmed what new provisions will be added to the AI Act. Monday's letter argues that rules are needed for foundation models, which are trained on massive amounts of data and underpin some of the most recent advances in the field. Foundation models include the large language models behind tools such as ChatGPT, from Microsoft Corp.-backed startup OpenAI, which can answer textual questions in a coherent manner.

The signatories of Monday's letter will meet this week to hash out a common position on their proposals, ahead of a vote by the full Parliament expected in May. Many members of the European Parliament have previously argued that such powerful artificial intelligence tools should be built transparently, that the data used to build them should be fair, and that they should undergo ongoing safety and predictability audits.

Once the European Parliament concludes its draft, which is expected next month, the Council of the European Union, which represents the bloc's member states, will negotiate the final details with Parliament. The Council settled on its own draft of the AI Act late last year, but it left any specific requirements for general-purpose AI to be determined later by the European Commission, the bloc's executive arm. The bill must be approved by both bodies to become law, something policy makers expect to happen this year if a compromise text is agreed.

Sweden, which holds the Council's rotating presidency and calls the legislation a priority, has said it is prepared to convene negotiations as soon as the Parliament has settled on its position.

The European Commission has acknowledged that specific rules for general-purpose AI should be considered, and it will support lawmakers in this endeavor, an official said.

Adding rules to the AI Act governing general-purpose tools such as foundation models would represent a significant shift in the legislation's approach. Until now, the toughest rules have been reserved for applications of AI deemed risky by the EU, such as a ban on most uses of facial-recognition software by police, an approach policy makers describe as risk-based. New rules for foundation models, by contrast, would apply to the technology itself regardless of its end-use applications.

Tech companies and their lobbyists have argued that the law should focus on specific, risky applications of artificial intelligence rather than imposing broad restrictions on AI development, which they say would hinder innovation. Some researchers, academics and technologists, however, have backed a bill that would slow the pace at which companies roll out advanced new AI tools.

Under the current draft, companies that fail to comply with the bill could be fined up to 6% of their global revenue.

Monday's letter also urges action beyond the EU. It calls on the bloc's executive arm and President Biden to convene a high-level global summit on artificial intelligence to establish a preliminary set of principles for governing the technology. The lawmakers suggest that next month's meeting of the U.S.-EU Trade and Technology Council would be a good opportunity to set an agenda for the summit.

The lawmakers also called on companies and AI laboratories to "significantly increase transparency towards regulators and to engage in dialogue with them" and to "make sure that they maintain control over the evolution of the artificial intelligence they are developing."

Author
John Liu
