Technology

Italy Has Become The First Western Country To Ban ChatGPT

April 4, 2023

ChatGPT, the popular artificial intelligence chatbot developed by U.S. startup OpenAI, has been banned in Italy, making Italy the first Western country to do so.

Earlier this week, Italy's data protection watchdog ordered OpenAI to temporarily stop processing the data of Italian users while it investigates a possible breach of European privacy laws.

The regulator, known as the Garante, cited a data breach at OpenAI that allowed users to see the titles of conversations other users were having with the chatbot, prompting it to investigate the matter further.

In a statement released Friday, the Garante said that "the massive collection and processing of personal data appears to lack legal grounding."

The Garante also raised concerns about the lack of age restrictions on ChatGPT, as well as the possibility that the chatbot could serve up factually incorrect information in its responses.

OpenAI, which is backed by Microsoft, reportedly risks a fine of 20 million euros ($21.8 million) or 4% of its global revenue if it does not come up with a remedy within 20 days of being notified of the problem.

Italy is not the only country grappling with the rapid pace of AI advancement and its implications for society. Other governments are preparing their own AI rules, which, whether or not they mention it explicitly, are likely to touch on generative AI. Generative AI refers to a set of artificial intelligence technologies that generate new content in response to user prompts. It is more advanced than previous generations of AI, thanks in large part to new large language models trained on enormous amounts of data.

People have urged governments to regulate artificial intelligence for a long time, but the pace at which the technology has developed is making it difficult for lawmakers to keep up. These tools can now generate anything from a line of code to an entire essay, or even realistic art, in a matter of seconds.

In her interview with Trade Algo, Sophie Hackford, a futurist and global technology innovation advisor for American farming equipment manufacturer John Deere, stated that it was very important for us to make sure that we do not create a world where humans are somehow subservient to a greater machine future.

Technology is there to serve us, she said. It's there to speed up the cancer diagnosis process, to stop humans from having to do jobs they don't like doing, and to make our lives easier.

She went on to explain that, from a regulatory perspective, it is crucial that we think about this very carefully now and act on it as soon as possible.

As a result, a variety of regulators, from the FTC to the EPA, are taking a close look at the challenges AI poses to job security, privacy, and equality. There is also concern that artificial intelligence could generate false information and manipulate political discourse.

Government bodies around the world are also starting to think about how to deal with general purpose systems such as ChatGPT, with some even considering following Italy's lead and banning the technology.

Britain

Last week, the U.K. announced that it plans to regulate artificial intelligence by applying existing regulations to AI rather than establishing new, AI-specific rules.

The U.K. proposals, which do not mention ChatGPT by name, outline key principles that companies should follow when using artificial intelligence in their products: safety, transparency, fairness, accountability, and contestability.

At this stage, the U.K. is not proposing restrictions on ChatGPT, or on any kind of artificial intelligence for that matter. Instead, it wants companies to develop and use AI tools responsibly and to give users as much information as possible about how and why decisions are made.

Michelle Donelan, the U.K.'s secretary of state for science, innovation and technology, said that generative AI has gained enormous traction over the past few months, showing that the risks and opportunities surrounding the technology are "emerging at an extraordinary pace."

A non-statutory approach, she noted, will allow the government to respond quickly to advances in artificial intelligence and to intervene further if needed.

The U.K.'s approach places a heavy emphasis on defining "what good AI usage looks like," particularly for fraud prevention and combating financial crime, said Dan Holmes, a fraud prevention leader at Feedzai, a company that uses AI to fight financial crime.

It largely comes down to two things, transparency and fairness, he said. "And if you are utilizing AI, these are the principles that you should consider," Holmes told Trade Algo.

The EU

The rest of Europe is expected to take a far more restrictive stance on artificial intelligence than the U.K., which has increasingly diverged from EU digital laws since leaving the bloc.

The European Union, which is often at the forefront of tech regulation, has proposed landmark legislation on artificial intelligence that could be a game changer for the sector.

The proposed European AI Act would restrict the use of AI in critical infrastructure, education, law enforcement, and the judicial system, among other areas.

The act would work alongside the General Data Protection Regulation, the EU's data protection law, which governs how personal data can be processed and stored.

Notably, when the AI Act was first conceived, officials had not accounted for the breakneck progress of artificial intelligence systems capable of generating impressive art, poems, stories, and jokes.

Reuters also reports that the EU's proposed rules would treat ChatGPT as a form of general purpose AI used in high-risk applications. The commission defines high-risk AI systems as those that could affect people's fundamental rights or safety.

Such systems would face requirements including stringent risk assessments and checks to ensure there are no discriminatory effects arising from the datasets feeding the algorithms.

In an interview with Trade Algo, Max Heinemeyer, chief product officer of Darktrace, said this is not a new discussion for the EU, which has a great deal of expertise in artificial intelligence and access to some of the best talent in the world.

There is no doubt these technologies can bring competitive advantages to the member states, along with potential risks, he added, so it is reasonable to trust that regulators have the member states' best interests at heart.

While Brussels works on its AI laws, some EU countries are already looking at Italy's action on ChatGPT and considering whether to follow suit.

Ulrich Kelber, Germany's Federal Commissioner for Data Protection, told Handelsblatt that a similar procedure is, in principle, also possible in Germany.

French and Irish regulators have contacted their Italian counterparts to learn more about the findings of the data protection investigation. Reuters reports that Sweden's data protection authority has not ruled out a ban of its own. Because OpenAI does not have a single office within the European Union, Italy is free to proceed with its ban.

Since most American tech giants, such as Meta and Google, base their European operations in Ireland, the Irish regulator is usually the most active on data privacy.

The United States

The U.S. government has yet to put formal rules in place to oversee artificial intelligence technology.

The country's National Institute of Standards and Technology has released a national framework that gives companies guidance on managing the risks and potential harms associated with designing, deploying, and using artificial intelligence systems.

However, the framework is voluntary, meaning companies face no repercussions for failing to follow it.

So far, no action has been taken to limit ChatGPT in the United States.

An American nonprofit research group filed a complaint with the Federal Trade Commission last month alleging that GPT-4, OpenAI's newest large language model, is biased, misleading, and poses a risk to privacy and public safety. The complaint also alleges that GPT-4 violates guidelines set by the FTC on artificial intelligence.

Depending on how the FTC responds, the complaint could lead to an investigation into OpenAI and potentially a suspension of the commercial deployment of its large language models.

China

ChatGPT is unavailable in China and in several other countries with heavy internet censorship, such as Iran, North Korea, and Russia. It is not officially blocked, but OpenAI does not allow users in those countries to sign up.

Several of the largest names in the Chinese tech industry, including Baidu, Alibaba, and JD.com, have announced plans for ChatGPT alternatives.

China has made it a priority to ensure its technology giants develop products in line with its strict regulations.

Last month, Beijing introduced first-of-its-kind regulation prohibiting the production of fake images, videos, or text synthetically generated or altered using artificial intelligence.

Chinese regulators had previously introduced rules governing recommendation algorithms, including a requirement that companies file details of their algorithms with the cyberspace regulator.

Regulations of this kind could, in theory, apply to any ChatGPT-style technology available today.
