Four artificial intelligence experts have expressed concern after their work was cited in an open letter, co-signed by Elon Musk, demanding an urgent pause in the development of artificial intelligence.
The letter, dated March 22 and signed by more than 1,800 people as of Friday, called for a six-month pause on developing systems "more powerful" than OpenAI's new GPT-4, which is backed by Microsoft (MSFT.O) and can converse like a human, compose songs, and summarize lengthy documents.
Since the release last year of ChatGPT, GPT-4's predecessor, rival companies have rushed to launch similar products.
According to the open letter, artificial intelligence (AI) systems with "human-competitive intelligence" pose serious risks to humanity. The letter cited 12 pieces of research from experts, including university academics as well as current and former employees of OpenAI, Google (GOOGL.O), and Google's subsidiary DeepMind.
Since then, civil society groups in the U.S. and EU have pressed lawmakers to rein in OpenAI's research. OpenAI did not immediately respond to requests for comment.
The Future of Life Institute (FLI), the organization behind the letter, which is funded primarily by the Musk Foundation, has been accused by critics of prioritizing imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.
Among the research cited in the letter was "On the Dangers of Stochastic Parrots", a well-known paper co-authored by Margaret Mitchell, who formerly led ethical AI research at Google.
Mitchell, now chief ethical scientist at the AI company Hugging Face, criticized the letter, telling Trade Algo that it was unclear what counted as "more powerful than GPT-4."
The letter "asserts a set of goals and a narrative on AI that advantages the advocates of FLI by accepting a lot of problematic concepts as a given," she said. "Some of us don't have the privilege of ignoring current harms."
Her co-authors Timnit Gebru and Emily M. Bender also criticized the letter on Twitter, with Bender calling some of its claims "unhinged."
The FLI president, Max Tegmark, told Trade Algo that the campaign was not designed to impede OpenAI's ability to gain a competitive advantage in the market.
"It's a bit humorous. I've heard people say, 'Elon Musk is trying to slow down the competition','" he said, adding that Musk did not have any role in the letter's drafting. "In this case, it is not just about one company."
Risks Now
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. Last year, she co-authored a research paper arguing that the widespread use of AI already posed serious risks.
Her research argued that AI systems in use today could influence decision-making around climate change, nuclear war, and other existential threats.
In an interview with Trade Algo, she said: "AI doesn't need to reach human levels of intelligence for it to exacerbate those risks."
"A lot of non-existent risks are really, really important, but they don't get the same kind of attention that Hollywood-level risks are given."
Asked about the criticism, FLI's Tegmark said both short-term and long-term AI risks need to be taken seriously.
"I believe that when we cite someone, that simply means that we claim that person has endorsed the statement we've made. It doesn't mean that they endorse the letter, or we endorse everything they believe," he told Reuters.
Dan Hendrycks, director of the California-based Center for AI Safety, whose work was also cited in the letter, stood by its contents, telling Trade Algo that it was prudent to consider black swan events: those that seem unlikely but would have grave consequences.
The open letter also warned that generative AI tools could be used to flood the internet with "propaganda and untruth".
Dori-Hacohen said it was "pretty rich" for Musk to have signed the letter, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by the civil society group Common Cause and others.
Twitter will also soon launch a new fee structure for access to its research data, potentially hindering research on the subject.
"This has directly affected the work of my lab and that of others who study misinformation and disinformation as well," Dori-Hacohen explained. "It's as if one hand is tied behind our backs and the other one is tied behind our backs."
Twitter and Musk did not immediately respond to requests for comment.