An unexpected question arose last month when the Supreme Court examined the viability of tech companies' liability shield: Will the shield apply to tools powered by artificial intelligence, like ChatGPT?
The question, which Justice Neil M. Gorsuch raised during arguments in Gonzalez v. Google, could have broad ramifications as tech firms scramble to capitalize on the success of OpenAI's chatbot and roll out comparable products, as my colleague Will Oremus noted last month.
But according to the two congressmen who sponsored the legislation, speaking with The Technology 202, the answer is already clear: No, these tools won't be covered by Section 230.
The 1996 law, written by then-Reps. Ron Wyden and Chris Cox, shields digital providers from legal action over user content they host. And, as Will noted, courts have repeatedly ruled that Section 230's protections extend to search engines when they link to or publish content from other parties.
Gorsuch, however, speculated last month that those safeguards would not apply to AI-generated content, saying the technology "generates polemics today that would constitute content that goes beyond picking, choosing, evaluating, or digesting stuff. That is not protected, either."
Gorsuch's remark sparked a heated discussion that is growing more pertinent as Silicon Valley juggernauts increase their AI spending and release new products. Wyden and Cox assert that Gorsuch was correct, meaning businesses could be exposed to a flood of lawsuits if their AI tools misfire.
Wyden (D-Ore.), a staunch defender of the law and now a senator, said in a statement that "AI tools like ChatGPT, Stable Diffusion, and others are being rapidly implemented into popular digital services and should not be covered by Section 230. And it isn't a particularly close call."
Wyden, who has advocated requiring businesses to screen their AI for biases, added that Section 230 "is about protecting users and sites for hosting and organizing users' speech" and "has nothing to do with protecting firms from the consequences of their own actions and products."
Cox, now a board member of the tech industry trade group NetChoice, said that "Section 230 as written gives a clear norm in this circumstance."
"Section 230 will not be a defense when ChatGPT develops content that is later challenged as illegal," he told me. "A supplier of an interactive computer service must not have participated in the creation or development of the information at issue."
Google, Amazon, and Meta are among the tech firms that belong to NetChoice.
As Section 230's co-authors, Wyden and Cox could carry particular weight when courts consider how to apply the law in future AI-related cases.
In Gonzalez v. Google, the Supreme Court is weighing whether social networks can be held liable for allegedly promoting terrorist organizations' content. The case tests how Section 230 applies to companies' algorithmic recommendations. The law also shields businesses from legal action over "good faith" efforts to stop the spread of harmful content.
In a brief filed with the court, Wyden and Cox argued that "Section 230 protects targeted recommendations to the same extent that it protects other types of content curation and presentation," and urged the justices to affirm a lower-court ruling upholding the protections in the case.
Critics of Section 230 argue the protections should not apply when platforms amplify or recommend content, claiming that such activity amounts to the platforms' own conduct.
Wyden and Cox pushed back on those claims, but they draw the line at content produced by ChatGPT and other AI-powered applications.
Neither Stability AI, the maker of Stable Diffusion, nor OpenAI, the maker of ChatGPT, responded to requests for comment.