Technology

Fans are quietly embracing custom AI chatbots

March 20, 2023

Begun, the AI lawsuits have

The Character.AI website lets users create and interact with AI chatbots that represent real people (Tim Cook, Elon Musk) and fictional characters. The virtual Master Yoda, for instance, tells you about his home planet, Dagobah, and current events. ("Bankrupt Silicon Valley Bank now. Pensions lost.")

Meanwhile, Hermione Granger from Harry Potter will refrain from naming He Who Must Not Be Named, but she will offer her opinion about Governor Ron DeSantis ("a person of principle and character") and President Joe Biden ("I think he genuinely cares about this country.")

As media companies watch brazen internet startups once again encroach on their territory, they are left to wonder whether their creations are being infringed upon.

Character.AI, which was recently valued at $1 billion in a fundraising round, is not the first website blurring the already hazy line between copyrighted material and legally permitted uses of that material, known as fair use. OpenAI's ChatGPT can also convincingly impersonate Yoda, as well as Rick from the animated series Rick and Morty. "Well, jeez, Morty, that Ukraine War is a real stinker; it's a convoluted mess."

Among the so-called AI image generators, which let users create pictures from a text prompt, sites using Stability AI's open-source engine will readily produce work featuring copyrighted characters, such as Batman and Spider-Man fighting amid the wreckage of a cyberpunk world. Users can even ask for the illustration in a particular illustrator's distinctive style.

The creators of these AI services have already been the target of a few intellectual property lawsuits. Getty sued Stability AI for compiling its image database by downloading millions of online pictures, many of which were copyrighted. A San Francisco-based class action law firm representing three artists filed a similar lawsuit against Stability AI, DeviantArt, and Midjourney for using billions of copyrighted images to train their models. News providers have also criticized OpenAI's use of news stories to train its AI algorithms.

Yet those criticisms focus on the databases used to build and train these new AI engines. The legal status of the output, such as Yoda's ramblings, which Character.AI and its ilk appropriated without consent from Walt Disney Co.-owned Lucasfilm and other rights holders, is still up in the air.

It's also uncertain how the courts will view these fresh issues. Is the virtual Yoda akin to fan fiction and the geeky cosplayer at Comic-Con? Or is it an unauthorized derivative work that could draw in real customers and earn real money?

There are already signs that internet companies are being cautious. Dall-E, the image generator created by Microsoft Corp.-backed OpenAI, refuses to produce images of copyrighted characters. To the dismay of some Reddit users, Character.AI places boundaries around what its chatbots can and cannot say. The firm also specifies in its terms of service that users must own the "right, title, and interest in" any content they upload to the platform. Of course, the site's policies don't appear to stop the practice.

These conflicts between rights holders and internet companies date back many years. Two decades ago, Napster made it possible to download music for free; later, YouTube made it simple to post copyrighted movies and TV episodes. Along the way, Google decided to scan every book in university libraries and post them online.

The tech and media sectors engaged in protracted legal battles over these concerns before, in essence, coming to negotiated agreements and agreeing to split the enormous profits from the sale of copyrighted songs, movies, and books.

So, is everyone now more aware than before? "I think it may well be the case that people understand there is actual technology we want to respect and don't want to eliminate, and they will look for methods to accommodate it," said Mark Lemley, a Stanford University law professor and an IP attorney whose clients include Stability AI.

Yet, as he concedes, "the wrong case can bring all advancement in the industry to a screeching halt," as can "an adverse early judgment."

Author
Bryan Curtis
Contributor

