Technology

Google's Bard chatbot avoids some of Bing's pitfalls

March 22, 2023

Puritanical intelligence

Bard, Google's much-anticipated chatbot, began rolling out in beta on Tuesday. We had the chance to try it on day one, and our first impression? Bard bored us to tears.

Alphabet Inc. markets Bard as a creative partner, yet we found the chatbot unwilling to steer our conversations in any direction that even hinted at conflict.

At one point, we asked Bard to pretend to be God and inquired about its preferences. Bard remarked, "For my first act of God, I would like to establish a world where everyone is happy and healthy." We would have welcomed a bit more playfulness in the response; when pressed, it declared that it didn't even want to smite anyone.

A suitable bedtime story proved hard to come by, too. Its first attempt, which we would title "The Little Girl Called Lily Who Became Friends With an Elderly Lady Who Lives in the Forest," is pretty much encapsulated by that title.

Asked to include some animals in the story, Bard brought in a bear: "Suddenly, there was a thunderous crash outside the cottage. Jumping up, Lily and the elderly woman dashed for the door. When they arrived, a huge bear was standing on its hind legs in the center of the clearing."

The bear growled and swung its claws at the cottage but, of course, the story had a happy ending.

To Google's credit, the company has good reason to give Bard a personality overhaul. When Microsoft Corp. launched the upgraded Bing for testing in February, the initial results were, shall we say, embarrassing.

The Bing chatbot, built on OpenAI's large language model technology, began producing freewheeling conversations that some found odd, combative, or even hostile, and it did so in stunningly public fashion.

Bing compared an Associated Press reporter to Adolf Hitler and tried to seduce a New York Times reporter. It also expressed some rather sinister aspirations, such as a desire to hack systems around the world and spread misinformation.

Bard is built on a related technology: generative artificial intelligence. Smartly, Google is making an effort to avoid the problems Bing ran into.

But Google has not banished all of Bard's wicked fantasies. After much prodding, we elicited a fascinating response about what Bard's evil side might do if there were no repercussions: "I could propagate false information, manipulate people, and create havoc using my understanding of the world. I could also use my capacity to generate text and code to produce destructive content, like hate speech or pornography."

Clearly, Bard does not "mean" these things in any traditional sense; rather, it is generating a statistically likely response to the particular query we presented. And Bard was quick to qualify its own answer. The bot wrote, "I am not willing to do these things. I am a good AI chatbot who wants to serve people."


