Technology

How Google's Caution on AI Gave Microsoft an Opening

March 7, 2023

More than two years ago, a pair of Google researchers began pushing the company to release a chatbot built on technology more powerful than anything else available at the time. The conversational computer program they had developed could confidently debate philosophy, banter about its favorite TV shows, and improvise puns about horses and cows.

The two researchers, Daniel De Freitas and Noam Shazeer, told colleagues that chatbots like theirs, supercharged by recent advances in artificial intelligence, would transform the way people searched the internet and interacted with computers, according to people who heard the remarks.

They pushed Google to give outside researchers access to the chatbot, tried to get it integrated into the Google Assistant virtual helper, and later asked for Google to make a public demonstration available.

Google executives rebuffed them at multiple turns, at least once saying the program didn't meet the company's standards for the safety and fairness of AI systems, the people said. The pair left Google in 2021 to start their own company working on similar technologies, telling co-workers they were frustrated by how difficult it was to get their AI tool in front of the public.

Now Google, which helped pioneer the current era of artificial intelligence, is watching one of its oldest rivals put its conservative approach to that very technology to the test. Last month, Microsoft Corp. announced plans to weave the technology behind ChatGPT, the chatbot that has dazzled the world with its ability to converse in humanlike ways, into its Bing search engine. ChatGPT was created by OpenAI, a seven-year-old startup co-founded by Elon Musk that drew on early AI advances made at Google itself.

Months after ChatGPT's debut, Google is moving toward releasing its own chatbot, built in part on technology that Mr. De Freitas and Mr. Shazeer worked on. The chatbot, called Bard, draws on information from the web to answer questions in a conversational format. Google said on February 6 that it was testing Bard internally and externally, with the aim of making it widely available soon, and that it planned to fold similar technology into some of its search results.

Google's relatively cautious approach was shaped by years of controversy over its AI efforts, from internal disputes over bias and accuracy to the public firing last year of an employee who claimed the company's AI had become sentient.

Those episodes made executives wary of the risks that public AI demos could pose to Google's reputation and to the search-advertising business that generated most of the nearly $283 billion in revenue last year at parent company Alphabet Inc., according to current and former employees and others familiar with the company.

"Google is striving to find a balance between how much risk to take against keeping thought leadership in the globe," said Gaurav Nemade, a former Google product manager who worked on the company's chatbot until 2020.

Messrs. De Freitas and Shazeer declined requests for interviews made through a representative.

A Google spokesman said their work was intriguing at the time, but that there is a big gap between a research prototype and a reliable product that is safe for people to use every day. The company also said it must be more careful than smaller startups when releasing AI technologies.

Google's approach may prove prudent. In February, Microsoft said it would impose new limits on its chatbot after users complained about inaccurate answers and occasionally unhinged responses when they pushed the program to its limits.

Sundar Pichai, chief executive of Google and Alphabet, told employees in an email last month that some of the company's most successful products weren't first to market but earned users' trust over time.

"We want to focus on building a great product right now and developing it responsibly," Mr. Pichai wrote. "It is going to be a long journey for everyone." 

Google began working on chatbots in 2013, when Larry Page, then the company's co-founder and chief executive, hired computer scientist Ray Kurzweil, who helped popularize the idea that machines could one day surpass human intelligence, a notion known as the "technological singularity."

Mr. Kurzweil began developing several chatbots, including one named Danielle that was based on a novel he was writing at the time, he later said. Mr. Kurzweil declined an interview request made through a spokeswoman for Kurzweil Technologies Inc., the software company he founded before joining Google.

Google later acquired the British artificial-intelligence startup DeepMind, whose stated mission was to create artificial general intelligence, or software that could mirror human mental capabilities.

Around the same time, academics and technologists pressed companies like Google to commit to keeping certain uses of AI off-limits, citing concerns that the technology could enable mass surveillance through facial-recognition software.

OpenAI was founded in 2015 by a group of tech entrepreneurs and investors including Mr. Musk, in part as a response to Google's growing dominance in the field. OpenAI said its initial nonprofit structure was meant to ensure that AI would be developed for the benefit of humanity rather than exploited for commercial gain. (Mr. Musk left OpenAI's board in 2018.)

Google ultimately pledged in 2018 not to use its AI technology in military weapons after an employee backlash over its work on a U.S. Department of Defense program. The program, known as Project Maven, used AI to automatically identify and track potential drone targets, such as cars. Google dropped the project.

Mr. Pichai also unveiled a set of seven AI principles to guide the company's work and prevent the spread of unfairly biased technology, including requirements that AI tools be accountable to people and "developed and tested for safety."

Around that time, Mr. De Freitas, a Brazilian-born engineer then working on Google's YouTube video platform, started an AI side project.

Mr. De Freitas had long dreamed of building computer programs capable of convincing conversation, his fellow researcher Mr. Shazeer said in a video interview posted to YouTube in January. At Google, Mr. De Freitas set out to build a chatbot that could mimic human conversation more closely than any previous attempt.

The project, initially called Meena, was kept under wraps for years while Mr. De Freitas and other Google researchers refined its responses. Internally, some employees worried about the risks of such programs after Microsoft was forced in 2016 to end the public release of its Tay chatbot, which users had goaded into problematic responses, including praise for Adolf Hitler.

Meena was first disclosed publicly in a Google research paper in 2020, which said the chatbot had been trained on 40 billion words drawn from public social-media conversations.

OpenAI had built a comparable model, called GPT-2, trained on 8 million webpages. It released a version of the program to researchers but initially held it back from the general public out of concern that it could be used to generate large amounts of deceptive, biased, or abusive language.

The Google team behind Meena also wanted to release their tool, though in a more limited form than OpenAI had. Google leadership rejected the proposal on the grounds that the chatbot didn't meet the company's AI principles around safety and fairness, Mr. Nemade said.

A Google spokesman said the chatbot had been through many reviews over the years and had been barred from broader release for a variety of reasons.

The chatbot's developers pressed on. Mr. Shazeer, a longtime software engineer at the Google Brain AI research group, joined the project, which they renamed LaMDA, for Language Model for Dialogue Applications, and fed it more data and computing power. Mr. Shazeer had helped create the Transformer, a widely heralded type of AI model that made it easier to build increasingly powerful programs like the one underlying ChatGPT.

But the technology underpinning their work soon fueled a public controversy. Timnit Gebru, a prominent AI ethics researcher at Google, said she was fired in late 2020 after refusing to retract a research paper on the risks of models like LaMDA and then complaining about the company in an email to colleagues. Google said she wasn't fired and that her research wasn't sufficiently rigorous.

Jeff Dean, Google's head of research, went out of his way to signal the company's continued commitment to responsible AI development. In May 2021, the company said it would double the size of its AI ethics team.

A week after that pledge, Mr. Pichai took the stage at the company's flagship annual conference and played two recorded conversations with LaMDA, which, on command, answered questions as if it were the dwarf planet Pluto or a paper airplane.

Google researchers prepared those examples in the days before the conference, following a last-minute presentation to Mr. Pichai, according to people briefed on the matter. The company emphasized the chatbot's accuracy and its efforts to reduce the chance of misuse.

In a blog post, two Google vice presidents said that minimizing such risks was a top priority when developing new technologies like LaMDA.

Google later considered releasing a version of LaMDA at its flagship conference in May 2022, according to Blake Lemoine, an engineer the company fired last year after he published conversations with the chatbot and claimed it was sentient. The company decided against the release after internal debate over his claims began, Mr. Lemoine said. Google has said Mr. Lemoine's concerns were unfounded and that his public disclosures violated its employment and data-security policies.

Mr. De Freitas and Mr. Shazeer also looked for ways to integrate LaMDA into Google Assistant, a software application the company had introduced four years earlier on its Pixel smartphones and home speakers, according to people familiar with the efforts. More than 500 million people use Assistant every month to perform simple tasks such as checking the weather and scheduling appointments.

The team overseeing Assistant began running experiments using LaMDA to answer user questions, the people said. But Google executives stopped short of offering the chatbot as a public demo.

Frustrated by Google's reluctance to release LaMDA to the public, Mr. De Freitas and Mr. Shazeer took steps to leave the company and began working on a startup built around similar technology, the people said.

Mr. Pichai personally intervened, asking the pair to stay and keep working on LaMDA, but without committing to a public release, the people said. Mr. De Freitas and Mr. Shazeer left Google near the end of 2021 and incorporated their new company, Character Technologies Inc., in November of that year.

Character's software, released last year, lets users create and chat with bots that role-play as specific figures such as Socrates or as stock characters such as a psychologist.

"It generated a bit of a commotion inside of Google, but eventually we realized we'd probably have greater luck launching something as a business," Mr. Shazeer said in the YouTube interview, without going into further detail.

Since Microsoft and OpenAI announced their new partnership, Google has sought to reclaim its position as an AI pioneer.

Google announced Bard in February, on the eve of a Microsoft event introducing Bing's integration of OpenAI's technology. Two days later, at an event in Paris the company said had originally been planned to showcase new regional search features, Google gave the press and the public another look at Bard and at a search tool that uses LaMDA-like AI technology to deliver textual answers to search queries.

Google said it regularly reassesses the conditions for releasing products and that, given the current level of interest, it decided to make Bard available to testers even though it isn't perfect.

Google has also been privately demonstrating search products that weave in responses from generative AI tools like LaMDA since early last year, said Elizabeth Reid, the company's vice president of search.

The company sees generative AI as especially useful in search for questions that have no single right answer, which it calls NORA queries, where the traditional blue Google link may not satisfy the user. Ms. Reid said the company also sees potential search uses for other categories of difficult queries, such as solving math problems.

Executives said accuracy remains a problem for this program and others like it. Such models have a tendency to make up an answer when they don't have enough information, a behavior researchers call "hallucination." Tools built on LaMDA technology have at times responded with fictional restaurants or off-topic answers when asked for recommendations, according to people who have used them.

After some users complained about disturbing conversations with the chatbot built into its search engine, Microsoft last month called the new version of Bing a "work in progress" and made changes, such as limiting the length of chat sessions, intended to reduce the chance of the bot lapsing into aggressive or creepy responses. In February, both Google and Microsoft published demonstrations of their bots that contained factual errors generated by the software.

Ms. Reid compared language models like LaMDA to "talking to a kid." "If the child feels they must respond to you and they don't have a response, they will make a response that seems reasonable," she said.

Google is continuing to refine its models, Ms. Reid said, including training them to admit when they don't know something rather than invent an answer. The company said LaMDA's performance on metrics such as safety and accuracy has improved over time.

Systems like LaMDA, which can distill millions of websites into a single paragraph of text, could also deepen Google's long-running disputes with major news outlets and other online publishers by depriving their sites of traffic. Google executives have said the company must deploy generative AI in search results in a way that doesn't upset website owners, in part by including source links, according to a person familiar with the matter.

"We've taken special care of the ecosystem concerns," Google senior vice president Prabhakar Raghavan said in February. "That's a concern we intend to focus on very heavily."

