Watching the new AI-powered chatbots promised in rival launches by Microsoft and Google over the past week drives home two essential points. First, the sensation that "wow, this could change everything." And second, the recognition that if chat-based search and similar AI technologies are to have an impact, we will need to place a great deal of trust in them and in the organizations that provide them.
When AI gives us answers, rather than mere information from which to form our own judgments, we will have to place much more faith in it. This new generation of chat-based search engines is better described as "answer engines" that can, in a way, "show their work" by linking to the web pages they surface and summarize. But for an answer engine to be useful, we will need to accept its responses at face value most of the time.
The same will be true of tools that help generate text, spreadsheets, code, images, and everything else we create on our devices, which both Microsoft and Google have pledged to build into their respective productivity suites, Microsoft 365 and Google Workspace.
These tools and chat-based search are all built on the latest generation of "generative" AI, which can produce verbal and visual content rather than merely analyzing it, as more conventional AI does. And the additional trust it demands is one of the ways this new generative AI technology is likely to further concentrate power in the hands of the largest tech firms.
Generative AI of all kinds will weave technology more deeply into our lives and work than it is now, answering our queries, composing our memos and speeches, and even creating poetry and art. Because of the massive financial, intellectual, and computing resources required to build and operate this technology, the firms that control these AI systems will be among the largest and wealthiest in the world.
OpenAI, creator of the ChatGPT chatbot and the DALL-E 2 image generator that have driven much of the current buzz, seemed to be an exception: a very small startup that has spurred significant AI progress. But it has leapt into the arms of Microsoft, which has made multiple rounds of investment, in part because OpenAI needs to pay for the computing power its systems require to run.
The growing concentration of power matters all the more because this technology is both very potent and fundamentally flawed: it has a propensity to confidently convey false information. That means building the technology is only the first step toward mainstreaming it; the second is minimizing the variety and volume of mistakes it will inevitably make.
In other words, trust in AI will become the new moat that big technology businesses strive to protect. Lose the user's confidence often enough, and they may abandon your product. For instance: In November, Meta released Galactica, a chat-based AI search engine for scientific knowledge, to the public. Its erroneous replies prompted such scathing criticism that it was shut down after three days, Yann LeCun, Meta's chief AI scientist, said in a recent lecture. Perhaps the engine's intended audience of scientists was partly to blame.
Galactica was "the result of a research effort, as opposed to anything designed for commercial usage," a Meta spokesperson says. Joelle Pineau, managing director of fundamental AI research at Meta, said in a public statement: "Given the tendency of large language models such as Galactica to generate text that may appear authentic, but is inaccurate, and because it has moved beyond the research community, we decided to remove the demo from public access."
On the other hand, demonstrating that your AI is more trustworthy could be a more potent competitive advantage than having the biggest, best, or fastest database of answers. This appears to be Google's strategy: the company has emphasized in recent announcements and in a presentation on Wednesday that as it develops and deploys its chat-based and generative AI systems, it will strive for "responsible AI," as defined in its 2019 "AI Principles."
My colleague Joanna Stern offered a good overview last week of using Microsoft's Bing search engine and Edge web browser with ChatGPT built in. You can sign up for a waitlist to test the service, and Google says its chatbot, Bard, will be available in the coming months.
In the meantime, you can try other chat-based search engines to see why trust in them is so problematic. There is You.com, which will answer your questions via a chatbot, and Andisearch.com, which will summarize every article it returns in response to a search.
Even these small services have an air of wonder about them. Ask You.com's chat module something like "Please list the best chat AI-based search engines," and under the right circumstances it can provide a cohesive, concise response that includes the most prominent companies in the field. Depending on how you frame the question, it can also add nonsensical information to its response.
In my testing, You.com would frequently give a reasonably accurate response but add the name of a nonexistent search engine. Judging by searches for the fabricated names it generated, You.com appeared to be mistaking the names of people mentioned in articles for the names of search engines.
Angela Hoover, the company's chief executive, explains that Andi does not deliver search results in a conversational style because it is difficult to ensure such replies are accurate. "It's been thrilling to see these major businesses confirm that conversational search is the future, but nailing factual accuracy is difficult," she says. Consequently, Andi provides conventional search results but offers to summarize each page it returns using AI.
Andi's team now comprises fewer than ten people, and the company has raised $2.5 million. It is remarkable what such a tiny team has accomplished, but creating trustworthy AI will require vast resources, likely on par with those of Microsoft and Google.
This is due to two factors. The first, says Tinglong Dai, a professor of operations management at Johns Hopkins University who studies human-AI interaction, is the massive amount of computing infrastructure required: tens of thousands of machines in the cloud infrastructure of large technology companies. Some of these machines are used to train the massive "foundation" models that power generative AI systems. Others specialize in serving the trained models to users, a process that, as the number of users grows, can become even more demanding than the training itself.
The second, Dr. Dai says, is that it takes significant human resources to continually test and adjust these models so they don't spew an excessive amount of gibberish or biased and offensive language.
Google has reportedly asked every employee to test its new chat-based search engine and report problems with its results. Microsoft, which is already releasing its chat-based search engine to the public in limited form, is conducting that kind of testing in public. Microsoft's engine is built on ChatGPT, which has already proved vulnerable to attempts to "jailbreak" it into generating inappropriate content.
Google's delayed deployment, ChatGPT's sometimes-inaccurate results, and chat-based Bing's partial or misleading responses may well be fixable only through wide-scale experimentation with these technologies, of the kind that only giant tech corporations can conduct.
"The only reason ChatGPT and other foundational models are so awful at detecting bias and even basic truths is that they are closed systems with no feedback mechanism," says Dr. Dai. Large technology firms such as Google have decades of experience soliciting feedback to improve their algorithmically generated results. Google Search and Google Maps, for instance, have long incorporated such feedback mechanisms.
Wikipedia might serve as a model for the future of trust in AI systems, according to Dr. Dai. Wikipedia is one of the least algorithmically generated sites on the internet. Although the fully human-written and human-edited encyclopedia is not as reliable as primary sources, its users generally know this and find it useful anyway. Wikipedia shows that "social solutions" to issues such as trust, whether in the output of an algorithm or in the work of human editors, are possible.
But the Wikipedia model also demonstrates that the labor-intensive techniques for establishing trustworthy AI, which companies such as Google and Meta have deployed for years at scale in their content-moderation systems, are likely to consolidate the influence of today's large technology corporations. They possess not only the computational resources but also the human resources needed to deal with all the false, incomplete, or biased information their AIs will generate.
In other words, establishing trust by moderating material produced by AIs may not differ much from establishing trust by moderating human-generated content. And that is a tough, time-consuming, resource-intensive endeavor that the largest technology companies have already shown they can undertake in a way few other companies can.
The current media, analyst, and investor frenzy over artificial intelligence reflects the obvious and immediate value of these new kinds of AI when integrated into a search engine, along with their many other potential uses. This may prove a disruptive technology, resetting who harvests attention and where it is directed, threatening Google's search monopoly and creating new markets and revenue streams for Microsoft and others.
According to a recent UBS analysis, ChatGPT may have been the fastest service in history to reach 100 million users, so it is clear that being an aggressive first mover in this sector can matter greatly. It is equally evident that a successful first mover in this field will need resources that only the largest technology corporations can mobilize.