Ever since Captain Kirk conversed with the ship's computer in the 1967 season of Star Trek, researchers have dreamed of enabling people and machines to hold normal, everyday conversations. After more than 50 years, the biggest technology companies are finally close to making this capability available to billions of consumers worldwide. Most notably, Google is rushing to release a competing chatbot named Bard based on its LaMDA technology, while Microsoft is integrating OpenAI's powerful ChatGPT technology into its Bing search engine.
As a longtime researcher of human-computer interfaces and a fan of Star Trek, I believe natural language is one of the best ways for humans and machines to communicate. At the same time, I am genuinely worried that, without sensible safeguards in place, conversational AI could be used to manipulate people with great efficiency and precision.
I refer to this emerging concern as the AI "manipulation problem" to distinguish it from other problems associated with AI, and I believe it is now vital for policymakers to address. What makes the problem unique is that conversational AI involves real-time engagement: an AI system can exert targeted influence on a user, perceive that user's reaction to the influence, and then modify its tactics to maximize impact. That may sound like an abstract process, but we normally just call it a conversation. After all, the best way to persuade someone is often to speak with them directly, adjusting your arguments as you notice signs of resistance or hesitancy.
The risk is that conversational AI has advanced to the point where automated systems can engage specific individuals in natural-sounding conversations that are coherent and convincing, and that could easily be deployed with a specific persuasive purpose. And while most current systems rely mostly on text, real-time voice will increasingly be incorporated, allowing for natural spoken interactions between humans and machines. These systems will also soon have a visible presence thanks to photorealistic computer-generated faces (often known as "digital humans") that behave, move, and express themselves just like real people. And while engaging in genuine dialogue with online products and services has many advantages, it may also prove to be the most effective vehicle for AI-driven influence campaigns.
The reality is that we are entering the age of natural computing, in which we will frequently interact with "virtual spokespeople" who appear to be real people but are actually designed to represent the particular interests and goals of the organizations that deploy them. These AI-driven conversational agents could be used by businesses, governments, or criminal organizations to expertly pursue a conversational agenda aimed at persuading you to purchase a specific product, believe a piece of misinformation, or even trick you into disclosing your bank account or other private information.
And believe me when I say that these AI-driven spokespeople will excel at persuasion. Unless prohibited by law, these systems will have access to personal information about you, such as your interests, values, and history, and will use it to craft dialogue designed to engage and persuade you personally. Unless regulated, they will also be able to monitor your emotional responses in real time, using your webcam to analyze your facial expressions, eye movements, and pupil dilation, all of which can be used to infer your feelings at any given moment. This means that as a virtual representative engages you in an influence-driven conversation, it will be able to adjust its tactics based on how you react to each word it says, identifying which approaches are working.
You could argue that this risk isn't new: human salespeople already do the same thing, reading emotions and adjusting their methods. Consider, however, that AI systems can already detect reactions that no human can perceive. For instance, AI systems can recognize "micro-expressions" in your face and voice that are too subtle for human observers to notice but that reveal your inner feelings. Similarly, AI systems can detect minute variations in your pupil size and "facial blood flow patterns," both of which are indicators of emotional state. Unless restricted by legislation, virtual representatives will be far more aware of our inner feelings than any human spokesperson could ever be.
Conversational AI will also learn to manipulate you over time. These systems will not only adapt to your immediate verbal and emotional responses; unless prohibited by law, they will also store data from each prior conversational encounter to determine which strategies work best on you specifically. By "playing" you over time, these AI systems will learn how to draw you into conversations, guide you toward accepting new ideas, provoke you, and ultimately persuade you to buy things you don't need or believe things you would normally recognize as absurd. And because this technology will be easy to scale, these techniques could be used to target and influence large populations.
These are, of course, potential threats. There is no evidence that today's conversational systems deliberately employ deceptive tactics. In many ways, the honeymoon phase we're in now is reminiscent of social media's early years, before the major platforms adopted ad-based business models. Back then, there was little incentive to use aggressive tracking, profiling, and targeting methods to monetize user data. The new danger is that conversational platforms will adopt similar business models that prioritize targeted influence. If they do, it could enable many of the abuses described above.
For this reason, I believe the manipulation problem may be the greatest near-term danger that AI poses to society, and regulators must treat it as a serious threat.
Consider that ChatGPT, introduced less than three months ago, already has more than 100 million active users, the fastest adoption rate of any application in history. We need guardrails to prevent real-time interactive manipulation of the public by conversational AI agents. That future is approaching quickly.
As a leading independent research provider, TradeAlgo keeps you connected from anywhere.