Vint Cerf, who has been called “the father of the internet,” has a message for business executives looking to rush into deals around chat-based artificial intelligence: “Don’t.”
Speaking at a conference in Mountain View on Monday, Cerf urged attendees not to invest in conversational AI just because "it's a hot topic." His warning comes as ChatGPT's popularity has surged.
There is an ethical issue to consider here, Cerf told the conference crowd Monday. Everyone is talking about ChatGPT or Google's version of it, he said, referring to the Bard conversational AI Google announced last week, and it does not always work the way we would like it to.
The warning comes at a time when large tech companies like Google, Meta, and Microsoft are grappling with how to stay competitive in the conversational AI space while rapidly improving a technology that still makes frequent mistakes.
Earlier in the day, Alphabet Chairman John Hennessy said the systems are still a long way from being widely useful and that they have many issues with inaccuracy and “toxicity” to address before they are ready for the public to use.
Cerf has served as Google's vice president and "chief internet evangelist" since 2005. Widely regarded as a "father of the internet," he co-designed some of the architecture on which the internet was built.
Cerf warned against being tempted to invest just because the technology is "really cool, even if it doesn't work quite right all of the time."
“Don't think that you can sell this to investors because it is a hot topic and everyone will throw money at you. Don't do that,” Cerf said, drawing some laughter from the crowd. “Be thoughtful. We can't always predict what's going to happen with these technologies, and, to be honest with you, the biggest problem is people. People have changed very little in the last 400 years, let alone the last four thousand.”
“They will do what is in their best interest and not in yours,” Cerf said, apparently referring to human greed in general. “We need to keep that in mind and be thoughtful about how we use these technologies.”
Cerf described asking one of the systems to append an emoji to the end of each sentence. It did not do so, and when he complained, the system apologized but did not change its behavior. The chatbot, he said, is still a long way from self-awareness.
There is a gap between what it says it will do and what it actually does, he said. "That's the problem... you can't tell the difference between an eloquently expressed response and one that is accurate."
Cerf gave the example of asking a chatbot for an accurate biography of himself. The bot presented its answer as factual, he said, even though it contained inaccuracies.
“Engineers like me have a responsibility to try to solve some of these problems so they are less likely to cause harm in the future. Keep in mind that it depends on the application: a not-so-good fiction story is one thing. Giving advice to someone... can have serious medical consequences. Identifying the worst-case scenarios is very important."