Technology

U.S. Starts Studying How to Regulate AI Systems Like ChatGPT

April 11, 2023

The Biden administration said on Tuesday that it is seeking public input on potential accountability measures for artificial intelligence (AI) systems, amid growing concerns about the technology's impact on national security and education.

ChatGPT, an AI program that recently captured public attention for its ability to produce quick answers to a wide range of questions, has drawn particular scrutiny from U.S. lawmakers after becoming the fastest-growing consumer application in history, with more than 100 million monthly users.

The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, is seeking input on a possible AI "accountability mechanism," noting growing interest among regulators in such a measure.

As part of its inquiry, the agency wants to know what measures could be put in place to provide assurance that "AI systems are legal, effective, ethical, safe, and otherwise trustworthy."

"It is clear that responsible AI systems could have enormous benefits, but only if we address their potential consequences and harms in advance. "In order for these systems to achieve their full potential, companies and consumers will have to be able to trust them in order to make full use of them," said Alan Davidson, Administrator of the NTIA.

Last week, President Joe Biden said that much about AI remains unknown and that its dangers are yet to be determined. "Technology companies have a responsibility, in my view, to ensure that their products are safe before making them available to the public," he said.

ChatGPT, which has been praised by some users for its quick responses and criticized by others for its inaccuracies, is made by California-based OpenAI and backed by Microsoft Corporation (MSFT.O).

The NTIA plans to draft a report examining "efforts to make sure that AI systems work as claimed – and without causing harm," and said the effort will inform the Biden administration's ongoing work to ensure the federal government addresses AI-related risks and opportunities cohesively and comprehensively.

Separately, the technology ethics group Center for Artificial Intelligence and Digital Policy has asked the Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, arguing in its complaint that the system is biased, deceptive, and a risk to privacy and public safety.

By John Liu, Contributor

