Humanity has been warning itself about the consequences of unrestrained technology since Aeschylus. This week, more than 1,000 researchers and executives joined that canon with an open letter calling for a halt to AI development.
It's a depressing read. The letter warns that AI could endanger jobs, spread propaganda, undermine civil discourse, and even lead to the "loss of control of our civilization." It calls for a six-month moratorium on advanced research in the field and urges industry leaders to develop safety protocols and governance systems to mitigate the potential risks.
Many of the signatories are serious and knowledgeable AI practitioners, and the issues they raise deserve attention. On balance, however, their approach would likely do more harm than good.
The problem is not the "pause" itself. Even if the signatories could somehow impose a global stop-work order, six months would do little to alter the trajectory of AI development.
It's hard to see much harm in a limited and partial pause that draws attention to the need to think carefully about AI safety. Regrettably, though, the call for a halt appears to be morphing into a broader aversion to progress.
Consider the letter's broader agenda. The signatories want "new and capable regulatory authorities," a "robust auditing and certification ecosystem," and "well-resourced institutions" to cope with the "dramatic economic and political disruptions" that AI may cause, among other things. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," they write.
This is a formula for complete inaction. No one can ever be certain that a given technology or application will have only positive effects. Innovation is a history of trial and error, risk and reward. One reason the United States leads the world in digital technology — and is home to nearly all of the largest internet platforms — is that it did not preemptively hobble the industry with well-intended but flawed regulation. It is no coincidence that nearly all of the major AI initiatives are also American.
Moreover, slowing AI development carries risks of its own. For all the doom and gloom, the technology is expected to make the world richer, healthier, smarter, and more productive for decades to come. It may add more than $15 trillion to the global economy by 2030. On the horizon are advances in medicine, biology, climate science, education, business processes, manufacturing, customer service, transportation, and much more. Any new restrictions must be weighed against the enormous potential of these efforts.
Nor is AI research progressing in isolation. The industry already operates within legal constraints that are sensitive to potential harms — liability regimes, consumer-protection laws, torts, and so on. Companies have every incentive to ensure the safety of their products. Trade associations are adopting ethical frameworks and codes of conduct. Far from being an "out-of-control race," as the letter's authors claim, the AI industry, like any other, is disciplined by law, politics, and consumer sentiment.
None of this is to say that potential dangers should be ignored. But rather than trying to foresee every risk in advance, policymakers should let innovation proceed while efforts to monitor and improve AI safety continue in tandem. Governments should fund AI risk research and promote best practices; the National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework is one example. Legislators should ensure that companies are honest and that consumers are protected, while staying alert to emerging risks.
It is natural to worry about new technology. But the wealth and abundance of American society owe much to past risks taken with an open and optimistic attitude. The AI revolution deserves nothing less.