In 2018, visitors walked past a booth equipped with AI (artificial intelligence) security cameras using facial recognition technology at the 14th China International Public Safety and Security Exhibition at the China International Exhibition Center in Beijing.
Nicholas Asfori | AFP | Getty Images
The Biden administration is preparing to open a new front in its effort to safeguard U.S. artificial intelligence from China, with preliminary plans to place guardrails around the most advanced AI models, the core software of systems such as ChatGPT, sources said.
The U.S. Commerce Department is considering new regulatory measures to restrict the export of proprietary or closed-source artificial intelligence models whose software and training materials are kept secret, three people familiar with the matter said.
Any action would be in addition to a series of measures taken over the past two years to block the export of advanced artificial intelligence chips to China in an effort to slow down Beijing’s development of cutting-edge technology for military purposes. Even so, regulators have struggled to keep up with the industry’s rapid growth.
The U.S. Commerce Department declined to comment. The Chinese Embassy in Washington did not immediately respond to a request for comment.
For now, nothing stops American AI giants such as Microsoft-backed OpenAI, Alphabet's Google DeepMind and rival Anthropic from developing some of the most powerful closed-source AI models and selling them to almost anyone in the world without government oversight.
Government and private sector researchers worry that U.S. adversaries could use the models, which mine vast amounts of text and images to summarize information and generate content, to launch aggressive cyberattacks or even create powerful biological weapons.
To develop export controls on AI models, the U.S. may adopt a threshold contained in the executive order on artificial intelligence issued last October, which is based on the amount of computing power required to train a model, sources said. Once that level is reached, a developer must report its AI model development plans to the Commerce Department and provide the agency with test results.
Two U.S. officials and another source briefed on the discussions said computing power thresholds could form the basis for determining which artificial intelligence models are subject to export restrictions. They declined to be named because the details have not been made public.
If adopted, the threshold would likely restrict exports only of models that have yet to be released, since none is believed to have reached it so far, although Google's Gemini Ultra is thought to be close, according to EpochAI, a research group that tracks trends in artificial intelligence.
Sources stressed that the agency is far from finalizing its proposed rules. But the fact that such a measure is being considered suggests that the U.S. government is seeking to close the gap in efforts to thwart Beijing’s artificial intelligence ambitions, despite the serious challenges of enforcing a strong regulatory regime on the rapidly evolving technology.
Peter Harrell, a former National Security Council official, said that as the Biden administration weighs competition with China and the dangers of sophisticated artificial intelligence, AI models are "obviously one of the tools, one of the potential bottlenecks you need to consider." "Whether, in fact, it can actually be turned into a bottleneck for export controls remains to be seen," he added.
Biological weapons and cyberattacks?
The U.S. intelligence community, think tanks, and academia are increasingly concerned about the risks posed by foreign bad actors acquiring advanced artificial intelligence capabilities. Researchers from Gryphon Scientific and Rand Corporation say advanced artificial intelligence models could provide information that could help create bioweapons.
The U.S. Department of Homeland Security said in its 2024 Homeland Threat Assessment that cyber actors may use artificial intelligence to "develop new tools" that "enable larger, faster, more efficient and more evasive cyber attacks."
One source said any new export rules could also target other countries.
"There may be explosive growth in the use and utilization of artificial intelligence, and it will be difficult for us to actually follow this trend," Brian Holmes, an official in the Office of the Director of National Intelligence, said at a U.S. export control conference in March, flagging China's progress as a matter of particular concern.
The AI crackdown
To address these concerns, the U.S. has taken measures to stem the flow of American AI chips, and the tools used to make them, into China.
It has also proposed a rule that would require U.S. cloud companies to notify the government when foreign customers use their services to train powerful AI models that could be used in cyberattacks.
But so far, it has not addressed the AI models themselves. Alan Estevez, who oversees U.S. export policy at the Commerce Department, said in December that the agency was examining options for regulating exports of open-source large language models (LLMs) before seeking industry feedback.
Tim Fist, an artificial intelligence policy expert at the Washington think tank CNAS, said the threshold “is a good interim measure until we develop better ways to measure the capabilities and risks of new models.”
The threshold is not set in stone, however. One of the sources said the Commerce Department could end up with a lower floor, combined with other factors such as the type of data involved or the potential uses of a model, for example its ability to design proteins that could be used to create a biological weapon.
Regardless of where the threshold is set, exports of AI models will be difficult to control. Many models are open source, meaning they would remain outside the scope of the export controls under consideration.
Fist said that imposing controls even on the more advanced proprietary models would prove challenging, because regulators may struggle to define the right criteria for determining which models should be controlled at all.