Jakub Porzycki | NurPhoto | Getty Images
As industry concerns over the safety and ethics of artificial intelligence grow, two of the most valuable AI startups, OpenAI and Anthropic, have agreed to let the U.S. AI Safety Institute test their new models before releasing them to the public.
The institute, part of the Commerce Department's National Institute of Standards and Technology (NIST), said in a press release that it will get "access to major new models from each company prior to and following their public release."
The group was established after the Biden-Harris administration issued the U.S. government's first-ever executive order on artificial intelligence in October 2023, which called for new safety assessments, equity and civil rights guidance, and research on AI's impact on the labor market.
"We are happy to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of our future models," OpenAI CEO Sam Altman wrote in a post. OpenAI also confirmed to CNBC on Thursday that the company's number of weekly active users has doubled over the past year to 200 million. Axios was first to report the figure.
The news comes a day after reports emerged that OpenAI is in talks to raise a funding round that would value the company at more than $100 billion. Thrive Capital is leading the round and will invest $1 billion, according to a person familiar with the matter who asked not to be identified because the details are confidential.
Anthropic, founded by former OpenAI research executives and employees, was most recently valued at $18.4 billion. Anthropic counts Amazon as a leading investor, while OpenAI is heavily backed by Microsoft.
The agreements between the government, OpenAI, and Anthropic "will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks," according to Thursday's release.
"We strongly support the U.S. AI Safety Institute's mission and look forward to working together to inform safety best practices and standards for AI models," Jason Kwon, chief strategy officer at OpenAI, told CNBC in a statement.
Anthropic co-founder Jack Clark said the company's "collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment" and "strengthens our ability to identify and mitigate risks, advancing responsible AI development."
A number of AI developers and researchers have expressed concerns about safety and ethics in the increasingly for-profit AI industry. Current and former OpenAI employees published an open letter on June 4 describing potential problems with the rapid advancement of artificial intelligence and the lack of oversight and whistleblower protections.
"AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this," they wrote. They added that AI companies "currently have only weak obligations to share some of this information with governments, and none with civil society," and that they cannot be relied upon "to share it voluntarily."
Days after the letter was published, a source familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft, and Nvidia. FTC Chair Lina Khan has described her agency's action as a "market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers."
On Wednesday, California lawmakers passed a hotly contested AI safety bill, sending it to Gov. Gavin Newsom's desk. Newsom, a Democrat, has until Sept. 30 to decide whether to veto the legislation or sign it into law. The bill has been met with skepticism by some in the industry.
WATCH: Google, OpenAI and other companies oppose California's AI safety bill