Aerial view of the San Francisco skyline and the Golden Gate Bridge in California on October 28, 2021.
Carlos Barria | Reuters
LONDON – The British government is expanding its facility for testing “cutting-edge” artificial intelligence models to the United States, as it seeks to bolster its profile as a top global player in tackling the technology’s risks and to strengthen cooperation with the U.S. as governments around the world grapple with AI.
The U.K. government announced on Monday that it will open a U.S. counterpart of its AI Safety Institute, a state-backed body focused on testing advanced artificial intelligence systems to ensure their safety, in San Francisco this summer.
The U.S. arm of the AI Safety Institute aims to recruit a team of technical staff led by a research director. In London, the institute currently has a team of 30 and is chaired by Ian Hogarth, a well-known British technology entrepreneur who founded the concert discovery website Songkick.
British Technology Minister Michelle Donelan said in a statement that the institute’s U.S. expansion “represents the UK’s leadership in artificial intelligence.”
“This is a critical time for the UK to take a global perspective on the risks and potential of AI, strengthen our partnership with the US and pave the way for other countries to leverage our expertise as we continue to lead the world on AI safety.”
The expansion “will enable the UK to tap into the Bay Area’s rich tech talent, work with the world’s largest AI labs based in London and San Francisco, and solidify its relationship with the United States to advance AI safety in the public interest,” the government said.
San Francisco is home to OpenAI, the Microsoft-backed company behind the viral AI chatbot ChatGPT.
The AI Safety Institute was launched in November 2023 during the AI Safety Summit, a global event held at Bletchley Park, UK (home of the WWII codebreakers), aimed at promoting cross-border cooperation on AI safety.
The AI Safety Institute’s expansion into the United States comes on the eve of the AI Seoul Summit in South Korea, which was first proposed at last year’s UK summit at Bletchley Park. The Seoul summit will be held on Tuesday and Wednesday.
The government says it has made progress in evaluating cutting-edge AI models from some of the industry’s leading companies since the AI Safety Institute was established last November.
The institute said on Monday that some AI models completed basic cybersecurity challenges but struggled with more advanced ones, while some demonstrated PhD-level knowledge of chemistry and biology.
At the same time, all of the models tested remain highly vulnerable to “jailbreaks,” in which users trick them into producing responses that their content guidelines prohibit, and some produced harmful outputs even without deliberate attempts to circumvent their safeguards.
The government said the models tested were also unable to complete more complex and time-consuming tasks without human supervision.
It did not reveal the names of the AI models it tested. The government previously secured agreements from OpenAI, DeepMind and Anthropic to open up their coveted AI models to aid research into understanding the risks associated with their systems.
The development comes as the UK faces criticism for not introducing formal AI regulations, while other jurisdictions, such as the European Union, race to develop laws targeting the technology.
The EU’s landmark Artificial Intelligence Act is the first major legislation of its kind in the field. Once approved by all EU member states and in effect, it is expected to become a blueprint for AI regulation worldwide.