Several major technology companies, including Microsoft, Amazon and OpenAI, reached a landmark international agreement on artificial intelligence safety at the Seoul AI Safety Summit on Tuesday.
The agreement will see companies from countries including the United States, China, Canada, the United Kingdom, France, South Korea and the United Arab Emirates voluntarily commit to ensuring the safe development of their most advanced artificial intelligence models.
Where they have not already done so, AI model makers will each publish safety frameworks outlining how they will measure the risks of their cutting-edge models, such as the risk of bad actors misusing the technology.
These frameworks will include “red lines” for the companies, defining the types of risks associated with cutting-edge artificial intelligence systems that are deemed “intolerable”. These risks include, but are not limited to, automated cyberattacks and bioweapon threats.
In such extreme cases, the companies said that if they cannot guarantee these risks have been mitigated, they will implement a “kill switch” and halt development of the AI models in question.
British Prime Minister Rishi Sunak said in a statement on Tuesday: “It is a world first to have so many leading artificial intelligence companies from so many different parts of the globe all agreeing to the same commitments on AI safety.
“These commitments ensure that the world’s leading AI companies will provide transparency and accountability for their plans to develop safe AI,” he added.
The agreement reached on Tuesday builds on a series of commitments made by companies developing artificial intelligence software at the UK AI Safety Summit held at Bletchley Park last November.
The companies have agreed to take input on these thresholds from “trusted actors”, including their home governments where appropriate, before publishing them ahead of the next AI summit, the AI Action Summit planned to be held in France in early 2025.
The commitment reached on Tuesday applies only to so-called “frontier” models. The term refers to the technology behind generative AI systems, such as OpenAI’s GPT family of large language models, which powers the popular ChatGPT AI chatbot.
Since ChatGPT was first launched to the world in November 2022, regulators and technology leaders have become increasingly concerned about the risks posed by advanced artificial intelligence systems capable of producing text and visual content on par with, or better than, what humans can create.
The European Union is trying to curb unfettered development of artificial intelligence through the Artificial Intelligence Act, which was approved by the EU Council on Tuesday.
However, the UK has yet to propose a formal AI law, instead opting for a “light-touch” approach to AI regulation that requires regulators to apply existing laws to the technology.
The government recently said it would consider legislation for cutting-edge models at some point in the future, but has not yet committed to a timetable for introducing formal laws.