January 14, 2025

Jaque Silva | NurPhoto | Getty Images

LONDON — Britain says it wants to do “its own thing” when it comes to regulating artificial intelligence, suggesting it may diverge from the approach taken by other leading Western nations.

“It’s really important that the UK does its own thing in terms of regulation,” Feryal Clark, the UK’s minister for artificial intelligence and digital government, said in a CNBC interview broadcast on Tuesday.

She added that the government already has “good relationships” with artificial intelligence companies such as OpenAI and Google DeepMind, which voluntarily open their models to the government for safety testing.

Clark added: “It’s important that we consider safety from the outset of model development… which is why we will work with the sector on any safety measures.”


Her comments echoed remarks made by British Prime Minister Keir Starmer on Monday that, after Brexit, the UK “now has the freedom to regulate in the way we think is best for the UK.”

“There are different models around the world, there’s the EU approach and there’s the US approach, but we have the ability to choose the model that we think is in our best interests and we intend to do that,” Starmer said, taking questions from reporters after launching his 50-point plan to make the UK a global leader in artificial intelligence.

Differences with the United States and the European Union

So far, the UK has not introduced formal laws to regulate artificial intelligence, relying instead on individual regulators to enforce existing rules on companies developing and using the technology.

This differs from the EU, which has introduced comprehensive legislation intended to harmonize rules for the technology across the bloc, taking a risk-based approach to regulation.

The United States, meanwhile, has no AI regulation at the federal level, relying instead on a patchwork of rules at the state and local levels.

During Starmer’s election campaign last year, Labour pledged in its manifesto to introduce regulation of so-called “frontier” artificial intelligence models, referring to large language models such as OpenAI’s GPT.

However, the UK has yet to confirm details of the proposed AI safety legislation, saying instead that it will consult with industry before proposing formal rules.

“We will work with industry to develop this and bring it forward in line with what we said in our manifesto,” Clark told CNBC.

Chris Mooney, partner and commercial director at London-based law firm Marriott Harrison, told CNBC that while the EU is pressing ahead with its AI Act, the UK has taken a “wait-and-see” approach to AI regulation.

Mooney told CNBC via email: “While the UK government has said it is taking a ‘pro-innovation’ approach to AI regulation, our experience working with clients is that they find the current position uncertain and, therefore, unsatisfactory.”

One area where Starmer’s government has made public plans to reform the rules on artificial intelligence is copyright.

Late last year, the UK launched a consultation reviewing the country’s copyright framework to assess possible exceptions to existing rules that would allow AI developers to use the works of artists and media publishers to train their models.

Businesses face uncertainty

Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC that while the government’s AI action plan “shows ambition,” its implementation without clear rules is “almost reckless.”

“We’ve missed critical regulatory windows twice — first with cloud computing and second with social media,” Duggal said. “We can’t make the same mistake with artificial intelligence because the risks of artificial intelligence are exponentially greater.”

“Britain’s data is our crown jewel; it should be used to build sovereign AI capabilities and create a British success story, not just to power overseas algorithms that we cannot effectively regulate or control,” he added.

Details of Labour’s planned artificial intelligence legislation were originally expected to appear in King Charles III’s speech at the opening of the British Parliament last year.

However, the government has only promised to establish “appropriate legislation” for the most powerful AI models.

“Clarity is needed from the UK government here,” John Buyers, international head of artificial intelligence at law firm Osborne Clarke, told CNBC, adding that he had heard from sources that a consultation on formal AI safety laws was “pending release.”

“By releasing consultations and plans piecemeal, the UK has missed the opportunity to provide a holistic view of the direction of its AI economy,” he said, adding that the failure to disclose details of new AI safety laws would create uncertainty for investors.

Still, some in the UK tech community believe a looser, more flexible approach to AI regulation may be right.

Russ Shaw, founder of the advocacy group Tech London Advocates, told CNBC: “It’s clear from recent discussions with the government that there is a lot of effort being put into AI safeguards.”

He added that the UK was well positioned to take a “third way” on AI safety and regulation: “sector-specific” rules applied to industries such as financial services and healthcare.
