January 10, 2025

An internet user views ChatGPT on a mobile phone in Suqian, Jiangsu province, China, on April 26, 2023.

Future Publishing | Getty Images

LONDON — Britain is poised to introduce its first-ever artificial intelligence laws, but Prime Minister Keir Starmer’s new Labour government faces a delicate balancing act: setting rules strict enough to address the technology’s risks while still allowing for innovation.

In the King’s Speech, delivered Wednesday by King Charles III on behalf of Starmer’s government, the administration said it would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

But the speech made no mention of an actual artificial intelligence bill, which many tech executives and commentators have been waiting for.

In the European Union, authorities have introduced a sweeping law known as the AI Act, which imposes tighter restrictions on companies developing and using artificial intelligence.

Many tech companies, large and small, hope the UK doesn’t take the same approach by imposing rules they consider too harsh.

What a UK Artificial Intelligence Bill might look like

Labour is still expected to introduce formal rules for artificial intelligence, as the party set out in its election manifesto.

Legislation on artificial intelligence would mark a stark contrast with the approach of Starmer’s predecessor. Under former Prime Minister Rishi Sunak, the government opted for a light-touch approach to AI, seeking instead to apply existing rules to the technology.

The previous Conservative government said in a February policy paper that introducing binding measures too soon could “fail to effectively address risks, quickly become out of date, or stifle innovation.”

In February this year, Peter Kyle, now Britain’s technology secretary, said Labour would legally require companies to share safety test data on their artificial intelligence models with the government.

Kyle, then the shadow science and technology secretary, said in an interview with the BBC at the time: “We will legally compel the release of those test data results to the government.”

Sunak’s government struck deals with technology companies to share safety testing information with the AI Safety Institute, a state-backed body that evaluates advanced AI systems. But participation was only on a voluntary basis.

The risk of inhibiting innovation

The UK government wants to avoid making its AI rules so heavy-handed that they ultimately hinder innovation. Labour also stated in its manifesto that it wants to “support diverse business models which bring innovation and new products to the market.”

Salesforce UK and Ireland chief executive Zahra Bahrololoumi told CNBC any regulation would need to be “nuanced” and assign responsibilities “accordingly”, adding that she welcomed the government’s call for “appropriate legislation”.

Matthew Houlihan, senior director of government affairs at Cisco, said any artificial intelligence rules need to be “centered on a thoughtful, risk-based approach.”

Other proposals already put forward by British politicians offer some insight into what might be included in Labour’s artificial intelligence bill.

Chris Holmes, a Conservative peer in the House of Lords, introduced a bill last year proposing to regulate artificial intelligence. The bill passed its third reading in the Lords in May and was sent to the House of Commons.

A bill proposed this way has a lower chance of becoming law than legislation put forward by the government. Still, it offers some ideas for how Labour might develop its own AI legislation.

The bill introduced by Holmes includes a proposal to create a centralized AI authority that would oversee enforcement of rules governing the technology.

Under the bill, companies would have to provide the AI Authority with records of the third-party data and intellectual property used to train their models, and ensure that any such material is used with the consent of its original source.

This somewhat echoes the EU’s AI Office, which oversees the development of advanced AI models.

Another suggestion from Holmes is for companies to appoint dedicated AI officers, who would be tasked with ensuring the company uses AI safely, ethically and fairly, and that the data used in its AI systems is unbiased.

How it compares with regulation elsewhere

Matthew Holman, a partner at law firm Cripps, told CNBC that, based on Labour’s commitments so far, any such law would inevitably be “far removed” from the sweeping scope of the EU AI Act.

Holman added that the UK was more likely to find a “middle ground” than to require sweeping disclosures from AI model makers. For example, the government could require AI companies to share the work they are doing behind closed doors with the AI Safety Institute, without having to reveal trade secrets or source code.

Kyle, the science and technology secretary, previously said at London Tech Week that Labour would not pass laws as strict as the EU’s AI Act, because it did not want to hinder innovation or deter investment from large AI developers.

Even so, UK AI laws would still put the country one step ahead of the US, which currently has no federal AI legislation of any kind. China’s regulations, meanwhile, are stricter than any legislation the EU, and likely the UK, is set to propose.

Last year, Chinese regulators finalized rules governing generative artificial intelligence, aimed at weeding out illegal content and strengthening security protections.

Sirion’s Liu said one thing he hopes the government won’t do is restrict open-source AI models. “It is vital that the UK’s new AI regulations do not stifle open source or fall into regulatory traps,” he told CNBC.

“There’s a huge difference between the harm caused by a large LLM from a company like OpenAI and the harm caused by a specific, custom open-source model used by a startup to solve a specific problem.”

Herman Narula, CEO of metaverse technology company Improbable, also believes that limiting open-source AI innovation would be a bad idea. “New government action is needed, but it must be focused on creating a viable world for open-source AI companies, which are necessary to prevent monopolies,” Narula told CNBC.
