January 7, 2025

On November 19, 2024, US President-elect Donald Trump and Elon Musk watched the sixth test flight of the SpaceX Starship rocket in Brownsville, Texas.

Brandon Bell | Reuters

The U.S. political landscape is set to shift in 2025 — and those changes will have significant implications for the regulation of artificial intelligence.

President-elect Donald Trump will be inaugurated on January 20. Advisors from the business community anticipate that his administration will shape the trajectory of emerging technologies such as artificial intelligence and cryptocurrencies.

Across the Atlantic, a tale of two jurisdictions emerged, with the UK and EU at odds over regulatory thinking. While the EU has taken a tougher approach to the Silicon Valley giants behind the most powerful artificial intelligence systems, the UK has taken a softer approach.

2025 could see a major overhaul of the global AI regulatory landscape. CNBC explores some of the key developments to watch — from the evolution of the EU's landmark AI Act to what the Trump administration could mean for the U.S.

Musk’s U.S. policy influence

On December 5, 2024, Musk walked through Capitol Hill after a meeting with incoming Senate Republican leader John Thune (R-SD) in Washington.

Benoit Tessier | Reuters

Trump has not confirmed his plans in terms of possible presidential directives or executive orders. But Matt Calkins, CEO of enterprise software firm Appian, believes Musk is likely to offer suggestions to ensure that the development of artificial intelligence does not endanger civilization — a risk he has repeatedly warned of in the past.

“There’s no question that he’s unwilling to allow artificial intelligence to have catastrophic consequences for humanity — he’s certainly worried about it and has been talking about it long before he took on policy positions,” Calkins told CNBC.

Currently, there is no comprehensive federal artificial intelligence legislation in the United States.

EU Artificial Intelligence Act

The EU is so far the only jurisdiction in the world to pass comprehensive rules for artificial intelligence through its AI Act.

Jacques Silva | Noor Photos | Getty Images

To date, the EU is the only jurisdiction in the world to pass comprehensive statutory rules for the artificial intelligence industry. Earlier this year, the EU's Artificial Intelligence Act, the first regulatory framework of its kind for AI, officially came into force.

The law has not fully taken effect yet, but it has already created tensions among large U.S. tech companies, which worry that some aspects of the regulations are too strict and could stifle innovation.

In December last year, the EU AI Office — a newly established body that oversees models under the AI Act — released the second draft of its code of practice for general-purpose AI (GPAI) models, which covers systems such as OpenAI's GPT series of large language models, or LLMs.

The second draft includes exemptions for certain open source AI model providers. Such models are often made available to the public to allow developers to build their own custom versions. It also requires developers of “systemic” GPAI models to undergo rigorous risk assessments.

The Computer and Communications Industry Association — whose members include Amazon, Google and Meta — warned that the draft "contains measures that go well beyond the agreed scope of the act, such as far-reaching copyright measures."

The AI Office had no immediate comment when contacted by CNBC.

It is worth noting that the EU Artificial Intelligence Act is far from full implementation.

As Shelley McKinley, chief legal officer of popular code repository platform GitHub, told CNBC in November, “The next phase of work has begun, which may mean we have more challenges now than we have behind us.”

For example, the first provisions of the act become enforceable in February, covering "high-risk" AI applications such as remote biometric identification, loan decisions and education scoring. A third draft of the code for GPAI models is scheduled to be released the same month.

European technology leaders are worried that the EU’s punitive measures against US technology companies may trigger a reaction from Trump and lead the EU to soften its approach.

Take antitrust enforcement as an example. Andy Yen, CEO of Swiss VPN company Proton, said the EU has moved aggressively to curb the dominance of U.S. technology giants, but that this could provoke a negative reaction from Trump.

"(Trump's) view is that he probably wants to regulate his technology companies himself," Yen told CNBC in an interview at the Web Summit technology conference in Lisbon, Portugal, in November. "He doesn't want Europe to get involved."

UK Copyright Review

British Prime Minister Keir Starmer was interviewed by the media while attending the 79th Session of the United Nations General Assembly at the United Nations Headquarters in New York, USA on September 25, 2024.

Lionel | Reuters

One of the countries to watch is the United Kingdom. The UK has so far avoided introducing statutory obligations for makers of artificial intelligence models, out of concern that new legislation could prove too restrictive.

However, Keir Starmer’s government has said it plans to introduce artificial intelligence legislation, but details are currently unclear. There are widespread expectations that the UK will take a more principles-based approach to AI regulation, rather than the EU’s risk-based framework.

Last month, the government gave its first major signal of regulatory intent, announcing a consultation on measures to govern the use of copyright-protected content in training artificial intelligence models. Copyright is an especially big issue for generative AI and LLMs.

Most LLMs are trained on public data from the open web, which often includes artwork and other copyrighted material. Artists and publishers such as The New York Times claim that these systems unfairly scrape their valuable content, without consent, to generate their outputs.

To address this issue, the UK government is considering making an exception to copyright law for AI model training, while still allowing rights holders to opt out of having their works used for training purposes.

Appian's Calkins said the UK could eventually become a "global leader" on the issue of copyright as it relates to AI models, adding that the country is "not subject to an overwhelming lobbying blitz by domestic AI leaders" the way the U.S. is.

Sino-U.S. relations may become a point of tension

U.S. President Donald Trump, right, and Chinese President Xi Jinping walk past the People’s Liberation Army during a welcome ceremony outside the Great Hall of the People in Beijing, China, Thursday, Nov. 9, 2017.

Shen Qilai | Bloomberg | Getty Images

Finally, geopolitical tensions between the United States and China are likely to escalate under Trump as governments around the world seek to regulate rapidly growing artificial intelligence systems.

During his first term as president, Trump implemented a number of hawkish policy measures against China, including the decision to place Huawei on a trade blacklist that restricted it from doing business with U.S. technology suppliers. He also launched a campaign to ban TikTok, owned by Chinese company ByteDance, in the U.S. — though he has since softened his stance on TikTok.

China is racing to beat the United States for dominance in artificial intelligence. At the same time, the United States has taken steps to restrict China’s access to key technologies, primarily chips designed by Nvidia, that are needed to train more advanced artificial intelligence models. China responded by trying to build its own local chip industry.

Technology experts worry that geopolitical differences between the United States and China over artificial intelligence could lead to other risks, such as the possibility that one of the two countries could develop artificial intelligence that is smarter than humans.

Max Tegmark, founder of the nonprofit Future of Life Institute, believes the United States and China may one day create a form of artificial intelligence that can improve itself and design new systems without human oversight — which could force both governments to each draw up their own rules for AI safety.

"My optimistic path forward is for the United States and China to unilaterally impose national safety standards to prevent their own companies from doing harm and building uncontrollable artificial general intelligence — not to appease the rival superpower, but simply to protect themselves," Tegmark told CNBC in November.

Governments are already trying to work together on regulations and frameworks around artificial intelligence. In 2023, the UK hosted the global AI Safety Summit, attended by both the U.S. and Chinese governments, to discuss potential guardrails for the technology.

—CNBC’s Arjun Kharpal contributed to this report
