December 26, 2024

The European Union’s landmark artificial intelligence law officially comes into force on Thursday, and it means big changes for U.S. tech giants.

The Artificial Intelligence Act aims to regulate the way companies develop, use and apply AI, and received final approval from EU member states, lawmakers and the European Commission, the EU’s executive arm, in May.

CNBC breaks down everything you need to know about the AI Act and how it will impact the world’s largest tech companies.

What is the Artificial Intelligence Act?

The Artificial Intelligence Act is a piece of EU legislation governing artificial intelligence. The law was first proposed by the European Commission in 2020 and aims to address the negative impacts of artificial intelligence.

The regulation sets out a comprehensive and harmonized regulatory framework for artificial intelligence across the EU.

It will primarily target large U.S. technology companies, which are currently the main architects and developers of the most advanced artificial intelligence systems.

However, many other businesses will also be subject to these rules, including even non-tech companies.

Tanguy Van Overstraeten, head of the technology, media and telecommunications practice at law firm Linklaters in Brussels, said the EU AI Act is “the first of its kind in the world.”

“This is likely to impact many businesses, particularly those developing AI systems and those that deploy or only use them in certain circumstances.”

The legislation takes a risk-based approach to regulating artificial intelligence, meaning that different applications of the technology are regulated differently depending on the level of risk they pose to society.

For example, the Artificial Intelligence Act introduces strict obligations for AI applications deemed “high risk.” These obligations include adequate risk assessment and mitigation systems, high-quality training datasets to minimize the risk of bias, routine logging of activity, and mandatory sharing of detailed model documentation with authorities so they can assess compliance.


Examples of high-risk AI systems include self-driving cars, medical devices, loan decision systems, education scoring, and remote biometric systems.

The law also bans any artificial intelligence applications deemed to have an “unacceptable” level of risk.

AI applications that pose unacceptable risks include “social scoring” systems that rank citizens based on the aggregation and analysis of data, predictive policing, and the use of emotion recognition technology in the workplace or school.

What does this mean for U.S. technology companies?

U.S. tech giants like Microsoft, Google, Amazon, Apple and Meta have been aggressively partnering with, and investing billions of dollars in, companies they believe can lead the way in artificial intelligence amid a global frenzy around the technology.

Given the huge computing infrastructure required to train and run artificial intelligence models, cloud platforms such as Microsoft Azure, Amazon Web Services and Google Cloud are also key to supporting artificial intelligence development.

In this regard, large technology companies will undoubtedly be among the hardest hit targets under the new rules.

“The impact of the AI Act reaches far beyond the EU. It applies to any organization with any operation or impact in the EU, which means that no matter where you are located, the AI Act may apply to you,” Charlie Thompson, senior vice president for Europe, the Middle East and Africa and Latin America at enterprise software company Appian, told CNBC via email.

Thompson added: “This will bring more scrutiny to the tech giants’ operations in the EU market and their use of EU citizens’ data.”

Meta has limited the availability of its AI models in Europe due to regulatory concerns — although the move is not necessarily due to the EU Artificial Intelligence Act.

The Facebook owner said earlier this month it would not make its LLaMa model available in the EU, citing uncertainty over its compliance with the bloc’s General Data Protection Regulation (GDPR).


The company was previously ordered to stop using Facebook and Instagram posts to train its models in the EU due to concerns about potential GDPR violations.

How does the law treat generative AI?

Generative AI is labeled as an example of “general purpose” AI in the EU Artificial Intelligence Act.

This label refers to tools that are capable of completing a wide range of tasks at a similar level to humans, if not better.

General-purpose AI models include, but are not limited to, OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude.

For these systems, the Artificial Intelligence Act imposes strict requirements, such as respecting EU copyright law, transparency in how models are trained, routine testing and adequate cybersecurity protections.

However, not all AI models are treated equally. AI developers say the EU needs to ensure that open source models – which are free to the public and can be used to build customized AI applications – are not subject to overly strict regulation.

Examples of open source models include Meta’s LLaMa, Stability AI’s Stable Diffusion, and Mistral’s 7B.

The EU does provide some exceptions for open source generative AI models.

But to qualify for the exemption, open source providers must disclose their parameters, including weights, model architecture, and model usage, and enable “model access, use, modification, and distribution.”

Under the Artificial Intelligence Act, open source models that pose “systemic” risks are not exempt.


“It is necessary to carefully evaluate when the rules are triggered and the role of the relevant stakeholders,” Van Overstraeten said.

What happens if a company breaks the rules?

Companies that violate the EU AI Act can be fined between €35 million (US$41 million) or 7% of global annual revenue, whichever is greater, and €7.5 million or 1.5% of global annual revenue.

The amount of the penalty will depend on the infringement and the size of the company being fined.

That’s higher than the fines stipulated under Europe’s strict digital privacy law, the GDPR. Companies that violate the GDPR face fines of up to €20 million or 4% of annual global turnover.
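To put those ceilings side by side, here is a minimal sketch in Python. The €10 billion revenue figure is a hypothetical assumption, and reading “whichever is greater” as a max() is an interpretation of the wording above, not legal guidance.

    # Illustrative sketch only: compares the AI Act's top fine ceiling
    # with the GDPR's for a hypothetical company. Not legal guidance.

    def ai_act_max_fine(global_annual_revenue: float) -> float:
        # Top tier under the AI Act: 35 million euros or 7% of global
        # annual revenue, whichever is greater.
        return max(35_000_000, 0.07 * global_annual_revenue)

    def gdpr_max_fine(global_annual_revenue: float) -> float:
        # GDPR ceiling: 20 million euros or 4% of global annual turnover,
        # whichever is greater.
        return max(20_000_000, 0.04 * global_annual_revenue)

    revenue = 10_000_000_000  # hypothetical: 10 billion euros in global annual revenue

    print(f"AI Act ceiling: {ai_act_max_fine(revenue):,.0f} euros")  # 700,000,000
    print(f"GDPR ceiling:   {gdpr_max_fine(revenue):,.0f} euros")    # 400,000,000

On those assumptions, the AI Act’s top ceiling works out to €700 million for a company whose GDPR exposure would cap at €400 million.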

Oversight of all AI models that fall within the scope of the Act, including general-purpose AI systems, will be the responsibility of the European AI Office, a regulatory body established by the European Commission in February 2024.

Jamil Jiva, global head of asset management at fintech company Linedata, told CNBC that the EU “understands that if you want regulation to have an impact, you need to impose huge fines on non-compliant companies.”


Jiva added that, just as the GDPR showed the EU could “exercise regulatory influence to enforce data privacy best practices” at a global level, with the AI Act the bloc is trying to replicate this, but for artificial intelligence.

It is worth noting, however, that although the AI Act has now entered into force, most of its provisions will not actually apply until at least 2026.

Obligations on general-purpose AI systems will not begin to apply until 12 months after the AI Act enters into force.

Currently commercially available generative artificial intelligence systems (such as OpenAI’s ChatGPT and Google’s Gemini) have also been granted a 36-month “transition period” to bring their systems into compliance.
