Salesforce UK and Ireland CEO Zahra Bahrololoumi speaks at the company’s annual Dreamforce conference on September 17, 2024 in San Francisco, California.
David Paul Morris | Bloomberg | Getty Images
LONDON: Salesforce's UK chief executive wants the Labour government to regulate artificial intelligence, but she said it is important that policymakers don't treat all technology companies developing AI systems the same.
Salesforce UK and Ireland chief executive Zahra Bahrololoumi told CNBC in London that the US enterprise software giant takes all legislation “seriously”. However, she added that any UK proposals to regulate artificial intelligence should be “proportionate and tailored”.
Bahrololoumi pointed out that there are differences between companies developing consumer-oriented artificial intelligence tools, such as OpenAI, and companies such as Salesforce developing enterprise artificial intelligence systems. She said consumer-facing AI systems, such as ChatGPT, face fewer restrictions than enterprise-level products, which must meet higher privacy standards and adhere to corporate guidelines.
“What we are looking for is legislation that is targeted, proportionate and tailored,” Bahrololoumi told CNBC on Wednesday.
“There are definitely differences between those organizations that operate with consumer-facing technologies and those that operate with enterprise technologies. We each play a different role in the ecosystem, [but] we are a B2B organization,” she said.
A spokesperson for the UK’s Department for Science, Innovation and Technology (DSIT) said the planned AI rules would be “highly targeted at the small number of companies developing the most powerful AI models” rather than “imposing blanket rules on the use of AI”.
This suggests that these rules may not apply to companies like Salesforce because they don’t create their own underlying models like OpenAI does.
A DSIT spokesperson added: “We recognize the power of AI in driving growth and increasing productivity and are absolutely committed to supporting the growth of the AI industry, particularly as we accelerate the adoption of this technology across the economy.”
Data security
Salesforce has been heavily touting the ethics and safety considerations embedded in its Agentforce AI technology platform, which allows enterprise organizations to spin up their own AI “agents” — essentially, autonomous digital workers that carry out tasks for different functions, like sales, service or marketing.
For example, a feature called “zero retention” means that no customer data can be stored outside of Salesforce. As a result, generative AI prompts and outputs are not stored in the underlying large language models, the kind of programs that power today’s generative AI chatbots such as ChatGPT.
For consumer AI chatbots like ChatGPT, Anthropic’s Claude or Meta AI, it’s not clear what data is used to train them or where that data is stored, Bahrololoumi said.
“In order to train these models, you need a lot of data,” she told CNBC. “So with things like ChatGPT and these consumer models, you don’t know what it’s using.”
Bahrololoumi said that even Microsoft’s Copilot, which is sold to enterprise customers, faces heightened risks, citing a Gartner report criticizing the tech giant’s AI personal assistant over the security risks it poses to organizations.
OpenAI and Microsoft did not immediately provide comment when contacted by CNBC.
Artificial Intelligence concerns ‘apply at every level’
Bola Rotibi, chief of enterprise research at analyst firm CCS Insight, told CNBC that while enterprise-focused AI vendors “are more aware of the enterprise-level requirements around security and data privacy,” it would be a mistake to assume regulators will not scrutinize them.
“All of the concerns around consent, privacy, transparency, data sovereignty, etc. apply at every level, whether it’s consumers or businesses, as these details are governed by regulations like GDPR,” Rotibi told CNBC via email. GDPR, the General Data Protection Regulation, became law in the UK in 2018.
However, Rotibi said regulators may feel “more confident” in the AI compliance measures taken by enterprise application vendors such as Salesforce “because they understand what it means to provide enterprise-grade solutions and management support.”
She added: “There may be a more nuanced review process for AI services provided by widely deployed enterprise solution providers such as Salesforce.”
Bahrololoumi spoke to CNBC at Salesforce’s Agentforce World Tour in London, an event designed to promote the use of the company’s new “agent” artificial intelligence technology by partners and customers.
Her comments came after the Labour government, led by Prime Minister Keir Starmer, did not include an artificial intelligence bill in the King’s Speech, which is written by the government to outline its priorities for the coming months. The government said at the time that it planned to introduce “appropriate legislation” for artificial intelligence, but gave no further details.