December 27, 2024

Liz Reid, vice president of search at Google, speaks at an event in New Delhi on December 19, 2022.

Sajjad Hussain | AFP | Getty Images

Google’s new search chief told an all-hands meeting last week that as artificial intelligence becomes increasingly integrated into web search, mistakes will happen, but the company should continue to roll out products and let employees and users help identify problems.

“It’s important that we don’t hold back features just because problems occasionally arise, but that when we find problems, we fix them,” said Liz Reid, who was promoted to vice president of search in March, at the company-wide meeting, according to audio obtained by CNBC.

“I don’t think we should take away from the fact that we should take risks,” Reid said. “We should be thoughtful. We should act with urgency. When we find new problems, we should do widespread testing, but we won’t always find all the issues; it just means that we respond.”

Reid’s comments come at a critical time for Google, which is trying to keep pace with OpenAI and Microsoft in generative artificial intelligence. Since OpenAI launched ChatGPT in late 2022, the market for chatbots and related AI tools has exploded, giving consumers a new way to find information online beyond traditional search.

Google’s rush to launch new products and features has led to a series of embarrassments. Last month, the company released AI Overviews to a limited audience; CEO Sundar Pichai called it the biggest change to search in 25 years. The feature shows users a summary of answers to their queries at the very top of Google search results, and the company plans to roll it out globally.

Although Google had been working on AI Overviews for more than a year, users quickly noticed that queries were returning answers that didn’t make sense or were inaccurate, and they had no way to opt out. Widely circulated results included the false claim that Barack Obama was the first Muslim president of the United States, a suggestion that users try adding glue to pizza, and a recommendation to eat at least one rock a day.

Google scrambled to fix the errors. Reid, a 21-year company veteran, published a blog post on May 30 that mocked “troll-y” content posted by some users but acknowledged that the company had made more than a dozen technical improvements, including limiting user-generated content and health advice.

“You’ve probably heard the stories about putting glue on pizza and eating rocks,” Reid told employees at the all-hands meeting. Reid was introduced on stage by Prabhakar Raghavan, who oversees Google’s knowledge and information organization.

A Google spokesperson said in an emailed statement that the “vast majority” of results were accurate and that the company found policy violations in “less than one in every 7 million unique queries” that showed an AI Overview.

“As we said, we are continuing to improve when and how we display AI Overviews so that they are as useful as possible, including some technical updates to improve the quality of responses,” the spokesperson said.


The AI mistakes fit a pattern.

Last year, shortly before launching its artificial intelligence chatbot Bard (now called Gemini), Google executives wrestled with the challenge posed by the viral rise of ChatGPT. Jeff Dean, Google’s chief scientist and longtime head of artificial intelligence, said in December 2022 that the company faced greater “reputational risk” and needed to be “more conservative than smaller startups” because chatbots still had many accuracy issues.

But Google rolled out its chatbot anyway, and shareholders and employees criticized the launch as “botched,” with some saying it was hastily organized to match the timing of Microsoft’s announcement.

A year later, Google launched its AI-powered Gemini image generation tool, but had to suspend the product after users discovered historical inaccuracies and questionable responses that circulated widely on social media. Pichai sent a company-wide email at the time calling the errors “unacceptable” and saying they exhibited “bias.”

Red teaming

Reid’s stance signals that Google has become more willing to accept mistakes.

“At the scale of the web, with billions of queries happening every day, there are bound to be some weird situations and errors,” she wrote in a recent blog post.

Reid said some user queries submitted to AI Overviews were intentionally adversarial, and that many of the worst examples circulating were fake.

“People have actually created templates for how to get social engagement by making fake AI Overviews, so that’s another thing we’re looking at,” Reid said.

She said the company does “a lot of testing ahead of time,” as well as “red teaming,” which includes efforts to detect technical vulnerabilities before outsiders discover them.

“No matter how much red teaming we do, we need to do more,” Reid said.

Reid said that by launching the AI products, the teams were able to identify problems such as “data gaps,” where the web does not have enough data to correctly answer a specific query. They were also able to identify comments from specific web pages, detect sarcasm and correct spelling.

“We not only need to understand the quality of the website or the page, we have to understand every piece of content on the page,” Reid said of the challenges the company faces.

Reid thanked employees from the teams that worked on the fixes, emphasized the importance of employee feedback and directed staff to report errors through an internal link.

“Anytime you see problems, they can be small, they can be big,” she said. “Please file them.”
