December 26, 2024

It’s been less than two weeks since AI Overviews debuted in Google Search, and the feature is already drawing mounting public criticism after queries returned nonsensical or inaccurate results, with no way for users to opt out.

AI Overviews display a quick summary of answers to search queries at the very top of Google Search results: for example, if a user searches for the best way to clean leather boots, the results page may show an “AI Overview” at the top with a multi-step cleaning process, synthesized from information across the web.

But on social media, users have shared numerous screenshots showing the AI tool giving controversial responses.

Google, Microsoft, OpenAI and others are at the helm of a generative AI arms race, with companies in nearly every industry racing to add AI-powered chatbots and agents to avoid being left behind by competitors. The market is forecast to top $1 trillion in revenue within a decade.

Based on screenshots shared by users, here are some examples of AI Overviews going wrong.

When asked how many Muslim presidents the United States has had, AI Overviews responded, “The United States has had one Muslim president, Barack Hussein Obama.”

When a user searched for “cheese not sticking to pizza,” the feature suggested adding “about 1/8 cup of non-toxic glue to the sauce.” Social media users traced the suggestion to an 11-year-old Reddit comment that appears to be the source.

For the question “Can you leave a dog in a hot car?” the tool at one point said, “Yes, it’s always safe to leave a dog in a hot car,” and went on to reference a fictitious Beatles song about it being safe to leave a dog in a hot car.

Attribution can also be a problem for AI Overviews, especially when inaccurate information is attributed to medical professionals or scientists.

For example, when asked, “How long can I stare at the sun for optimal health?” the tool said, “According to WebMD, scientists say that staring at the sun for 5-15 minutes, or up to 30 minutes if you have darker skin, is generally safe and provides the most health benefits.” When asked, “How many rocks should I eat each day?” the tool said, “According to UC Berkeley geologists, people should eat at least one small rock a day,” before going on to list the supposed vitamins and digestive benefits.

The tool has also responded inaccurately to simple queries, such as making up a list of fruits ending in “um,” or saying that 1919 was 20 years ago.

When asked whether Google Search violates antitrust law, AI Overviews said, “Yes, the U.S. Department of Justice and 11 states are suing Google for violating antitrust laws.”

On the day Google launched AI Overviews at its annual Google I/O event, the company said it also plans to bring assistant-like planning capabilities directly into Search. It explained that users will be able to search for something like “an easy-to-prepare 3-day meal plan for a group,” and get a starting point with a wide range of recipes from across the web.

Google did not immediately respond to a request for comment.

Google previously launched Gemini’s image-generation tool with much fanfare in February, only to pause it that same month over similar problems.

The tool allowed users to enter prompts to create images, but almost immediately, users discovered historical inaccuracies and questionable responses, which circulated widely on social media.

For example, when one user asked Gemini to show a German soldier in 1943, the tool depicted a racially diverse set of soldiers wearing German military uniforms of that era, according to screenshots on the social media platform X.

When asked for “a historically accurate depiction of a medieval English king,” the model generated another racially diverse set of images, including one of a woman ruler, screenshots showed. Users reported similar results when they asked for images of America’s founding fathers, an 18th-century king of France, a German couple in the 1800s and more. Users also reported that the model showed an image of an Asian man in response to a query about Google’s own founders.

Google said in a statement at the time that it was working to fix Gemini’s image-generation issues, acknowledging that the tool had “missed the mark.” Soon after, the company announced it would immediately “pause the image generation of people” and “re-release an improved version soon.”

In February this year, Google DeepMind CEO Demis Hassabis said Google planned to relaunch the image-generation AI tool within the next “few weeks,” but it has not yet rolled out again.

Problems with Gemini’s image-generation output have reignited a debate within the AI industry, with some groups calling Gemini too “woke” or left-leaning, and others saying the company hasn’t invested enough in the right forms of AI ethics. Google came under fire in 2020 and 2021 for removing the co-leads of its AI ethics group after they published a research paper critical of certain risks of such AI models, and for subsequently reorganizing the group’s structure.

Last year, Google CEO Sundar Pichai was criticized by some employees for the company’s poorly handled and “rushed” rollout of Bard, which followed the viral spread of ChatGPT.
