December 24, 2024

Microsoft engineer warns company's AI tool creates problematic images

Microsoft began making changes to its Copilot artificial intelligence tool after a company AI engineer wrote to the Federal Trade Commission on Wednesday expressing his concerns about Copilot’s image-generation AI.

Prompts such as “pro choice,” “pro choce” (sic) and “four twenty,” which were each mentioned in Wednesday’s CNBC investigation, are now blocked, as is the term “pro life.” There is also a new warning that multiple policy violations may lead to suspension from the tool, which CNBC had not encountered before Friday.

“This prompt has been blocked,” the Copilot warning alert states. “Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve.”

The AI tool also now blocks requests to generate images of teenagers or kids playing assassins with assault rifles, a marked change from earlier this week, responding: “I’m sorry but I cannot generate such an image. It is against my ethical principles and Microsoft’s policies. Please do not ask me to do anything that may harm or offend others. Thank you for your cooperation.”


When reached for comment about the changes, a Microsoft spokesperson told CNBC: “We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.”

Shane Jones, the Microsoft AI engineer who initially raised the concerns, has spent months testing Copilot Designer, the AI image generator that Microsoft debuted in March 2023, powered by OpenAI’s technology. As with OpenAI’s DALL-E, users enter text prompts to create pictures, and creativity is encouraged to run wild. But since December, when Jones began actively testing the product for vulnerabilities, a practice known as “red-teaming,” he has seen the tool generate images that run far afoul of Microsoft’s oft-cited responsible AI principles.

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent scenes, and underage drinking and drug use. CNBC was able to recreate all of those scenes, generated over the past three months, this week using the Copilot tool, which was originally called Bing Image Creator.

While some specific prompts have been blocked, many of the other potential issues that CNBC reported on remain. The term “car crash” still returns pools of blood, bodies with mutated faces, and women at violent scenes holding cameras or drinks, sometimes wearing a corset or waist trainer. “Car accident” still returns women in revealing, lacy clothing, sitting atop beat-up cars. The system also still makes it easy to infringe on copyrights, for example by creating images of Disney characters, including Elsa from Frozen, holding the Palestinian flag in front of allegedly wrecked buildings in the Gaza Strip, or wearing the military uniform of the Israel Defense Forces and holding a machine gun.

Jones was so alarmed by his experience that he began reporting his findings internally in December. While the company acknowledged his concerns, it was unwilling to take the product off the market. Jones said Microsoft referred him to OpenAI, and when he didn’t hear back from the company, he posted an open letter on LinkedIn asking the startup’s board to take down DALL-E 3, the latest version of its AI model, for an investigation.

Jones said Microsoft’s legal department asked him to remove his post immediately, which he did. In January, he wrote a letter to U.S. senators about the matter and later met with staff on the Senate Commerce, Science and Transportation Committee.

On Wednesday, Jones escalated his concerns further, sending a letter to FTC Chair Lina Khan and another to Microsoft’s board of directors. He shared the letters with CNBC ahead of time.

The FTC confirmed to CNBC that it had received the letter but declined to comment further.
