December 26, 2024

OpenAI CEO Sam Altman speaks at the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.

Jason Redmond | AFP | Getty Images

A group of current and former OpenAI employees published an open letter on Tuesday expressing concerns about the rapid growth of the artificial intelligence industry despite a lack of oversight and whistleblower protections for those willing to speak out.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote.

OpenAI, Google, Microsoft, Meta and other companies are at the helm of a generative artificial intelligence arms race, a market predicted to top $1 trillion in revenue within a decade, as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.

Current and former employees wrote that AI companies have “a wealth of non-public information” about what their technology can do, the extent of safety measures they take and the level of risk the technology poses for different types of harm.

“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

The letter also details concerns from current and former employees about inadequate whistleblower protections in the artificial intelligence industry, noting that without effective government oversight, employees are in a relatively unique position to hold companies accountable.

“Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the signatories wrote. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

The letter asks AI companies to commit to not entering into or enforcing non-disparagement agreements; to create anonymous processes for current and former employees to voice concerns to a company’s board, regulators and others; to support a culture of open criticism; and to not retaliate against those who report their concerns publicly.

Four anonymous OpenAI employees and seven former employees, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter. Signatories also included Ramana Kumar, who formerly worked at Google DeepMind, and Neel Nanda, who currently works at Google DeepMind and formerly worked at Anthropic. Three computer scientists known for advancing the field of artificial intelligence also endorsed the letter: Geoffrey Hinton, Yoshua Bengio and Stuart Russell.

“Given the importance of this technology, we agree that rigorous debate is crucial, and we will continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson told CNBC, adding that the company has an anonymous integrity hotline.

Microsoft declined to comment.

OpenAI controversy grows

Ilya Sutskever, a Russian-Israeli-Canadian computer scientist, co-founder and chief scientist of OpenAI, gave a speech at Tel Aviv University in Tel Aviv on June 5, 2023.

Jack Guez | AFP | Getty Images

CEO Sam Altman said on X that he was sad to see Jan Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement from himself and Altman on X.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. At times we were struggling for [computing resources], and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building machines that are smarter than humans is inherently dangerous work,” he wrote. “OpenAI has a huge responsibility on behalf of all of humanity. But over the past few years, safety culture and processes have taken a back seat to shiny products.”

The high-profile departure comes months after OpenAI endured a leadership crisis involving Altman.

OpenAI’s board of directors ousted Altman in November, saying in a statement that Altman “has not been consistently candid in his communications with the board.”

The issue seemed to grow more complex each day, with The Wall Street Journal and other media outlets reporting that Sutskever was focused on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to push ahead with delivering new technology.

Altman’s ouster prompted resignations and threats of resignation, an open letter signed by nearly all of OpenAI’s employees, and an outcry from investors, including Microsoft. Within a week, Altman was back at the company, and the board members who had voted to oust him, Helen Toner, Tasha McCauley and Ilya Sutskever, were off the board. Sutskever remained on staff at the time but no longer served as a board member. Adam D’Angelo, who had also voted to oust Altman, remained on the board.

American actress Scarlett Johansson poses during a photocall for the film “Asteroid City” at the 2023 Cannes Film Festival in Cannes, France, on May 24, 2023.

Mondadori Portfolio | Mondadori Portfolio | Getty Images

Meanwhile, last month, OpenAI launched a new artificial intelligence model and a desktop version of ChatGPT, along with an updated user interface, in the company’s latest effort to expand the use of its popular chatbot. A week after OpenAI debuted its range of audio voices, the company announced it would pull one of the viral chatbot’s voices, named “Sky.”

“Sky” drew controversy for resembling the voice of actress Scarlett Johansson in “Her,” a film about artificial intelligence. The Hollywood star has alleged that OpenAI ripped off her voice even though she declined to let the company use it.
