OpenAI is disbanding its “AGI Readiness” team, which advised the company on its own ability to handle increasingly powerful artificial intelligence and the world’s readiness to govern the technology, its leader said.
On Wednesday, Miles Brundage, senior advisor for AGI Readiness, announced his departure from the company in a post on Substack. He wrote that his main reasons were that the opportunity cost had become too high, that he thought his research would be more impactful externally, that he wanted to be less biased and that he had accomplished what he set out to do at OpenAI.
Brundage also wrote that, when it comes to OpenAI’s and the world’s readiness for AGI, “Neither OpenAI nor any other frontier lab is ready, and the world is also not ready.” Brundage plans to start his own nonprofit, or join an existing one, to focus on AI policy research and advocacy. He added that AI is unlikely to be “as safe and beneficial as possible without a concerted effort to make it so.”
Former AGI Readiness team members will be reassigned to other teams, the post said.
“We fully support Miles’ decision to pursue his policy research outside industry and are deeply grateful for his contributions,” an OpenAI spokesperson told CNBC. “His plan to go all-in on independent AI policy research gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We’re confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government.”
In May, OpenAI disbanded its Superalignment team, which focused on the long-term risks of artificial intelligence, just one year after announcing the group, a person familiar with the matter confirmed to CNBC at the time.
News of the AGI Readiness team’s disbandment emerged after three executives announced their resignations on the same day: chief technology officer Mira Murati, research chief Bob McGrew and research vice president Barret Zoph. It also comes amid reports that OpenAI’s board may be planning to restructure the company into a for-profit business.
In early October, OpenAI closed a buzzy funding round that valued the company at $157 billion, including the $6.6 billion it raised from an array of investment firms and big tech companies. It also received a $4 billion revolving line of credit, bringing its total liquidity to more than $10 billion. The company expects to lose about $5 billion on revenue of $3.7 billion this year, CNBC confirmed with a source familiar with the matter last month.
In September, OpenAI announced that its Safety and Security Committee, which the company introduced in May to handle contention over its safety processes, would become an independent board oversight committee. The committee recently concluded its 90-day review evaluating OpenAI’s processes and safeguards before making recommendations to the board, and the findings were released in a public blog post.
The news of executive departures and board changes follows a summer of mounting safety concerns and controversies surrounding OpenAI, which, along with Google, Microsoft, Meta and other companies, is leading a generative AI arms race in a market predicted to top $1 trillion in revenue within a decade, as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.
In July, OpenAI reassigned Aleksander Madry, one of its top safety executives, to a role focused instead on AI reasoning, people familiar with the matter confirmed to CNBC at the time.
Madry had been OpenAI’s head of preparedness, a team “tasked with tracking, evaluating, forecasting and helping protect against catastrophic risks related to frontier AI models,” according to a bio for Madry on a Princeton University AI initiative website. OpenAI told CNBC at the time that Madry would still work on core AI safety work in his new role.
The decision to reassign Madry came around the same time that Democratic senators sent a letter to OpenAI CEO Sam Altman concerning “questions about how OpenAI is addressing emerging safety concerns.”
The letter, which was viewed by CNBC, also stated: “We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats.”
Microsoft relinquished its observer seat on OpenAI’s board in July, saying in a letter viewed by CNBC that it could now step aside because it was satisfied with the construction of the startup’s board, which had been revamped since the uprising that led to Altman’s brief ouster.
But in June, a group of current and former OpenAI employees published an open letter describing their concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote at the time.
Days after the letter was published, a source familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies’ conduct.
FTC Chair Lina Khan described her agency’s action as “a market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”
Current and former employees wrote in the June letter that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they have put in place and the risk levels the technology poses for different types of harm.
“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
OpenAI’s Superalignment team, announced last year and disbanded in May, had focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.
The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
Altman said on X at the time that he was sad to see Leike leave and that OpenAI had more work to do. Soon afterward, co-founder Greg Brockman posted a statement on X attributed to both Brockman and Altman, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”
“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X at the time. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote at the time. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”
Leike added that OpenAI must become a “safety-first AGI company.”
“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote on X. “But over the past years, safety culture and processes have taken a backseat to shiny products.”