OpenAI recently announced its ambitious plan to develop a new artificial intelligence model, which the company believes will bring it closer to creating systems that surpass human intelligence. This development was shared in a press release on Tuesday, highlighting not only the technological advancement but also the formation of a new Safety and Security Committee to oversee critical safety and security decisions. This committee, which includes CEO Sam Altman alongside other board members, marks a strategic move to bolster the governance of AI safety standards.
> ❗EXCLUSIVE: "We learned about ChatGPT on Twitter."
>
> What REALLY happened at OpenAI? Former board member Helen Toner breaks her silence with shocking new details about Sam Altman's firing. Hear the exclusive, untold story on The TED AI Show.
>
> Here's just a sneak peek: pic.twitter.com/7hXHcZTP9e
>
> — Bilawal Sidhu (@bilawalsidhu) May 28, 2024
The creation of this committee comes at a critical time for OpenAI, which has experienced significant internal upheaval. A team dedicated to researching existential risks of AI, led by Ilya Sutskever and Jan Leike, was recently disbanded. This move has sparked controversy, with former board members expressing concerns in a recent publication about the implications of these changes for the company’s self-governance. They highlighted a growing disconnect, particularly with the return of Altman to the helm, which some believe has fostered a less transparent environment. The press release reflects an understanding of these concerns, stating, “We welcome a robust debate at this important moment.”
Furthermore, several safety-focused employees have left the company, voicing the need for greater emphasis on the potential dangers associated with advanced AI systems. In response, the newly formed Safety and Security Committee’s initial task is to reassess and strengthen OpenAI’s processes and safeguards. This 90-day review is aimed at addressing these internal and external concerns, although the inclusion of Altman in the committee has been met with skepticism from some critics, who question his leadership style and the overall culture within the company.
In addition to Altman, the Safety and Security Committee includes Bret Taylor as Chair, board members Adam D’Angelo and Nicole Seligman, and new chief scientist Jakub Pachocki. Other notable figures on the committee are department heads Matt Knight, John Schulman, Lilian Weng, and Aleksander Madry. The committee plans to engage with outside experts and provide recommendations on safety protocols to the broader OpenAI board.
The announcement coincided with the release of GPT-4o (“o” for “omni”), a new AI model capable of real-time multimodal interactions, which has significantly impressed the AI community. This model represents a leap forward in how humans interact with AI technologies, underlining OpenAI’s commitment to leading in this rapidly evolving field. Amidst this progress, OpenAI faces competitive pressure, particularly from Google, as both vie for a potential partnership with Apple expected to be announced at the upcoming WWDC in June.
Despite these advancements, the new safety team, primarily composed of members appointed after Altman’s return and new department heads who have risen in a reshuffled leadership structure, faces criticism. The committee’s makeup, largely insiders, raises questions about its effectiveness in addressing the broader safety concerns that have become a growing point of contention within and outside of OpenAI.
Major Points
- OpenAI announced a new AI model aimed at surpassing human intelligence and formed a Safety and Security Committee that includes CEO Sam Altman among its board-level members.
- The company has faced internal challenges, including the disbandment of a team researching AI risks and the departure of safety-focused employees.
- Critics, including former board members, have expressed concerns about transparency and leadership under Altman, citing a possible erosion of governance standards.
- The Safety and Security Committee’s immediate goal is to review and enhance OpenAI’s safety protocols over the next 90 days, amidst skepticism about its effectiveness.
- Concurrently, OpenAI released GPT-4o, a groundbreaking AI model, and is competing with Google for a significant partnership with Apple.
Full Press Release by OpenAI
Today, the OpenAI Board formed a Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.
OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.
A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.
OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also be on the committee.
Additionally, OpenAI will retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials Rob Joyce, who advises OpenAI on security, and John Carlin.
Fallon Jacobson – Reprinted with permission of Whatfinger News