Elon Musk has voiced concern that companies such as Google, OpenAI, and Meta are not "maximally truth seeking," arguing that their AI models pander to political correctness and that the safest AI is one that pursues the truth even when the truth is unpopular. See the clip below.
“I have a concern with companies like Google, Gemini, OpenAI & Meta that they are not maximally truth seeking. Their A.I. are pandering to political correctness and are being trained to lie.
The safest thing for AI is to be maximally truth seeking even if the truth is unpopular” pic.twitter.com/h9aRzu0ArE
— DogeDesigner (@cb_doge) May 23, 2024
OpenAI has announced the formation of a new safety committee to guide the company on critical safety and security decisions, the company said in a blog post on Tuesday. The panel includes CEO Sam Altman and board members Adam D’Angelo, Nicole Seligman, and Bret Taylor, who will chair the group.
The committee’s first task will be to evaluate and further develop OpenAI’s existing safety processes and safeguards over a period of 90 days. At the end of that period, the committee will submit its recommendations to the full board, and once the board has reviewed them, OpenAI will publicly share an update on the recommendations it adopts.
This move reflects OpenAI’s heightened focus on AI safety following the departure of Jan Leike, who co-led the company’s superalignment team. Leike resigned earlier in May, criticizing the company for prioritizing product development over safety.
In addition to forming the committee, OpenAI revealed that it has begun training its latest AI model, which it describes as its “next frontier model.” The company anticipates that the resulting systems will bring its capabilities to the next level.
ASI Inevitable – Token Merger Set
Save the date – June 13th 2024
You are all invited to join @SingularityNET @Fetch_ai and @oceanprotocol in becoming the founding members of the Artificial Superintelligence Alliance
AI must be decentralized, open and beneficial to all… pic.twitter.com/0i7zudmP4c
— Artificial Superintelligence Alliance (@ASI_Alliance) May 29, 2024
OpenAI expressed pride in building and releasing models that lead the industry in both capability and safety, and emphasized that it welcomes meaningful discussion of these critical issues at this significant juncture.
Beyond its board members, the newly established committee will include a diverse group of policy and technical experts, and OpenAI plans to retain and consult former cybersecurity officials to strengthen the committee’s expertise on AI safety and security. The approach underscores the company’s stated commitment to advancing AI technology responsibly and securely.
xAI is just a ten-month-old company, and its valuation is already $24 billion.
I wouldn’t be surprised if it surpasses OpenAI’s valuation in the next few months. pic.twitter.com/Ch3hIwk3ST
— DogeDesigner (@cb_doge) May 29, 2024
Major Points
- OpenAI has established a new safety committee whose members are CEO Sam Altman and board members Adam D’Angelo, Nicole Seligman, and Bret Taylor, who chairs the group.
- The committee’s first task is to review and enhance OpenAI’s safety processes over the next 90 days, with findings to be shared publicly.
- This development follows criticism from former researcher Jan Leike, who resigned saying that safety concerns had been overshadowed by product development.
- OpenAI is also training a new “next frontier model” expected to significantly advance its capabilities.
- The panel includes diverse experts from policy and technical fields and will consult with former cybersecurity officials to bolster safety measures.
RM Tomi – Reprinted with permission of Whatfinger News