ChatGPT developer OpenAI announced Tuesday it is establishing a safety committee as it trains its next artificial intelligence model, the successor to the GPT-4 system that powers its chatbot.
The committee will include OpenAI CEO Sam Altman, board members and other executives. The company said the body will spend the next 90 days evaluating and strengthening OpenAI's processes and safeguards for advanced AI development, guarding against potential misuse and exploitation.
The committee will make recommendations to the full board on "critical safety and security decisions for OpenAI projects and operations," the company said in a statement.
The announcement comes weeks after key executives departed the company.
Researcher Jan Leike, who resigned from OpenAI earlier this month, said the company's "safety culture and processes have taken a back seat to shiny products."
Ilya Sutskever, OpenAI co-founder and chief scientist, also resigned. "I'm confident that OpenAI will build AGI [artificial general intelligence] that is both safe and beneficial," he said on the social media site X, formerly Twitter.
Leike and Sutskever jointly led the company's "superalignment" team, which was dedicated to reducing long-term AI risks and was disbanded after their departures.
OpenAI has faced backlash over allegations that a voice for ChatGPT copied that of actress Scarlett Johansson. The company denied trying to impersonate Johansson.
The new committee will publicly release its recommendations for the company following its meeting with the full board in the fall.
"We welcome a robust debate at this important juncture," OpenAI's statement said.
Some information for this report was provided by The Associated Press and Agence France-Presse.
The Voice of America provides news and information in more than 40 languages to an estimated weekly audience of over 326 million people. Stories with the VOA News byline are the work of multiple VOA journalists and may contain information from wire service reports.