CEO Sam Altman also tweeted to reaffirm the company's commitment to AI safety
Responding to questions from U.S. lawmakers, OpenAI said it is committed to ensuring its powerful AI tools do not cause harm, and that employees have ways to raise concerns about safety practices.
The startup sought to reassure lawmakers of its commitment to safety after five senators, including Sen. Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI's policies in a letter addressed to CEO Sam Altman.
“Our mission is to ensure that artificial intelligence benefits all humanity, and we are committed to implementing strict safety protocols at every stage of our process,” Chief Strategy Officer Jason Kwon said in a letter to lawmakers on Wednesday.
Specifically, OpenAI said it will continue to deliver on its commitment to dedicate 20% of its computing resources to safety-related research over multiple years.
The company also promised in its letter not to enforce non-disparagement agreements against current and former employees, except in specific cases of a mutual non-disparagement agreement. OpenAI's past restrictions on departing employees had come under scrutiny for being unusually restrictive; the company has since said it changed its policy.
Altman later elaborated on his strategy on social media.
a few quick updates about safety at openai:
As we said last July, we aim to allocate at least 20% of computing resources to safety efforts across the company.
Our team worked with the US AI Safety Institute on an agreement where we would ensure…
— Sam Altman (@sama) August 1, 2024
“Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next base model so we can work together to advance the science of AI assessments,” he wrote on X.
Kwon also cited in his letter the recent creation of a safety and security committee, which is currently conducting a review of OpenAI's processes and policies.
In recent months, OpenAI has faced a series of controversies over its commitment to safety and over whether employees can speak out on the subject. Several key members of its safety-related teams resigned, including co-founder and chief scientist Ilya Sutskever and Jan Leike, who led the company's team dedicated to assessing long-term safety risks. Leike publicly voiced concerns that the company was prioritizing product development over safety.
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)