
Sam Altman and Greg Brockman issue response to Jan Leike's safety concerns


Sam Altman and Greg Brockman, both senior figures at OpenAI, have written a response to the AI safety concerns raised by Jan Leike, who resigned from the company this week. The pair said that OpenAI is committed to having a “very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities.”

Altman and Brockman said that OpenAI will keep carrying out safety research targeting different timescales and will continue collaborating with governments and other stakeholders to make sure nothing is missed when it comes to safety.

As a bit of background, Jan Leike co-led the Superalignment team with Ilya Sutskever, which was set up less than a year ago to find ways of controlling superintelligent AI. Both men left the firm this week, amid complaints that safety seemed to be taking a backseat in favour of new advancements.

OpenAI’s announcement is a bit long-winded and vague about the point it’s trying to get across. The final paragraph seems to hold the most clues; it reads:

“There's no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward. We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions.”

Essentially, it comes across as if they’re saying the best way to do safety work is on actual products as they’re being developed, rather than trying to anticipate some hypothetical super AI that could appear in the future.

The full statement from Altman and Brockman reads as follows:

We’re really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.

First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it. We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks.

Second, we have been putting in place the foundations needed for safe deployment of increasingly capable systems. Figuring out how to make a new technology safe for the first time isn't easy. For example, our teams did a great deal of work to bring GPT-4 to the world in a safe way, and since then have continuously improved model behavior and abuse monitoring in response to lessons learned from deployment.

Third, the future is going to be harder than the past. We need to keep elevating our safety work to match the stakes of each new model. We adopted our Preparedness Framework last year to help systematize how we do this.

This seems like as good of a time as any to talk about how we view the future.

As models continue to become much more capable, we expect they'll start being integrated with the world more deeply. Users will increasingly interact with systems — composed of many multimodal models plus tools — which can take actions on their behalf, rather than talking to a single model with just text inputs and outputs.

We think such systems will be incredibly beneficial and helpful to people, and it'll be possible to deliver them safely, but it's going to take an enormous amount of foundational work. This includes thoughtfulness around what they're connected to as they train, solutions to hard problems such as scalable oversight, and other new kinds of safety work. As we build in this direction, we're not sure yet when we’ll reach our safety bar for releases, and it’s ok if that pushes out release timelines.

We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities. We will keep doing safety research targeting different timescales. We are also continuing to collaborate with governments and many stakeholders on safety.

There's no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward. We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions.

— Sam and Greg

Let us know in the comments what you think about the situation.

Source: X | Image via Depositphotos.com
