UK government's AI Safety Institute set to expand to San Francisco

Michelle Donelan at AI Safety Summit
Image via AISI

The British government has announced that its AI Safety Institute (AISI) will open its first overseas office in San Francisco this summer, broadening its technical expertise and raising its profile as an AI safety body. Being based in San Francisco will put it in closer contact with the companies actively developing the latest AI technologies.

When the new office opens, its first hire is expected to be a Research Director, who will oversee the initial team of technical staff. The office will complement the London headquarters, which already hosts a team of more than 30 technical staff.

Commenting on the news, Secretary of State for Science and Technology Michelle Donelan said:

“This expansion represents British leadership in action. It is a pivotal moment in the UK’s ability to study both the risks and potential from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on safety.”

Since its founding last year, the AISI has already conducted a study examining how effective AI safeguards are in practice. The highlights of that study were as follows:

  • Several models completed basic cyber security challenges but struggled with more advanced ones.
  • Several models demonstrated knowledge of chemistry and biology comparable to PhD level.
  • All tested models remain highly vulnerable to basic “jailbreaks”, and some will produce harmful outputs even without dedicated attempts to circumvent safeguards.
  • Tested models were unable to complete more complex, time-consuming tasks without human oversight.

The head of the AISI said that AI safety evaluation is still a young field and that these results represent only a small portion of the evaluation approach the institute is working towards. It plans to continue developing its evaluations, with a focus on risks linked to national security.

With all the drama at OpenAI, it is reassuring that governments are working to ensure AI models meet certain safety requirements. At the same time, they need to avoid regulating these new technologies so heavily that their development is hobbled and they become useless.

Source: Gov UK
