UK drops ‘safety’ from its AI body, now called AI Security Institute, inks MOU with Anthropic

The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it’s pivoting an institution that it founded a little over a year ago for a very different purpose. Today the Department for Science, Innovation and Technology announced that it would be renaming the AI Safety Institute the “AI Security Institute.” With that, the body will shift from primarily exploring areas like existential risk and bias in large language models to a focus on cybersecurity, specifically “strengthening protections against the risks AI poses to national security and crime.”

Alongside this, the government also announced a new partnership with Anthropic. No firm services were announced, but the memorandum of understanding indicates the two will “explore” using Anthropic’s AI assistant Claude in public services, and that Anthropic will aim to contribute to work in scientific research and economic modelling. At the AI Security Institute, Anthropic will provide tools to evaluate AI capabilities in the context of identifying security risks.

“AI has the potential to transform how governments serve their citizens,” Anthropic co-founder and CEO Dario Amodei said in a statement. “We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents.”

Anthropic is the only company being announced today — coinciding with a week of AI activities in Munich and Paris — but it’s not the only one working with the government. A series of new tools unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the Secretary of State for Technology, said that the government planned to work with various foundational AI companies, and the Anthropic deal bears that out.)

The government’s switch-up of the AI Safety Institute — launched just over a year ago with a lot of fanfare — to AI Security shouldn’t come as too much of a surprise. 

When the newly installed Labour government announced its AI-heavy Plan for Change in January, it was notable that the words “safety,” “harm,” “existential,” and “threat” did not appear at all in the document.

That was not an oversight. The government’s plan is to kickstart investment in a more modernized economy, using technology, and specifically AI, to do that. It wants to work more closely with Big Tech, and it also wants to build homegrown big tech companies of its own. The main messages it has been promoting are development, AI, and more development. Civil servants will get their own AI assistant, called “Humphrey,” and they are being encouraged to share data and use AI in other areas to speed up how they work. Consumers will be getting digital wallets for their government documents, as well as chatbots.

So have AI safety issues been resolved? Not exactly, but the message seems to be that they can’t be considered at the expense of progress.

The government claimed that despite the name change, the song will remain the same.

“The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change,” Kyle said in a statement. “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.”

“The Institute’s focus from the start has been on security and we’ve built a team of scientists focused on evaluating serious risks to the public,” added Ian Hogarth, who remains the chair of the institute. “Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks.”

Further afield, priorities certainly appear to have shifted around the importance of “AI safety.” The biggest risk the AI Safety Institute in the U.S. is contemplating right now is that it will be dismantled. U.S. Vice President J.D. Vance telegraphed as much earlier this week during his speech in Paris.


Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
