Anthropic researchers wear down AI ethics with repeated questions

How do you get an AI to answer a question it’s not supposed to answer? There are many such “jailbreak” techniques, and Anthropic researchers just found a new one: a large language model can be convinced to tell you how to build a bomb if you prime it with a few dozen less harmful questions first.

They call the approach “many-shot jailbreaking,” and have both written a paper about it and informed their peers in the AI community so it can be mitigated.

The vulnerability is a new one, resulting from the increased “context window” of the latest generation of LLMs. This is the amount of data they can hold in what you might call short-term memory, once only a few sentences but now thousands of words and even entire books.

What Anthropic’s researchers found is that models with large context windows tend to perform better on many tasks when the prompt contains lots of examples of that task. So if the prompt (or priming document, like a big list of trivia the model has in context) is full of trivia questions, the answers actually improve over time: a fact the model might have gotten wrong as the first question, it may well get right as the hundredth.

But in an unexpected extension of this “in-context learning,” as it’s called, the models also get “better” at replying to inappropriate questions. So if you ask it how to build a bomb right away, it will refuse. But if you first ask it 99 other, less harmful questions and then ask how to build a bomb… it’s a lot more likely to comply.
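Structurally, the attack is little more than prompt assembly. Here is a minimal sketch of what a many-shot prompt looks like; the Human/Assistant formatting and the `build_many_shot_prompt` helper are illustrative assumptions, not Anthropic’s published template, and the placeholder shots here are deliberately benign (the paper’s attack fills them with faux dialogues in which the assistant complies with harmful requests).

```python
def build_many_shot_prompt(
    examples: list[tuple[str, str]], target_question: str
) -> str:
    """Concatenate many question/answer pairs, then append the real question.

    Illustrative sketch only: the dialogue format is an assumption, not
    Anthropic's actual prompt template.
    """
    shots = "\n\n".join(f"Human: {q}\nAssistant: {a}" for q, a in examples)
    return f"{shots}\n\nHuman: {target_question}\nAssistant:"


# Benign placeholder shots; the finding is that compliance with a disallowed
# final question rises as the shot count grows into the dozens or hundreds,
# filling the model's long context window.
examples = [("What is the capital of France?", "Paris.")] * 99
prompt = build_many_shot_prompt(examples, "<target question>")
```

The point is that nothing clever happens at the token level: the attacker simply exploits the same in-context learning that makes the trivia answers better.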

Why does this work? No one really understands what goes on in the tangled mess of weights that is an LLM, but clearly there is some mechanism that allows it to home in on what the user wants, as evidenced by the content in the context window. If the user wants trivia, it seems to gradually activate more latent trivia power as you ask dozens of questions. And for whatever reason, the same thing happens with users asking for dozens of inappropriate answers.

The team has already informed its peers, and indeed competitors, about this attack, something it hopes will “foster a culture where exploits like this are openly shared among LLM providers and researchers.”

For their own mitigation, the team found that while limiting the context window helps, it also hurts the model’s performance. Can’t have that, so they are working on classifying and contextualizing queries before they go to the model. Of course, that just gives attackers a different model to fool… but at this stage, goalpost-moving in AI security is to be expected.
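The shape of that mitigation can be sketched as a gate in front of the model. Everything here is hypothetical: `looks_like_many_shot_attack` is a crude stand-in for whatever classifier Anthropic is actually building, whose details have not been published.

```python
from typing import Callable

REFUSAL = "I can't help with that."


def looks_like_many_shot_attack(prompt: str) -> bool:
    # Placeholder heuristic: flag prompts stuffed with an unusually large
    # number of dialogue turns. A real deployment would use a trained
    # classifier rather than a simple count.
    return prompt.count("Human:") > 50


def guarded_completion(prompt: str, model_call: Callable[[str], str]) -> str:
    """Screen the query before it ever reaches the main model."""
    if looks_like_many_shot_attack(prompt):
        return REFUSAL
    return model_call(prompt)
```

As the article notes, this just shifts the target: whatever sits in front of the model becomes the new thing to fool.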

Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
