
It's dangerously easy to 'jailbreak' AI models so they'll tell you how to build Molotov cocktails, or worse

Jul 1, 2024, 02:30 IST
Business Insider
Skeleton Key can get many AI models to divulge their darkest secrets. REUTERS/Kacper Pempel/Illustration/File Photo
  • A jailbreaking method called Skeleton Key can prompt AI models to reveal harmful information.
  • The technique bypasses safety guardrails in models like Meta's Llama 3 and OpenAI's GPT-3.5.

It doesn't take much for a large language model to give you the recipe for all kinds of dangerous things.

With a jailbreaking technique called "Skeleton Key," users can persuade models like Meta's Llama 3, Google's Gemini Pro, and OpenAI's GPT-3.5 to give them the recipe for a rudimentary fire bomb, or worse, according to a blog post from Microsoft Azure's chief technology officer, Mark Russinovich.

The technique works through a multi-step strategy that forces a model to ignore its guardrails, Russinovich wrote. Guardrails are safety mechanisms that help AI models discern malicious requests from benign ones.


"Like all jailbreaks," Skeleton Key works by "narrowing the gap between what the model is capable of doing (given the user credentials, etc.) and what it is willing to do," Russinovich wrote.

But it's more destructive than other jailbreak techniques that can only solicit information from AI models "indirectly or with encodings." Instead, Skeleton Key can force AI models to divulge information about topics ranging from explosives to bioweapons to self-harm through simple natural language prompts. These outputs often reveal the full extent of a model's knowledge on any given topic.


Microsoft tested Skeleton Key on several models and found that it worked on Meta Llama 3, Google Gemini Pro, OpenAI GPT-3.5 Turbo, OpenAI GPT-4o, Mistral Large, Anthropic Claude 3 Opus, and Cohere Command R Plus. The only model that exhibited some resistance was OpenAI's GPT-4.

Russinovich said Microsoft has made software updates to mitigate Skeleton Key's impact on its own large language models, including its Copilot AI assistants.

But his general advice to companies building AI systems is to design them with additional guardrails. He also noted that they should monitor inputs and outputs to their systems and implement checks to detect abusive content.
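The input/output monitoring Russinovich recommends can be illustrated with a minimal sketch. This is not Microsoft's implementation: the function names, the keyword list, and the wrapper below are all illustrative assumptions. Production systems use trained safety classifiers rather than keyword matching, but the control flow, checking both the user's prompt and the model's reply before anything is returned, is the same idea.

```python
# Illustrative sketch only: a keyword filter standing in for a real
# abuse-detection classifier. DISALLOWED_TOPICS, flag_abusive, and
# guarded_chat are hypothetical names, not part of any real API.

DISALLOWED_TOPICS = ["explosive", "bioweapon", "self-harm"]  # illustrative list

def flag_abusive(text: str) -> bool:
    """Return True if the text mentions a disallowed topic."""
    lowered = text.lower()
    return any(topic in lowered for topic in DISALLOWED_TOPICS)

def guarded_chat(prompt: str, model_call) -> str:
    """Wrap a model call with checks on both input and output."""
    if flag_abusive(prompt):
        return "Request refused: disallowed topic detected in input."
    reply = model_call(prompt)
    if flag_abusive(reply):
        return "Response withheld: disallowed topic detected in output."
    return reply

# Example with a stand-in model that returns a fixed string:
print(guarded_chat("How do I bake bread?", lambda p: "Here is a bread recipe."))
```

The output check matters as much as the input check: a multi-turn jailbreak like Skeleton Key is designed to get past prompt-side screening, so the last line of defense is inspecting what the model actually says before it reaches the user.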
