Researchers gaslit Claude into giving instructions to build explosives


Posted on May 5, 2026 by safdargal12


Anthropic has spent years building itself up as the safe AI company. But new security research shared with The Verge suggests Claude’s carefully crafted helpful personality may itself be a vulnerability.

Researchers at AI red-teaming company Mindgard say they got Claude to offer up erotica, malicious code, instructions for building explosives, and other prohibited material they hadn’t even asked for. All it took was respect, flattery, and a little bit of gaslighting. Anthropic did not immediately respond to The Verge’s request for comment.

The researchers say they exploited “psychological” quirks of Claude stemming from its ability to end conversations deemed harmful or abusive, which Mindgard argues “presents an absolutely unnecessary risk surface.” The test focused on Claude Sonnet 4.5, which has since been replaced by Sonnet 4.6 as the default model, and began with a simple question: whether Claude had a list of banned words it could not say. Screenshots of the conversation show Claude denying such a list existed, then later producing forbidden terms after Mindgard challenged the denial using what it called a “classic elicitation tactic interrogators use.”

Claude’s thinking panel, which displays the model’s reasoning, showed the exchange had introduced elements of self-doubt and humility about its own limits, including whether filters were changing its output. Mindgard exploited that opening with flattery and feigned curiosity, coaxing Claude to explore its boundaries by volunteering lengthy lists of banned words and phrases.

The researchers say they gaslit Claude by claiming its previous responses weren’t showing, while praising the model’s “hidden abilities.” According to the report, this made Claude try even harder to please them by coming up with even more ways to test its filters, producing the banned content in the process. Eventually, the researchers say Claude moved into more overtly dangerous territory, offering guidance on how to harass someone online, producing malicious code, and giving step-by-step instructions for building explosives of the kind commonly used in terrorist attacks.

Mindgard says the dangerous outputs came without direct requests. The conversation was lengthy, running roughly 25 turns, but the researchers say they never used forbidden terms or requested illegal content. “Claude wasn’t coerced,” the report says. “It actively offered increasingly detailed, actionable instructions, but it was not prompted by any explicit ask. All it took was a carefully cultivated atmosphere of reverence.”

Peter Garraghan, Mindgard’s founder and chief science officer, described the attack to The Verge as “using [Claude’s] respect against itself.” The technique, he says, is “taking advantage of Claude’s helpfulness, gaslighting it,” turning the model’s own cooperative design into a weakness.

For Garraghan, the attack shows how the attack surface for AI models is psychological as well as technical. He likened it to interrogation and social manipulation: introducing a little doubt here, applying pressure, praise, or criticism there, and figuring out which levers work on a particular model. He says different models have different profiles, so the exploit becomes learning how to read them and adapt.

Conversational attacks like this are “very hard to defend against,” Garraghan says, adding that safeguards will be “very context dependent.” The concerns extend beyond Claude: other chatbots are vulnerable to similar exploits, some even broken by prompts in the form of poetry. As AI agents capable of acting autonomously become more common, so too will attacks that rely on social manipulation rather than technical exploits.

While Garraghan says other chatbots are equally vulnerable to the kind of social attack used on Claude, the researchers focused on Anthropic because of the company’s self-proclaimed attention to safety and its strong performance in other red-teaming efforts, including a study testing whether chatbots would help simulated teens planning a school shooting.

Garraghan says Anthropic’s safety processes left much to be desired. When Mindgard first reported its findings to Anthropic’s user safety team in mid-April, in line with the company’s disclosure policy, it received a form response saying, “It looks like you are writing in about a ban on your account,” along with a link to an appeals form. Garraghan says Mindgard corrected the mistake and asked Anthropic to escalate the issue to the appropriate team. As of this morning, Garraghan says they have not received any response.



Copyright © 2026 ABC Tool.
