News

Anthropic's latest feature for two of its Claude AI models could be the beginning of the end for the AI jailbreaking ...
Anthropic has said that its Claude Opus 4 and 4.1 models will now have the ability to end conversations that are “extreme ...
Anthropic has given its chatbot, Claude, the ability to end conversations it deems harmful. You likely won't encounter the ...
Claude won't stick around for toxic convos. Anthropic says its AI can now end extreme chats when users push too far.
While Meta's recently exposed AI policy explicitly permitted troubling sexual, violent, and racist content, Anthropic adopted ...
In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The ...
By empowering Claude to exit abusive conversations, Anthropic is contributing to ongoing debates about AI safety, ethics, and ...
Can exposing AI to “evil” make it safer? Anthropic’s preventative steering with persona vectors explores controlled risks to ...
Anthropic’s Claude Code now features continuous AI security reviews, spotting vulnerabilities in real time to keep unsafe ...
Dario Amodei said he believes Anthropic employees are largely staying because of "true belief in the mission and belief in ...
While an Anthropic spokesperson confirmed that the AI firm did not acquire Humanloop or its IP, that’s a moot point in an ...
Anthropic has announced a new experimental safety feature that allows its Claude Opus 4 and 4.1 artificial intelligence ...