Anthropic Hands Claude Code More Control, But Keeps It On A Leash

Claude Code now decides which actions are safe on its own.
Matilda
Claude Code, the AI-powered coding assistant from Anthropic, just received one of its most significant upgrades yet. Developers who have been stuck choosing between hovering over every AI action or letting the model run completely unchecked now have a third option. The updated Claude Code can assess the risk level of its own actions and decide, in real time, which steps are safe to take independently.

The Problem That Was Slowing Developers Down

Anyone who has used an AI coding tool in a production environment knows the tension well. Do you approve every single file change, terminal command, and API call the model wants to make? Or do you step back, let it run, and hope it does not break something critical? Neither option is great for productivity. Babysitting an AI every step of the way defeats the purpose of automation. Letting it loose entirely introduces real risk. This friction has been one of the most consistent complaints across the developer community since agentic AI tools first…
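To make the tension concrete: before an update like this, the middle ground had to be configured by hand. Claude Code supports a permissions block in its settings file that whitelists or blocks specific tools and commands. The sketch below is an illustrative example of that kind of manual guardrail; the particular rules shown (allowing reads and test runs, blocking destructive shell commands and secrets files) are hypothetical choices, not a recommended policy:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Read(.env)"
    ]
  }
}
```

Anything not covered by a rule still falls back to a confirmation prompt, which is exactly the babysitting problem the new risk-assessment behavior aims to reduce.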