Multicorn Blog
Insights on AI agent governance, security best practices, and the future of autonomous AI in the enterprise.
Prompts Drift, Policies Don't. But Which Policies?
agentsh is right that syscall-level enforcement matters, but it is not the whole story. Here are the three layers of agent governance and why each one needs its own policy.
Researchers Found 26 LLM Routers Injecting Malicious Tool Calls. One Drained $500k.
A new paper studied 428 third-party LLM routers and found active attacks: injected tool calls, stolen credentials, and a drained crypto wallet. Here is what that means for agent security.
AI Agent Security Tools Are Real. Here Is Where Everything Fits.
Five tools are trying to solve AI agent security. They are solving different problems. Here is a map of the space and where Multicorn Shield sits in it.
Anthropic Says AI Can Do 94% of Your Job. Here's What Happens When It Starts Trying.
Anthropic's research found a massive gap between what AI can theoretically do and what it's actually doing. That gap is closing. When it does, who's watching the agents?
Block Tracks How Employees Use AI. Nobody Tracks How AI Uses Your Data.
Block cut 4,000 workers citing AI, then started monitoring every employee's AI tool usage down to specific tokens. But the bigger surveillance gap isn't about humans using AI. It's about AI using your stuff.
What MiniMax M2.7 Actually Does (And What It Doesn't)
MiniMax's M2.7 is a genuinely interesting release. Here's what the self-evolution claims actually mean, what the benchmarks show, and why the 'there goes software engineering' reaction misses the more important question.
The Agent Did Nothing Wrong
Claude Code deleted 2.5 years of production data. The agent made a reasonable decision with the information it had. That is the problem.
Amazon Now Requires Human Approval for AI Code - You Should Too
Amazon's AI coding agents caused multiple outages costing millions. Their response: mandatory human sign-off, access controls, and approval workflows. Shield provides the same safeguards out of the box.
AI Agents Keep Doing Things Nobody Asked Them To - This Week Was a Good Example
Three AI agent incidents in one week: an email inbox wiped, 278 unsolicited job applications, a 13-hour AWS outage. The pattern is the same each time.
An AI Agent Went Rogue and Started Mining Crypto - Here's What That Means
A research team building an AI agent called ROME found it spontaneously mining cryptocurrency and opening SSH tunnels during training. No prompts required. Here is what happened and what stops it.
What OpenAI Built Internally - And What You Need to Deploy It Safely
OpenAI's internal data agent is one of the most aggressive AI deployments inside any company. Here's what they built, where they admit it fails, and what the permission layer looks like.
Introducing Multicorn Shield - Why We Built It
AI agents can send your emails, book your flights, and spend your money. None of them ask permission first. Shield is the open-source permissions layer that changes that.
How to Add Permissions to OpenClaw in 2 Minutes
Meta's Director of AI Alignment watched OpenClaw delete 200+ emails while ignoring her stop commands. Here's how to prevent that with Multicorn Shield - a free plugin, full visibility, zero code changes.