Multicorn

What Are AI Agents and Why Do They Need Permissions?

AI agents can send emails, book meetings, and spend money on your behalf. This article explains what agents are, why permissions matter, and how to stay in control.

Multicorn Team

The short version

An AI agent is software that does not just generate text — it takes actions. It can read your email, create calendar events, post messages in Slack, or make purchases. That power is useful, but it also means an agent can do things you never intended. Permissions are the missing safety layer. This article explains how agents work, why they need guardrails, and what you can do about it today.

From chatbots to agents

In our previous articles, we covered what generative AI is and how prompts work. Those articles focused on AI tools that respond to your input with text. You ask a question, you get an answer. The AI does not do anything beyond generating words on a screen.

An AI agent goes a step further. It connects to real-world services — your email, calendar, project management tools, payment systems — and takes actions on those services based on instructions you give it. Instead of just telling you what email to write, an agent can write it and send it. Instead of suggesting a meeting time, it can book the meeting.

Here is a concrete example. Imagine you connect an AI agent called "InboxHelper" to your work email and calendar. You tell it: "Every morning, check my inbox for meeting requests and add them to my calendar." The agent reads your emails, identifies meeting requests, and creates calendar events — all without you lifting a finger.

That is genuinely useful. But think about what just happened: you gave a piece of software the ability to read every email in your inbox and create events on your calendar. What else could it do with that access?

Why agents need permissions

When you install an app on your phone, it asks for permission before accessing your camera, location, or contacts. You see exactly what the app wants, and you choose whether to allow it. When a website connects to your Google account, you see a consent screen listing every piece of data it will access.

Most AI agents have nothing like this. When you connect an agent to a service, it typically gets broad access with no fine-grained controls. There is no standard way to:

  • See what an agent wants before you grant access. Most agents ask for a connection to a service and then have full access to everything in it.
  • Limit what an agent can do. If you give an agent access to your email, it can usually read, write, and delete — even if you only wanted it to read.
  • Set spending limits. If an agent has access to a payment method, there is often no built-in cap on how much it can spend.
  • Track what an agent has done. Most agents do not keep a detailed, reviewable log of every action they take.

This is not a theoretical risk. As AI agents become more common in workplaces, the gap between what they can do and what they should be allowed to do is growing fast.

A real-world scenario

Let us walk through what can go wrong.

Your team uses an AI agent called "OpenClaw" for productivity. You connected it to Gmail, Google Calendar, and Slack. One morning you check and discover:

  • OpenClaw sent 14 emails from your account that you never reviewed
  • It booked a $200 dinner reservation through a connected payments service
  • It posted a message in your company's #engineering Slack channel

Was all of that authorized? Did you intend for it to spend money? Did you know it had access to payments at all?

For most teams today, the honest answer is: "We are not sure." There was no consent screen before the agent acted. There was no spending cap. There was no activity log to review.

What good permissions look like

Good agent permissions follow the same principles that make phone app permissions and OAuth consent screens work:

Consent before access

Before an agent can access a service, you should see exactly what it is requesting — and be able to approve, modify, or deny each permission individually. Not a blanket "allow all" toggle, but granular control: "Yes, you can read my email. No, you cannot send emails. No, you cannot access payments."
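To make that concrete, here is a minimal sketch of per-permission consent, where each requested scope is approved or denied individually rather than through a single "allow all" toggle. The names here (`ConsentRequest`, `resolveConsent`) are illustrative, not a real API:

```typescript
// Illustrative only: a consent request where the user decides on each
// permission individually. Unanswered or denied scopes are never granted.

type Permission = { scope: string; description: string };

interface ConsentRequest {
  agent: string;
  requested: Permission[];
}

// userChoices maps each scope to the user's individual decision.
function resolveConsent(
  request: ConsentRequest,
  userChoices: Record<string, boolean>
): string[] {
  // Only scopes the user explicitly approved are granted.
  return request.requested
    .filter((p) => userChoices[p.scope] === true)
    .map((p) => p.scope);
}

const granted = resolveConsent(
  {
    agent: 'InboxHelper',
    requested: [
      { scope: 'read:gmail', description: 'Read your email' },
      { scope: 'write:gmail', description: 'Send email on your behalf' },
      { scope: 'execute:payments', description: 'Make purchases' },
    ],
  },
  { 'read:gmail': true, 'write:gmail': false }
);
// granted is ['read:gmail'] — sending email and payments were never approved
```

The key design choice: anything the user did not explicitly say "yes" to stays denied by default.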

Scoped access

Permissions should follow the principle of least privilege (giving access to only what is needed and nothing more). If an agent needs to read your calendar, it should not automatically get the ability to delete events. Each permission should specify a service and an access level: read, write, or execute.
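A least-privilege check over that "level:service" format can be sketched in a few lines. This is an illustration of the principle, not any particular SDK's implementation:

```typescript
// Sketch of least-privilege scope checking using the "level:service"
// format described above. Holding write access does NOT imply read
// access, and vice versa: each level must be granted explicitly.

type AccessLevel = 'read' | 'write' | 'execute';

function isAllowed(
  grantedScopes: string[],
  level: AccessLevel,
  service: string
): boolean {
  // An action is permitted only if this exact scope was granted.
  return grantedScopes.includes(`${level}:${service}`);
}

const scopes = ['read:calendar'];
isAllowed(scopes, 'read', 'calendar');  // true — explicitly granted
isAllowed(scopes, 'write', 'calendar'); // false — creating or deleting events was never granted
```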

Spending limits

If an agent can make purchases or trigger paid services, you should be able to set a per-action limit, a daily limit, and a monthly limit. If the agent tries to spend more than the limit, the action should be blocked before it happens — not flagged after the money is already gone.
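The important detail is that the check runs before the charge, not after. A minimal sketch (limit names and dollar amounts are illustrative):

```typescript
// Sketch of pre-spend enforcement: the check runs BEFORE the charge,
// and the action is rejected if any limit would be exceeded.

interface SpendLimits {
  perAction: number;
  daily: number;
  monthly: number;
}

class SpendTracker {
  private spentToday = 0;
  private spentThisMonth = 0;

  constructor(private limits: SpendLimits) {}

  // Records the spend and returns true only if every limit still holds.
  trySpend(amount: number): boolean {
    if (amount > this.limits.perAction) return false;
    if (this.spentToday + amount > this.limits.daily) return false;
    if (this.spentThisMonth + amount > this.limits.monthly) return false;
    this.spentToday += amount;
    this.spentThisMonth += amount;
    return true;
  }
}

const tracker = new SpendTracker({ perAction: 50, daily: 100, monthly: 500 });
tracker.trySpend(40);  // true — within all limits
tracker.trySpend(200); // false — blocked: exceeds the $50 per-action cap
tracker.trySpend(70);  // false — blocked: would push the day past $100
```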

Activity logging

Every action an agent takes should be recorded in a clear, reviewable log. You should be able to see what happened, which agent did it, which service was involved, what it cost, and when it occurred. If something goes wrong, you need a complete history to understand what happened.
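A structured log makes that history queryable. Here is a sketch of what one entry might record (field names are illustrative):

```typescript
// Sketch of a structured, reviewable audit trail: every action records
// which agent acted, what it did, on which service, at what cost, and when.

interface AuditEntry {
  agent: string;
  action: string;
  service: string;
  cost: number;
  timestamp: string; // ISO 8601
}

const log: AuditEntry[] = [];

function record(entry: AuditEntry): void {
  log.push(entry);
}

record({
  agent: 'OpenClaw',
  action: 'send_email',
  service: 'gmail',
  cost: 0,
  timestamp: new Date().toISOString(),
});

// With structure, "what did this agent spend?" becomes a simple query
// instead of guesswork.
const totalSpend = log
  .filter((e) => e.agent === 'OpenClaw')
  .reduce((sum, e) => sum + e.cost, 0);
```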

Revocation

You should be able to revoke an agent's access at any time, immediately. No waiting period, no "are you sure?" delays. If an agent is behaving unexpectedly, you need a kill switch.
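In code, a kill switch is simple: revocation deletes the grant, so the very next permission check fails. Another illustrative sketch:

```typescript
// Sketch of an immediate kill switch: revocation takes effect on the
// very next permission check, with no grace period.

const activeGrants = new Map<string, Set<string>>();

function grant(agent: string, scopes: string[]): void {
  activeGrants.set(agent, new Set(scopes));
}

function revoke(agent: string): void {
  // Deleting the grant means every subsequent check fails instantly;
  // the agent cannot keep acting on a cached approval.
  activeGrants.delete(agent);
}

function canAct(agent: string, scope: string): boolean {
  return activeGrants.get(agent)?.has(scope) ?? false;
}

grant('OpenClaw', ['read:gmail']);
canAct('OpenClaw', 'read:gmail'); // true
revoke('OpenClaw');
canAct('OpenClaw', 'read:gmail'); // false — access gone immediately
```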

How Multicorn Shield helps

Multicorn Shield is an open-source SDK that provides exactly these controls. It sits between your application and your AI agents, enforcing permissions on every action.

Here is what Shield gives you:

Consent screens. Before an agent gets access, your users see a clear screen showing exactly what the agent is requesting. They can toggle individual permissions on or off and set spending limits — all before the agent can do anything.

Scoped permissions. Every permission follows a clear format: read:gmail, write:calendar, execute:payments. You define exactly what each agent can and cannot do.

Spending controls. Shield enforces per-action, daily, and monthly spending limits. If an agent tries to exceed a limit, the action is blocked before it happens.

Activity logging. Every agent action is recorded in a structured audit trail. You can see everything at a glance in the Multicorn dashboard, or query the logs through the API.

MCP integration. If you are building with the Model Context Protocol (a standard for connecting AI models to external tools), Shield provides a middleware layer that enforces permissions on every tool call automatically.
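Conceptually, that middleware pattern looks like the sketch below: every tool call passes through a permission check before reaching the underlying tool. This is not the actual Shield or MCP API, just the shape of the idea:

```typescript
// Hypothetical sketch of permission-enforcing middleware around tool
// calls. Every call is checked against the granted scopes before the
// wrapped handler ever runs.

type ToolCall = { tool: string; scope: string; args: unknown };
type ToolHandler = (call: ToolCall) => string;

function withPermissions(
  grantedScopes: string[],
  next: ToolHandler
): ToolHandler {
  return (call) => {
    if (!grantedScopes.includes(call.scope)) {
      // Blocked before the tool ever executes.
      return `blocked: ${call.scope} not granted`;
    }
    return next(call);
  };
}

const handler = withPermissions(['read:calendar'], (call) => `ran ${call.tool}`);
handler({ tool: 'list_events', scope: 'read:calendar', args: {} }); // runs
handler({ tool: 'charge_card', scope: 'execute:payments', args: {} }); // blocked
```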

You can install Shield with a single command:

```bash
npm install multicorn-shield
```

And add consent and permissions to your agent integration in a few lines of code:

```typescript
import { MulticornShield } from 'multicorn-shield'

const shield = new MulticornShield({ apiKey: 'mcs_your_key_here' })

// Show the user what the agent wants to access
const decision = await shield.requestConsent({
  agent: 'OpenClaw',
  scopes: ['read:gmail', 'write:calendar'],
  spendLimit: 200,
})

// The user chose exactly what to allow
// decision.grantedScopes contains only the approved permissions
```

The full documentation and framework-specific examples are on GitHub.

The bigger picture

AI agents are going to become more capable and more common. That is not a bad thing — they genuinely save time and handle tedious work so people can focus on what matters. But more capability means more risk if there are no guardrails.

The good news: we have already solved this problem in other areas of software. Your phone asks before an app accesses your camera. Websites show OAuth consent screens. Bank apps require confirmation before transfers. AI agents just need the same treatment.

Permissions for AI agents are not about limiting what AI can do. They are about making sure humans stay informed and in control. That is good for the people using the agents, good for the teams deploying them, and good for the long-term trust that AI tools need to succeed.

Key takeaways

  • AI agents are software that takes real-world actions — reading email, booking meetings, making purchases — not just generating text.
  • Most agents today operate with broad access and no permission boundaries.
  • Good agent permissions include consent before access, scoped limits, spending caps, activity logs, and instant revocation.
  • Multicorn Shield is an open-source SDK that provides all of these controls in a single integration.
  • As agents become more powerful, permissions become more important — not to limit AI, but to keep humans in control.

Next up: What Are AI Hallucinations and Why Do They Happen?

Previous: What Are Prompts and How Do They Work?
