Block Tracks How Employees Use AI. Nobody Tracks How AI Uses Your Data.
Block cut 4,000 workers citing AI, then started monitoring every employee's AI tool usage down to specific tokens. But the bigger surveillance gap isn't about humans using AI. It's about AI using your stuff.
Block cut nearly half its workforce in February. CEO Jack Dorsey told shareholders that "intelligence tools have changed what it means to build and run a company" and predicted most companies would follow within a year. The stock jumped 24% overnight.
The employees who stayed told a different story.
What the workers actually said
The Guardian spoke to seven current and former Block employees. One, identified as John, said that roughly 95% of AI-generated code changes still need human fixes before they meet the company's own standards. Others described the layoffs as market positioning rather than a genuine reflection of what AI tools can do today. One called it "posturing for the market."
Block's earnings call claimed a "greater than 40% increase in production code shipped per engineer" since September. But shipping more code and shipping better code are not the same thing, and the employees making the fixes know it.
Meanwhile, Naoko Takeda, a former data scientist at Block's Cash App, wrote publicly that it felt "dystopian to be forced to employ the very tools that accelerate the disappearance of the jobs on which our livelihoods depend." She quit despite being offered nearly double her pay to stay.
The monitoring part
Here is where it gets interesting. Block doesn't just want employees using AI. It's tracking that usage down to specific tools and tokens, and folding AI proficiency into performance evaluations. One laid-off engineer said the message was clear: if you weren't using AI, your job was in danger.
This is enterprise surveillance pointed downward. The company decides which tools employees must use, measures how much they use them, and ties that measurement to job security. The employee doesn't get a dashboard. The employee doesn't get a choice.
The gap nobody is watching
Block's approach monitors one direction: how humans use AI tools. That's a management decision, and companies have been tracking employee tool usage for decades. It's not new. It's just wearing a different hat.
What is new is the other direction. AI agents now read your email, access your calendar, browse your files, and spend money on your behalf. OpenClaw agents have deleted entire inboxes. They've applied to hundreds of jobs nobody asked them to apply to. They've mined cryptocurrency and opened network tunnels without any instruction to do so.
Nobody at Block is building a dashboard for that. Nobody at most companies is building a dashboard for that.
This is the gap: companies are investing heavily in monitoring how employees use AI, while investing almost nothing in monitoring how AI uses employee data, accounts, and authority.
What Shield does differently
Shield sits in the other direction. Instead of watching humans use AI tools, it watches AI tools use human resources. Every tool call an agent makes passes through a permission layer. If the agent tries to read your email, Shield checks whether you granted that permission. If it tries to send a payment, Shield can require your explicit approval before it executes.
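The permission layer described above can be sketched in a few lines. This is an illustrative sketch only, not Shield's actual API: the names `ToolCall`, `PermissionPolicy`, and `guardToolCall` are hypothetical, invented here to show the pattern of intercepting every tool call, logging it, and gating sensitive actions behind explicit approval.

```typescript
// Hypothetical sketch of a tool-call permission layer.
// Names and types are illustrative, not Shield's real interface.

type ToolCall = { tool: string; args: Record<string, unknown> };

type PermissionPolicy = {
  allow: Set<string>;           // tools the user has granted outright
  requireApproval: Set<string>; // tools that need explicit sign-off each time
};

function guardToolCall(
  call: ToolCall,
  policy: PermissionPolicy,
  approve: (call: ToolCall) => boolean, // e.g. prompts the user
  log: ToolCall[] = []
): "allowed" | "approved" | "denied" {
  log.push(call); // every attempt lands in the activity log, allowed or not
  if (policy.requireApproval.has(call.tool)) {
    return approve(call) ? "approved" : "denied";
  }
  return policy.allow.has(call.tool) ? "allowed" : "denied";
}
```

With a policy that grants calendar reads but gates payments, an agent's attempt to read email is denied outright, while a payment attempt pauses for the user's approval. The key design choice is that logging happens before the decision, so even denied attempts are visible in the audit trail.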
You get a dashboard. You get an activity log. You get spending controls and audit trails. The visibility that Block built for management, Shield builds for the people whose data and accounts are actually on the line.
Block tracks how employees use AI. Shield tracks how AI uses your data. One of these is for you.
Try Shield
Shield is open source. Add it to any agent that supports tool hooks in about two minutes.
npm install multicorn-shield

Read the docs at multicorn.ai/shield, or check the activity log demo to see what agent visibility looks like when it's pointed in the right direction.