Insights on safe AI automation
Practical writing on AI agent security, governance, and controlled execution for engineering and operations teams.
Why Giving AI Direct API Access Is a Security Risk
Letting an LLM call your APIs directly feels convenient — until it sends a refund to the wrong customer or deletes a production record. Here's what can go wrong, and what to do instead.
Plans First: Why AI Should Think Before It Acts
The most reliable AI automation systems don't let the model call APIs directly. They let it plan first, then execute safely. Here's why that distinction matters.
The Case for Human Approval in AI Automation
Full automation is the goal, but getting there requires knowing which actions need a human in the loop — and building that in from the start.
What Is Agent Governance and Why Your Team Needs It
Agent governance is the set of controls that determine what AI agents can do, who can authorize them, and what gets logged. It's not optional for teams running AI in production.
How Private Workers Let AI Automate Internal Systems Safely
Most enterprise systems aren't publicly accessible — and they shouldn't be. Private workers let AI agents execute tasks inside your infrastructure without exposing your network.
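The private-worker pattern described above can be sketched in a few lines. This is an illustrative toy, not the actual architecture: the in-process `queue.Queue` stands in for a hosted control plane, and the task shape is hypothetical. The key property is that the worker only polls outward, so no inbound port into the private network is ever opened.

```python
# Hedged sketch of the private-worker pattern: the worker runs inside
# the network and polls outward for approved tasks, so nothing inbound
# is exposed. Queue, tool, and argument names are all illustrative.
import queue

task_queue = queue.Queue()  # stands in for the hosted control plane
task_queue.put({"tool": "restart_service", "args": {"name": "billing"}})

def private_worker(q):
    """Runs inside the private network; makes only outbound requests."""
    results = []
    while not q.empty():
        task = q.get()  # outbound poll: fetch the next approved task
        results.append(f"executed {task['tool']} on {task['args']['name']}")
    return results

print(private_worker(task_queue))
```

In a real deployment the poll would be an outbound HTTPS long-poll or message-broker subscription, but the direction of the connection is the point of the pattern.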
The Registry Is the Boundary
Policies and guardrails try to stop bad AI behavior after it starts. A registry prevents it by design — the agent literally cannot call what isn't registered.
Why Guardrails Aren't Enough for AI Agents
Guardrails react to bad behavior. Constrained execution prevents it. For AI agents with real API access, the distinction matters more than most teams realize.
What to Monitor When AI Agents Run in Production
AI agents in production are not set-and-forget. Here's what to log, what to alert on, and what patterns to watch for as agent usage scales.
Constrained Execution: The Security Model AI Agents Need
Security by constraint is a proven model in software systems. It's time to apply it to AI agents — define the boundary first, then operate freely within it.
Connecting AI to Internal APIs Without Exposing Your Network
Most enterprise APIs are internal — behind VPNs, on-premise, not reachable from the internet. Here's how to give AI agents access without opening up your infrastructure.
What Is Shadow MCP and How to Prevent It (2026 Guide)
Shadow MCP servers — unregistered, unvetted MCP integrations that AI agents connect to — are the shadow IT problem of 2026. Here's what they are, why they're dangerous, and the architectural control that prevents them.
AI Agent Governance Framework: 7 Controls Engineering Teams Need Before Production
Most teams treat AI agent governance as something to add later. By production, it's too late to retrofit. Here are the seven controls that need to be in place before your agents touch real business systems.
How to Connect AI Agents to Internal APIs Without Exposing Production
Most internal APIs aren't public — and shouldn't be. Here's the architecture that lets AI agents automate internal workflows without opening your network, leaking credentials, or bypassing your security controls.
MCP Server Registry: How to Allowlist Approved Servers in Production
MCP adoption is moving faster than MCP governance. Most teams using MCP in production haven't defined which servers their agents are actually allowed to connect to. Here's how to build an approved server registry and enforce it at execution time.
AI Agent Audit Log: What Engineering Teams Need for Compliance
When a compliance team asks whether your AI agent operated within authorized boundaries, 'we think so' is not an answer. A structured agent audit log is. Here's what it needs to contain and how it differs from application logs.
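A structured agent audit record might look like the sketch below. The field set is an assumption based on what a compliance review typically asks for (who acted, what was attempted, who authorized it, what happened), not a prescribed schema; the agent and policy names are hypothetical.

```python
# Illustrative sketch of a structured agent audit record. Fields are
# assumed for the example, not a standard: the point is that each agent
# action is captured as a discrete, queryable record, unlike app logs.
import json
from datetime import datetime, timezone

def audit_record(agent_id, action, target, authorized_by, outcome):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,            # which agent acted
        "action": action,                # what it attempted
        "target": target,                # which system or API was touched
        "authorized_by": authorized_by,  # human or policy that approved it
        "outcome": outcome,              # e.g. allowed / denied / error
    }

entry = audit_record("billing-agent-01", "issue_refund",
                     "payments-api", "policy:refund-under-100", "denied")
print(json.dumps(entry))
```

Because every record is structured the same way, "did the agent operate within authorized boundaries" becomes a query over `outcome` and `authorized_by` rather than a grep through free-form logs.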
Engineering Agents: How to Deploy AI in Production Without Losing Control
Engineering agents — LLM-powered systems that automate engineering work — are spreading through teams with the technical sophistication to build them. Deploying them to production requires a governance layer most prototypes don't have.