One of the first questions enterprise teams ask when evaluating AI automation is: what about our internal systems? The CRM, the ERP, the data warehouse, the internal ticketing system — none of these are accessible from the public internet, and there are good reasons to keep it that way.
The usual answer from AI tooling vendors is to expose the internal API publicly or set up a VPN tunnel. Both options are uncomfortable for security teams: the first expands your attack surface, and the second adds complexity and creates a dependency between your internal network and an external service.
The private worker pattern
A private worker is a small process that runs inside your own infrastructure — on-premise, in your private cloud, or on any machine with access to your internal systems. It doesn't expose any ports or accept inbound connections. Instead, it polls an outbound queue for approved tasks, executes them using local credentials, and reports results back.
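To make the shape of the pattern concrete, here is a minimal sketch of such a worker in Python. The queue URL, endpoint paths, payload fields, and the example task name are illustrative assumptions, not the API of any particular product:

```python
import os
import time
import requests

QUEUE_URL = os.environ["QUEUE_URL"]        # e.g. https://queue.example.com (placeholder)
WORKER_TOKEN = os.environ["WORKER_TOKEN"]  # authenticates this worker to the queue

def run_task(task: dict) -> dict:
    """Execute one approved task against an internal system and return its output."""
    if task["name"] == "lookup_customer":  # hypothetical task name
        resp = requests.get(
            "http://crm.internal/api/customers",  # internal endpoint, never leaves the network
            params=task["inputs"],
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()
    raise ValueError(f"unknown task: {task['name']}")

while True:
    # Outbound-only: the worker polls for work; nothing connects in to it.
    poll = requests.get(
        f"{QUEUE_URL}/tasks/next",
        headers={"Authorization": f"Bearer {WORKER_TOKEN}"},
        timeout=30,
    )
    if poll.status_code == 204:  # nothing queued right now
        time.sleep(5)
        continue
    task = poll.json()
    try:
        payload = {"output": run_task(task)}
    except Exception as exc:
        payload = {"error": str(exc)}
    # Report only the result (or the error) back over the same outbound channel.
    requests.post(
        f"{QUEUE_URL}/tasks/{task['id']}/result",
        headers={"Authorization": f"Bearer {WORKER_TOKEN}"},
        json=payload,
        timeout=30,
    )
```

The loop is the whole pattern: pull an approved task, execute it locally, report the result back.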
From your network's perspective, the worker looks like any other internal process making outbound HTTPS requests to an external queue. No firewall rules need to change. No new inbound access is created. The security posture of your internal network stays exactly as it was.
Credentials never leave your environment
Because the worker runs inside your network, it can load credentials from your existing secrets management infrastructure — HashiCorp Vault, AWS Secrets Manager, environment variables, whatever you use. These credentials are used at execution time to authenticate against internal APIs and are never sent to the cloud layer or exposed to the AI model.
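A sketch of how that might look in the worker from the earlier example: the environment variable name, Vault path, and secret field are placeholders, and the Vault branch assumes the hvac client against a KV v2 secrets engine.

```python
import os

def load_internal_token() -> str:
    """Resolve the internal API credential locally, at execution time.

    The secret is read from the environment the worker runs in and is used
    only to build the Authorization header for internal calls; it is never
    included in anything reported back to the queue.
    """
    # Simplest case: an environment variable injected by your existing tooling.
    token = os.environ.get("CRM_API_TOKEN")
    if token:
        return token

    # Or pull it from HashiCorp Vault with the hvac client (KV v2 engine assumed;
    # the path and field name are placeholders).
    import hvac
    client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])
    secret = client.secrets.kv.v2.read_secret_version(path="automation/crm")
    return secret["data"]["data"]["api_token"]

# Used inside run_task() when calling an internal API:
#   headers={"Authorization": f"Bearer {load_internal_token()}"}
```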
This resolves a concern that comes up in almost every enterprise AI conversation: if the AI needs credentials to call my systems, where do those credentials live? With a private worker, they live where they always have — inside your environment, managed by your existing processes.
The cloud layer only sees task names and results
The cloud control plane — which handles planning, validation, and orchestration — only ever sees the task name, the validated inputs, and the output the worker returns. It never sees the internal endpoint, the credentials used, or the internal network topology. The worker is the only component that bridges the cloud and your private infrastructure.
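To illustrate the split, here is roughly what crosses the boundary in each direction, using illustrative field names and values carried over from the earlier sketch:

```python
# What the control plane sends the worker: a task name and validated inputs.
task = {
    "id": "t_1042",                               # illustrative identifier
    "name": "lookup_customer",
    "inputs": {"email": "a.smith@example.com"},   # validated upstream, before dispatch
}

# What the worker sends back: the output of that task, nothing more.
report = {"task_id": "t_1042", "output": {"customer_id": 981, "tier": "enterprise"}}

# What never crosses the boundary: the internal endpoint that answered the lookup,
# the credential used to call it, and anything about the network it lives in.
```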
This separation is important for compliance as well as security. Your internal API details, network architecture, and credentials are never transmitted to or stored by an external system. The audit log captures what happened without capturing how your internal systems are structured.
Getting started with a private worker
A private worker is intentionally lightweight. It needs outbound HTTPS access to pull jobs from the queue and report results. It needs access to your internal APIs. And it needs credentials to authenticate against them. Beyond that, there's no special infrastructure required — it can run as a Docker container, a systemd service, or a simple process on an existing internal server.
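As one example of how little is involved, here is a sketch of running the worker as a systemd service; the unit name, paths, and environment variable names are all placeholders for whatever your environment uses:

```ini
# /etc/systemd/system/private-worker.service (illustrative)
[Unit]
Description=Private worker for AI automation tasks
After=network-online.target
Wants=network-online.target

[Service]
# No listening ports, no inbound firewall rules: the worker only makes outbound HTTPS calls.
ExecStart=/usr/bin/python3 /opt/private-worker/worker.py
# Queue endpoint for pulling jobs and reporting results.
Environment=QUEUE_URL=https://queue.example.com
# Worker token and internal API credentials, sourced from your existing secrets tooling.
EnvironmentFile=/etc/private-worker/worker.env
User=private-worker
Restart=on-failure

[Install]
WantedBy=multi-user.target
```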
The setup mirrors how you'd connect any internal service to an external queue. The difference is that the jobs it processes are AI-generated plans, validated and approved before they ever reach the worker.