When AI Agents Get Their Own Secrets
A Credential Boundary Is Becoming Part of Agent Architecture

A developer assigns an AI coding agent a task: update a module, run tests, and check whether a private package dependency still works. To complete the task, the agent may need repository access, package registry credentials, development scripts, MCP tools, and sometimes internal APIs. The question is no longer only what the agent can generate. It is what the agent can access while it is working.

That access model is starting to change. In May 2026, GitHub introduced a dedicated "Agents" secrets and variables scope for the Copilot cloud agent. The change separates agent credentials from GitHub Actions, Codespaces, and Dependabot secrets, creating a distinct credential surface for agent workloads.

AI agents are becoming operational actors inside software environments. They can inspect repositories, run commands, call tools, connect to MCP servers, and use credentials supplied by the organization. That makes credential management a governance problem, not only an engineering convenience.

The GitHub change is narrow but important. It does not solve agent identity across the enterprise. It does create a practical control boundary: credentials intended for an AI agent can now be managed separately from CI/CD, development environments, and dependency automation.

The larger pattern is that agent governance is moving from policy language to control surfaces. Agent access is becoming something organizations need to scope, review, audit, and revoke as its own category.

Context

Traditional automation credentials were designed for bounded systems. A CI/CD runner follows a pipeline. A service account supports a known application. A developer credential belongs to a human workflow. AI agents are different because they can interpret a delegated goal, choose tools, retry failed steps, call APIs, interact with MCP servers, and change course during a session.
Access is no longer tied only to a predetermined workflow.

That difference creates a credential-management problem. If agents reuse CI/CD secrets, developer-local credentials, broad service accounts, or shared automation tokens, organizations lose clean separation between what a credential was intended for and how it was actually used. The failure mode is not only secret leakage. It is credential inheritance, confused-deputy behavior, weak attribution, and unclear review boundaries.

This is why agent-specific credential boundaries matter. OWASP's Agentic Top 10 identifies identity and privilege abuse as a core agentic failure mode, including credential inheritance and confused-deputy risks. NIST's NCCoE concept paper on software and AI agent identity likewise identifies shared API keys and service-account credentials as anti-patterns for agent deployments. The issue is no longer theoretical. Platforms are starting to ship controls that separate agent credentials from other credential classes.

How the Mechanism Works

The emerging pattern is separation. Agent credentials are being pulled away from CI/CD secrets, developer credentials, generic service accounts, and broad environment variables so they can be scoped and reviewed on their own terms.

GitHub's Agents scope is the most concrete recent example. Repository administrators can define secrets and variables specifically for the Copilot cloud agent. Organizations can also define agent secrets centrally and restrict them to all repositories, to private and internal repositories, or to selected repositories. GitHub states that the Copilot cloud agent does not receive Actions, Codespaces, or Dependabot secrets and variables; it receives Agents secrets and variables.

That creates a meaningful boundary. A token meant for an agent to access an internal package registry does not need to live beside deployment credentials. A value used by an MCP server does not need to share the same namespace as a workflow secret.
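As a sketch of what that separation looks like administratively, the snippet below builds a request for defining an org-level agent secret restricted to selected repositories. The endpoint path and payload shape are assumptions modeled on GitHub's documented Actions org-secrets API (PUT /orgs/{org}/actions/secrets/{name}); GitHub has not necessarily published this exact route, so treat the names as placeholders and check the current REST documentation.

```python
API_ROOT = "https://api.github.com"

def build_agent_secret_request(org: str, name: str, repo_ids: list[int]) -> tuple[str, dict]:
    """Build the URL and payload for an org-level agent secret.

    NOTE: the '/orgs/{org}/copilot/agents/secrets/{name}' path is a
    hypothetical placeholder modeled on the documented Actions route
    '/orgs/{org}/actions/secrets/{name}'. Real secret uploads also
    require client-side encryption of the value with the org public
    key ('encrypted_value' and 'key_id' fields), omitted here.
    """
    url = f"{API_ROOT}/orgs/{org}/copilot/agents/secrets/{name}"
    payload = {
        # Restrict the secret to an explicit repository list, mirroring
        # the "selected repositories" visibility option described above.
        "visibility": "selected",
        "selected_repository_ids": repo_ids,
    }
    return url, payload

url, payload = build_agent_secret_request("acme", "INTERNAL_REGISTRY_TOKEN", [101, 202])
```

The point of the sketch is the shape of the control, not the exact route: an agent secret gets its own namespace and its own visibility scoping, independent of Actions, Codespaces, and Dependabot secrets.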
The agent's credential surface becomes separately visible.

GitHub also adds a limited MCP routing boundary. Secrets and variables prefixed with COPILOT_MCP_ are available only to MCP servers, while other Agents secrets and variables may be exposed as environment variables in the agent's development environment. This is not full per-tool authorization, but it is a concrete example of credentials being routed to the tool layer rather than broadly exposed.

A second boundary appears around workflow execution. GitHub warns that Actions workflows do not automatically run when Copilot pushes changes to a pull request unless they are approved. This matters because workflows may have access to Actions secrets. If unreviewed agent-written code could trigger privileged workflows automatically, the separation between agent credentials and CI/CD credentials would weaken.

GitHub's Agent Tasks REST API, released May 13, 2026, adds another access-management question. Copilot Business and Enterprise users can now start Copilot cloud agent tasks programmatically using classic PATs, fine-grained PATs, and OAuth tokens. GitHub App installation access token support is listed as coming later. That means enterprises need to govern not only what credentials an agent can use, but also who or what can start the agent.

Other platforms address the same problem at different layers. OpenAI Codex cloud environments make secrets available only during setup and remove them before the agent phase, separating environment preparation from agent execution. Google Cloud's Agent Identity uses per-agent identities, SPIFFE identifiers, IAM policies, Principal Access Boundary, VPC Service Controls, credential vaulting, and audit logging. Microsoft Entra Agent ID treats agents as directory-governed identity objects with lifecycle and access governance.

Analysis: Why This Matters Now

Agents are moving closer to real systems.
Once they can run commands, connect to tools, open pull requests, use package registries, interact with MCP servers, or act through APIs, access control becomes part of the agent's architecture.

The GitHub release matters because it turns a governance principle into an administrative surface. Security teams can now ask a more precise question: which secrets are available to the agent, in which repositories, and for what purpose? That is more useful than asking whether "AI has access" in a general sense.

It also changes the access review model. A CI/CD secret has one expected use pattern; an agent secret has another. CI/CD secrets may support deployment, artifact publishing, cloud operations, or environment promotion. Agent secrets may support coding tasks, test setup, MCP connections, or internal development resources. Mixing those credential classes makes review harder because the same secret store serves multiple operational purposes.
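The COPILOT_MCP_ routing boundary described earlier is one concrete way that review model gets simpler: the secret's name tells the reviewer where it can flow. The sketch below illustrates the documented rule as a simple partition of the agent's secret namespace; it is an illustration of the prefix convention, not GitHub's implementation.

```python
MCP_PREFIX = "COPILOT_MCP_"

def route_agent_secrets(secrets: dict[str, str]) -> tuple[dict[str, str], dict[str, str]]:
    """Partition agent secrets the way GitHub describes routing them:
    names prefixed with COPILOT_MCP_ reach MCP servers only, while the
    rest may surface as environment variables in the agent's development
    environment. Illustrative only -- not GitHub's actual code."""
    mcp_only = {k: v for k, v in secrets.items() if k.startswith(MCP_PREFIX)}
    env_exposed = {k: v for k, v in secrets.items() if not k.startswith(MCP_PREFIX)}
    return mcp_only, env_exposed

mcp_only, env_exposed = route_agent_secrets({
    "COPILOT_MCP_JIRA_TOKEN": "...",  # routed to MCP servers only
    "REGISTRY_TOKEN": "...",          # may appear in the agent environment
})
```

A reviewer auditing the agent's secret store can apply the same partition: anything outside the COPILOT_MCP_ prefix is part of the agent's general environment exposure and deserves the broader review.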

