G360 Technologies

When AI Agents Get Their Own Secrets


A Credential Boundary Is Becoming Part of Agent Architecture

A developer assigns an AI coding agent a task: update a module, run tests, and check whether a private package dependency still works. To complete the task, the agent may need repository access, package registry credentials, development scripts, MCP tools, and sometimes internal APIs.

The question is no longer only what the agent can generate. It is what the agent can access while it is working.

That access model is starting to change. In May 2026, GitHub introduced a dedicated “Agents” secrets and variables scope for the Copilot cloud agent. The change separates agent credentials from GitHub Actions, Codespaces, and Dependabot secrets, creating a distinct credential surface for agent workloads.

AI agents are becoming operational actors inside software environments. They can inspect repositories, run commands, call tools, connect to MCP servers, and use credentials supplied by the organization. That makes credential management a governance problem, not only an engineering convenience.

The GitHub change is narrow but important. It does not solve agent identity across the enterprise. It does create a practical control boundary: credentials intended for an AI agent can now be managed separately from CI/CD, development environments, and dependency automation.

The larger pattern is that agent governance is moving from policy language to control surfaces. Agent access is becoming something organizations need to scope, review, audit, and revoke as its own category.

Context

Traditional automation credentials were designed for bounded systems. A CI/CD runner follows a pipeline. A service account supports a known application. A developer credential belongs to a human workflow.

AI agents are different because they can interpret a delegated goal, choose tools, retry failed steps, call APIs, interact with MCP servers, and change course during a session. Access is no longer tied only to a predetermined workflow.

That difference creates a credential-management problem. If agents reuse CI/CD secrets, developer-local credentials, broad service accounts, or shared automation tokens, organizations lose clean separation between what the credential was intended for and how it was actually used. The failure mode is not only secret leakage. It is credential inheritance, confused-deputy behavior, weak attribution, and unclear review boundaries.

This is why agent-specific credential boundaries matter. OWASPʼs Agentic Top 10 identifies identity and privilege abuse as a core agentic failure mode, including credential inheritance and confused-deputy risks. NISTʼs NCCoE concept paper on software and AI agent identity also identifies shared API keys and service account credentials as anti-patterns for agent deployments.

The issue is no longer theoretical. Platforms are starting to ship controls that separate agent credentials from other credential classes.

How the Mechanism Works

The emerging pattern is separation. Agent credentials are being pulled away from CI/CD secrets, developer credentials, generic service accounts, and broad environment variables so they can be scoped and reviewed on their own terms.

GitHub's Agents scope is the most concrete recent example. Repository administrators can define secrets and variables specifically for the Copilot cloud agent. Organizations can also define agent secrets centrally and restrict them to all repositories, to private and internal repositories, or to selected repositories. GitHub states that the Copilot cloud agent does not receive Actions, Codespaces, or Dependabot secrets and variables; it receives only Agents secrets and variables.

That creates a meaningful boundary. A token meant for an agent to access an internal package registry does not need to live beside deployment credentials. A value used by an MCP server does not need to share the same namespace as a workflow secret. The agentʼs credential surface becomes separately visible.
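The boundary amounts to a namespace partition: each workload class reads from its own store. A minimal sketch in Python, using hypothetical store and secret names (this is an illustrative model, not GitHub's actual API or data layout):

```python
# Hypothetical per-scope secret stores (illustrative only; not
# GitHub's actual API or data model).
SECRET_STORES = {
    "actions":    {"DEPLOY_KEY": "..."},          # CI/CD
    "codespaces": {"DEV_TOKEN": "..."},           # development environments
    "dependabot": {"REGISTRY_READ_TOKEN": "..."}, # dependency automation
    "agents":     {"PKG_REGISTRY_TOKEN": "..."},  # Copilot cloud agent
}

def secrets_for(workload: str) -> dict:
    """Each workload class sees only its own store: the Copilot cloud
    agent receives Agents secrets, never Actions, Codespaces, or
    Dependabot values."""
    return dict(SECRET_STORES.get(workload, {}))

agent_env = secrets_for("agents")
assert "DEPLOY_KEY" not in agent_env      # deployment credential stays out
assert "PKG_REGISTRY_TOKEN" in agent_env  # agent-scoped credential is in
```

Because the stores never merge, a review of agent access can enumerate one store rather than untangling shared namespaces.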

GitHub also adds a limited MCP routing boundary. Secrets and variables prefixed with COPILOT_MCP_ are available only to MCP servers, while other Agents secrets and variables may be exposed as environment variables in the agent's development environment. This is not full per-tool authorization, but it is a concrete example of credentials being routed to the tool layer rather than exposed broadly.
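The prefix works like a routing rule over one set of values. A small illustration of that partition (the COPILOT_MCP_ prefix is documented; the function and the example names are ours):

```python
COPILOT_MCP_PREFIX = "COPILOT_MCP_"

def route(agents_values: dict) -> tuple[dict, dict]:
    """Split Agents secrets/variables: COPILOT_MCP_-prefixed values go
    only to MCP servers; the rest surface in the agent's development
    environment. Illustrative sketch of the documented behavior."""
    mcp_env = {k: v for k, v in agents_values.items()
               if k.startswith(COPILOT_MCP_PREFIX)}
    agent_env = {k: v for k, v in agents_values.items()
                 if not k.startswith(COPILOT_MCP_PREFIX)}
    return mcp_env, agent_env

mcp_env, agent_env = route({
    "COPILOT_MCP_ISSUES_TOKEN": "...",  # visible only to MCP servers
    "PKG_REGISTRY_TOKEN": "...",        # visible in the agent environment
})
assert "COPILOT_MCP_ISSUES_TOKEN" not in agent_env
```

The point of the sketch is the asymmetry: a tool-layer credential never lands in the agent's general environment, even though both come from the same Agents store.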

A second boundary appears around workflow execution. GitHub warns that Actions workflows do not run automatically when Copilot pushes changes to a pull request; a person must approve them first. This matters because workflows may have access to Actions secrets. If unreviewed agent-written code could trigger privileged workflows automatically, the separation between agent credentials and CI/CD credentials would weaken.

GitHubʼs Agent Tasks REST API, released May 13, 2026, adds another access management question. Copilot Business and Enterprise users can now start Copilot cloud agent tasks programmatically using classic PATs, fine-grained PATs, and OAuth tokens. GitHub App installation access token support is listed as coming later. That means enterprises need to govern not only what credentials an agent can use, but also who or what can start the agent.
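Programmatic task creation is just an authenticated REST call, which is exactly why the initiating credential needs governance. A hedged sketch of building such a request in Python; the endpoint path and payload field are assumptions for illustration, not GitHub's documented shape, and the request is deliberately built but not sent:

```python
import json

# Sketch of starting a Copilot cloud agent task programmatically.
# ASSUMPTIONS: the "/agents/tasks" path and the "prompt" field are
# hypothetical; consult the Agent Tasks REST API docs for the real
# endpoint and payload. The token may be a classic PAT, fine-grained
# PAT, or OAuth token, per the changelog.
def build_agent_task_request(owner: str, repo: str, prompt: str, token: str):
    url = f"https://api.github.com/repos/{owner}/{repo}/agents/tasks"  # hypothetical
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    body = json.dumps({"prompt": prompt})  # hypothetical payload
    return url, headers, body

url, headers, body = build_agent_task_request(
    "acme", "billing", "Update the module and run tests", "ghp_example")
assert headers["Authorization"].startswith("Bearer ")
```

Whatever the exact shape, the governance question is the same: the token in that Authorization header can start an agent, so its issuance, scope, and rotation deserve the same review as the agent's own secrets.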

Other platforms address the same problem at different layers. OpenAI Codex cloud environments make secrets available only during setup and remove them before the agent phase, separating environment preparation from agent execution. Google Cloudʼs Agent Identity uses per-agent identities, SPIFFE identifiers, IAM policies, Principal Access Boundary, VPC Service Controls, credential vaulting, and audit logging. Microsoft Entra Agent ID treats agents as directory-governed identity objects with lifecycle and access governance.

Analysis: Why This Matters Now

Agents are moving closer to real systems. Once they can run commands, connect to tools, open pull requests, use package registries, interact with MCP servers, or act through APIs, access control becomes part of the agentʼs architecture.

The GitHub release matters because it turns a governance principle into an administrative surface. Security teams can now ask a more precise question: which secrets are available to the agent, in which repositories, and for what purpose? That is more useful than asking whether “AI has access” in a general sense.

It also changes the access review model. A CI/CD secret has one expected use pattern; an agent secret has another. CI/CD secrets may support deployment, artifact publishing, cloud operations, or environment promotion. Agent secrets may support coding tasks, test setup, MCP connections, or internal development resources. Mixing those credentials makes review harder because the same secret store then serves multiple operational purposes.

The larger governance issue is attribution. If an agent uses a shared token or an inherited developer credential, logs may show that the token was used, but not whether the action came from a human, a pipeline, a service account, or an AI agent acting within a delegated task. Separating agent credentials does not solve every attribution problem, but it gives investigators a clearer starting point.

This is also where OWASPʼs ASI03 category is useful. Credential inheritance, confused-deputy behavior, and privilege abuse are named agentic security failure modes. That gives security, platform, and audit teams a shared vocabulary for a risk that otherwise gets flattened into generic “AI access” language.

There is also an incident-response benefit. Google Cloudʼs Agent Platform Threat Detection documentation points to agent-linked abuse patterns such as anomalous metadata service access, unauthorized service-account API calls, suspicious token generation, excessive permission-denied events, and cross-project token activity. These detections depend on the ability to distinguish agent behavior from other workload behavior.

Implications for Enterprises

The first implication is inventory and ownership. Enterprises need to know where agents receive credentials today, whether those credentials are separate from CI/CD and developer secrets, and which repositories, tools, environments, or MCP servers can use them. Agent credentials should have clear owners, review cycles, expiration rules, and revocation paths.
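Inventory with owners and review cycles lends itself to governance-as-code. A minimal sketch, with field names that are our assumption rather than any standard schema:

```python
from datetime import date

# Illustrative agent-credential inventory records (field names and
# values are assumptions, not a standard schema).
CREDENTIALS = [
    {"name": "PKG_REGISTRY_TOKEN",
     "owner": "platform-team",
     "scope": "selected-repos",
     "last_review": date(2026, 5, 1)},
]

def overdue_reviews(creds, today, max_age_days=90):
    """Flag agent credentials whose last access review is older than
    the policy window, so they surface in the next review cycle."""
    return [c["name"] for c in creds
            if (today - c["last_review"]).days > max_age_days]

# 123 days since the last review exceeds the 90-day window.
assert overdue_reviews(CREDENTIALS, date(2026, 9, 1)) == ["PKG_REGISTRY_TOKEN"]
```

Even a spreadsheet-sized record like this answers the ownership questions above: who owns the credential, where it applies, and when it was last reviewed.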

The second implication is evidence. The useful governance evidence is not simply that a secret exists. It is which agent can use it, where it can be used, whether it is separate from CI/CD secrets, whether workflow execution requires approval, whether access is logged, and whether suspicious credential behavior can be detected.

The third implication is architecture. Static environment secrets may not be enough for agent workloads. Over time, enterprises may need task-scoped or tool-scoped capabilities that are issued only when needed, expire quickly, and preserve a record of why access was granted. NISTʼs NCCoE concept paper points in this direction by identifying shared API keys and generic service-account credentials as a gap for software and AI agent identity.
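The task-scoped, short-lived pattern can be sketched in a few lines. This is an illustrative pattern, not any vendor's API; the grant fields are our assumptions:

```python
import secrets
import time

# Sketch of task-scoped, short-lived credential issuance (illustrative
# pattern, not a vendor API): each grant records why it was issued,
# is bound to one task and tool, and expires on its own.
def issue_task_token(task_id: str, tool: str, reason: str, ttl_s: int = 300):
    return {
        "token": secrets.token_urlsafe(16),
        "task_id": task_id,
        "tool": tool,
        "reason": reason,                   # preserved for audit
        "expires_at": time.time() + ttl_s,  # short-lived by default
    }

def is_valid(grant, now=None):
    """A grant is usable only before its expiry."""
    return (now if now is not None else time.time()) < grant["expires_at"]

grant = issue_task_token("task-42", "pkg-registry", "dependency check")
assert is_valid(grant)
assert not is_valid(grant, now=grant["expires_at"] + 1)
```

The design choice worth noting is that revocation becomes the default: access disappears when the TTL lapses, and the recorded reason gives reviewers the "why" that static environment secrets never carry.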

The final implication is fragmentation. GitHub, OpenAI, Google, Microsoft, Anthropic, and MCP-based tooling are addressing the problem through different control layers. Enterprises should not expect one common operating model yet. A practical near-term approach is to separate agent credentials where platform controls exist, restrict agent access by repository or environment, and define review rules before agent usage spreads across more tools.

Risks and Open Questions

The first risk is boundary erosion. GitHubʼs separation between Agents secrets and Actions secrets is useful, but GitHub also warns that automatically running workflows from Copilot-authored pull requests can expose Actions secrets to unreviewed code. If administrators remove approval gates, agent-generated changes may interact with privileged CI/CD workflows in ways that weaken the intended separation.

The second risk is overexposure inside the agent environment. GitHub masks secrets in logs, and MCP-prefixed values are routed to MCP servers, but the documentation does not describe fine-grained per-command or per-tool access control for all agent environment variables.

The third risk is inherited human privilege. When agents reuse developer credentials, they may inherit broad permissions that were granted for human workflows. Even if the human account is protected by MFA, the agentʼs runtime use of that credential may not preserve the same interactive control point.

The fourth risk is automating agent initiation without the right identity model. GitHubʼs Agent Tasks REST API makes it possible to start Copilot cloud agent tasks programmatically, but GitHub App installation access token support is not yet available. Until that support exists, organizations need to evaluate how PATs and OAuth tokens are governed for agent-task creation.

The last open question is granularity. Repository-level and organization-level secret scoping is a major improvement over shared credential stores, but enterprise requirements may move toward per-tool credentials, per-task credentials, short-lived tokens, approval-based privilege elevation, policy-bound MCP access, and automated access reviews tied to agent identity.

Further Reading

  • GitHub Changelog, “More flexible secrets and variables for Copilot cloud agent.”
  • GitHub Docs, “Configuring secrets and variables for Copilot cloud agent.”
  • GitHub Docs, “Configuring settings for GitHub Copilot cloud agent.”
  • GitHub Changelog, “Start Copilot cloud agent tasks via the REST API.”
  • Google Cloud Docs, “Agent Identity overview.”
  • Google Cloud Docs, “Agent Platform Threat Detection overview.”
  • OpenAI Developers, “Cloud environments, Codex web.”
  • OpenAI Developers, “Agent approvals and security, Codex.”
  • Anthropic Docs, “Claude Code settings.”
  • Anthropic Docs, “Connect Claude Code to tools via MCP.”
  • OWASP, “Top 10 for Agentic Applications.”
  • NIST NCCoE, “Accelerating the Adoption of Software and AI Agent Identity and Authorization.”
  • Microsoft Learn, “What is Microsoft Entra Agent ID?”
  • Microsoft Learn, “Governing Agent Identities.”