We recently submitted formal comments to NIST's National Cybersecurity Center of Excellence on its concept paper for Software and AI Agent Identity and Authorization. Here, we’ll discuss what we said and why it matters.
If you’ve spent any time on LinkedIn or X (Twitter) in the past few months, you know that AI agents are no longer hypothetical. Organizations are deploying them to automate workflows, generate and maintain code, orchestrate business processes, and make decisions at scale. The productivity gains are real, but so are the risks.
NIST recognizes this. In February 2026, the NCCoE released a concept paper asking how existing identity standards should apply to AI agents operating across enterprise systems. The questions are foundational:
- How should agents be identified?
- How should they authenticate?
- What authorization models can constrain their behavior when their actions are inherently less predictable than traditional software?
We responded because these questions sit at the center of what we build. Our digital identity infrastructure (verifiable digital credential libraries, identity wallets, and credential platform services) was designed for exactly the kind of delegation, authorization, and trust challenges that agentic architectures introduce.
AI Agents Need Their Own Identity
The current default for agent authentication relies heavily on OAuth 2.0 delegation flows that require human-in-the-loop approval for each interaction. That model introduces per-interaction friction that will not scale.
We proposed a different approach: AI agents should possess their own identity and a set of capabilities granted programmatically for specific, human-understandable purposes. This provides a natural checkpoint for governance and review without per-interaction overhead.
Agent identity should be persistent and organization-bound, anchored to the hardware, software, and ownership context in which the agent operates. Capabilities, by contrast, should be ephemeral: task-scoped, time-bound, and issued dynamically based on workflow requirements. This separation allows organizations to maintain a stable identity for auditing and governance while adjusting permissions as responsibilities change.
Every agent performing work must have a unique identifier tied to its owning organization, its operating context, its authorized capabilities, and its delegation chain back to a responsible legal person. When actions carry consequences, attribution cannot be optional.
Defense in Depth Around Every Agent
AI agents differ from traditional software because the generative nature of AI makes their problem-solving inherently less predictable. That demands a fundamentally different security model.
We recommended building a zero-trust boundary around each agent, combining multiple layers of protection:
- Cryptographic identity anchored to validated hardware and software environments, aligned to FIPS 140-2/140-3 and drawing from NIST SP 800-63 assurance levels.
- Capability-based authorization where policies are represented as verifiable digital credentials in the agent's identity wallet, forming a portable, inspectable manifest of what the agent may do.
- Formally verifiable policy languages such as Cedar, or policies compiled to WebAssembly (WASM), enabling organizations to model and analyze the full permission envelope for each agent.
- Continuous monitoring of agent actions and outputs, analogous to intrusion detection but specifically tailored for agentic behavior.
Authorization policies should be cryptographically signed artifacts, portable, verifiable, and independently validated across systems. This is a significant improvement over ad hoc, resource-level permissions that are difficult to manage and reason about at scale.
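To illustrate why signed artifacts are easier to reason about than ad hoc permissions, here is a toy sketch using an HMAC over a canonicalized policy. This is an assumption-laden simplification: a real deployment would use an asymmetric signature (e.g., one suited to verifiable credentials) issued by the organization's credential platform, not a shared secret, and the key and policy contents below are invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key for demonstration only; production systems would
# use an asymmetric key pair so verifiers never hold signing material.
ISSUER_KEY = b"demo-issuer-secret"

def sign_policy(policy: dict, key: bytes = ISSUER_KEY) -> dict:
    """Wrap an authorization policy in a portable, signed artifact."""
    payload = json.dumps(policy, sort_keys=True).encode()
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"policy": policy, "signature": signature}

def verify_policy(artifact: dict, key: bytes = ISSUER_KEY) -> bool:
    """Independently validate the artifact before trusting its permissions."""
    payload = json.dumps(artifact["policy"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, artifact["signature"])

artifact = sign_policy({"agent": "agent-7f3a", "allow": ["invoices:read"]})
print(verify_policy(artifact))   # True: the signature checks out
artifact["policy"]["allow"].append("ledger:delete")
print(verify_policy(artifact))   # False: tampering is detected
```

The point of the sketch is the failure mode: any system can check the artifact without trusting the channel it arrived over, and any modification, however small, is detectable.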
When an agent requires capabilities beyond its current grants, the request should be structured as a permission escalation directed to a human approver or to another authorized agent. These escalation requests must be self-contained and interpretable out of context, supporting asynchronous review and audit.
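A "self-contained and interpretable out of context" escalation request might look like the following sketch. The field names and values are hypothetical, not a proposed schema; the idea is simply that everything a reviewer needs, including who is asking, what is requested, why, what the agent already holds, and how long the request stays actionable, travels inside the request itself.

```python
import json
import uuid
from datetime import datetime, timedelta, timezone

def build_escalation_request(agent_id: str, capability: str,
                             justification: str, current_grants: list) -> str:
    """Package a permission-escalation request so it can be reviewed
    asynchronously, with no extra context needed from the live session."""
    now = datetime.now(timezone.utc)
    request = {
        "request_id": str(uuid.uuid4()),
        "agent_id": agent_id,
        "requested_capability": capability,
        "justification": justification,      # human-understandable purpose
        "current_grants": current_grants,    # snapshot of existing permissions
        "requested_at": now.isoformat(),
        "expires_at": (now + timedelta(hours=4)).isoformat(),  # stale requests lapse
    }
    return json.dumps(request, indent=2)

print(build_escalation_request(
    "agent-7f3a",
    "ledger:append",
    "Post reconciled totals for the March close",
    ["invoices:read"],
))
```

Because the request is a plain, signed-able document rather than a live session, it can be routed to a human approver or another authorized agent hours later and still be audited on its own terms.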
Prompt Injection as a Presentation Attack
One of the more important parallels we drew in our comments concerns prompt injection.
Biometric systems have long dealt with presentation attacks: adversarial inputs designed to fool sensors into accepting spoofed identities. The field has developed formal frameworks for detecting and mitigating these attacks, codified in ISO/IEC 30107 (Presentation Attack Detection), evaluated through NIST's Face Recognition Vendor Test (FRVT) PAD program, and tested in operational settings through DHS's Remote Identity Validation Rally (RIVR).
In these systems, adversarial inputs are an expected condition. No single control is sufficient. Layered protections detect attacks and limit their downstream impact.
Prompt injection is the equivalent threat class for agentic systems, and we believe it should be treated with the same rigor. Even if an injection succeeds in manipulating an agent's reasoning, a well-implemented capability boundary limits what the compromised agent can actually do. Monitoring systems flag anomalous behavior, and cryptographic attribution ensures every action traces back to a responsible party.
As we stated in our submission: systems must be designed such that, even when adversarial inputs are successful, the resulting actions are constrained, observable, and attributable. Capability-based authorization, cryptographic identity, and auditability serve as functional analogs to PAD (Presentation Attack Detection), bounding the scope of potential harm and enabling detection and response.
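The containment argument can be sketched in a few lines. This is a deliberately minimal, hypothetical authorizer, not an implementation of any standard: a deny-by-default check against the agent's capability manifest, where every denial is itself an observable event for the monitoring layer.

```python
def authorize(action: str, capability_manifest: frozenset) -> bool:
    """Deny-by-default: an action executes only if the agent's wallet
    carries a credential granting it, regardless of what the model's
    reasoning was manipulated into requesting."""
    return action in capability_manifest

# Capabilities this (hypothetical) agent was actually granted
manifest = frozenset({"invoices:read", "reports:generate"})

# A successful prompt injection might make the agent *attempt* exfiltration...
attempted_actions = ["invoices:read", "email:send_external", "reports:generate"]

# ...but the capability boundary constrains what actually happens, and each
# denial is logged as an attributable signal for anomaly detection.
audit_log = [(action, authorize(action, manifest)) for action in attempted_actions]
for action, allowed in audit_log:
    print(f"{action}: {'allowed' if allowed else 'denied and flagged'}")
```

The injected instruction still fails even though the model was fully compromised, which is exactly the property the submission describes: constrained, observable, attributable.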
What We Recommended to NCCoE
We proposed that NCCoE build a demonstration architecture in which an AI agent is issued a cryptographic identity and verifiable credential-based capability grants, enabling it to access multiple enterprise systems across organizational boundaries, perform actions under delegated authority, enforce least privilege through credential-scoped permissions, and log all actions with cryptographic attribution.
The demonstration should compare traditional OAuth-based delegation against credential-based capability models, evaluating tradeoffs in scalability, security, and operational overhead.
This would move the conversation from theory to implementation, showing how existing standards like mDL/mdoc, W3C Verifiable Credentials, Decentralized Identifiers, and SD-JWT can be adapted for agent-native identity and authorization.
Why This Matters Now
The NIST concept paper reflects a shift in how the federal government views AI agents: not as extensions of existing automation, but as a new class of digital actor that requires identity governance comparable to what we apply to human users.
Organizations deploying AI agents today face real, immediate risks: data leakage, unauthorized actions in high-stakes environments, and unpredictable behavior that existing access controls were not designed to handle. The frameworks NIST builds next will shape how enterprises and agencies manage those risks.
SpruceID builds verifiable digital credential infrastructure designed for these challenges. Our libraries and credential platform services enable identity wallets for agents, credential-based access to regulated environments, and the binding of credentials to actions and authorization capabilities. We would welcome the opportunity to collaborate on demonstration projects in this space.
The full text of our comments is available upon request; please contact us if you’d like to learn more.
Building digital services that scale takes the right foundation.
About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.