Public Comment on Docket NIST-2025-0035 · Submitted March 4, 2026
Security Considerations for Artificial Intelligence Agents
A 12-page public comment responding to the NIST CAISI Request for Information on AI agent security. Proposes six policy recommendations grounded in patent-pending Cryptographic Runtime Governance.
Key Recommendations to CAISI
Mandate sealed reference states for all autonomous AI deployments
Require continuous runtime measurement against sealed baselines
Adopt tiered verification levels (self-attested → portal-enforced → third-party verified)
Mandate offline verifiability for air-gapped and DDIL (denied, disrupted, intermittent, and limited-bandwidth) environments
Standardize artifact formats using existing cryptographic primitives
Require privacy-preserving disclosure for cross-boundary attestation
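To make the first two recommendations concrete, a minimal sketch of sealing a reference state and continuously measuring against it, using only standard cryptographic primitives (SHA-256). The artifact names and contents are illustrative, not drawn from the comment itself:

```python
import hashlib
import json

def seal_reference_state(artifacts: dict) -> str:
    """Seal a set of policy/model artifacts into one baseline digest.

    Each artifact is hashed with SHA-256, then the sorted name->digest
    map is serialized canonically and hashed again, yielding a single
    sealed value that changes if any artifact changes.
    """
    digests = {name: hashlib.sha256(data).hexdigest()
               for name, data in artifacts.items()}
    canonical = json.dumps(digests, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def measure_runtime_state(artifacts: dict, sealed_baseline: str) -> bool:
    """Continuous measurement: recompute the digest and compare it
    against the sealed baseline captured at deployment time."""
    return seal_reference_state(artifacts) == sealed_baseline

# Seal at deployment, then measure at runtime.
baseline = seal_reference_state({"policy.yaml": b"allow: read-only"})
assert measure_runtime_state({"policy.yaml": b"allow: read-only"}, baseline)
# Any drift from the sealed baseline is detected.
assert not measure_runtime_state({"policy.yaml": b"allow: everything"}, baseline)
```

A production scheme would add a signature over the sealed value so the baseline itself cannot be silently replaced.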
Topics Addressed
Threat Landscape
Analysis of runtime integrity threats including behavioral drift, policy circumvention, and post-hoc evidence fabrication in agentic AI systems.
Security Practices
Sealed policy artifacts, continuous integrity measurement, and signed enforcement receipts as foundational security practices.
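A signed enforcement receipt can be sketched as follows. This is an illustration only: the key, agent identifier, and field names are hypothetical, and an HMAC stands in for the asymmetric signature a real enforcement portal would use:

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key; a real portal would sign with an asymmetric key.
SECRET_KEY = b"portal-enforcement-key"

def issue_receipt(agent_id: str, measurement: str, decision: str) -> dict:
    """Issue a receipt binding an agent's integrity measurement to an
    enforcement decision, authenticated with an HMAC over the body."""
    body = {"agent": agent_id, "measurement": measurement,
            "decision": decision, "ts": int(time.time())}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_receipt(receipt: dict) -> bool:
    """Verify a receipt offline by recomputing the MAC over its body."""
    body = {k: v for k, v in receipt.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])

receipt = issue_receipt("agent-7", "abc123", "allow")
assert verify_receipt(receipt)
receipt["decision"] = "deny"  # post-hoc tampering invalidates the receipt
assert not verify_receipt(receipt)
```

Because verification needs only the receipt and the verification key, receipts of this shape remain checkable in air-gapped settings, which is the property the offline-verifiability recommendation targets.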
Assessment & Measurement
Tiered verification framework progressing from self-attestation through portal enforcement to independent third-party verification.
Environment Controls
Portal architecture as a zero-trust enforcement boundary. Runtime governance for cloud, edge, and air-gapped deployments.
Additional Considerations
Privacy-preserving selective disclosure, FRAND licensing commitment, and alignment with existing NIST framework vocabulary.
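Privacy-preserving selective disclosure can be approximated with salted per-field hash commitments, similar in spirit to SD-JWT-style schemes. The field names below are illustrative, not taken from the comment:

```python
import hashlib
import secrets

def commit_attestation(fields: dict) -> tuple:
    """Commit to every attestation field with a salted SHA-256 hash.

    Returns (public commitments, private opening data). The verifier
    sees only the commitments until fields are explicitly disclosed.
    """
    commitments, opening = {}, {}
    for name, value in fields.items():
        salt = secrets.token_hex(16)
        commitments[name] = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
        opening[name] = (salt, value)
    return commitments, opening

def disclose(opening: dict, names: list) -> dict:
    """Reveal only the selected fields (salt and value); the rest stay hidden."""
    return {n: opening[n] for n in names}

def verify_disclosure(commitments: dict, disclosed: dict) -> bool:
    """Cross-boundary verifier checks each disclosed field against its commitment."""
    return all(
        hashlib.sha256(f"{salt}:{value}".encode()).hexdigest() == commitments[name]
        for name, (salt, value) in disclosed.items()
    )

commitments, opening = commit_attestation(
    {"policy_version": "3.2", "deployment_region": "us-east"})
proof = disclose(opening, ["policy_version"])
assert verify_disclosure(commitments, proof)
assert "deployment_region" not in proof  # undisclosed fields remain private
```

The salts prevent a verifier from brute-forcing low-entropy fields it was not shown, which is what makes the disclosure privacy-preserving rather than merely partial.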