SecurityBrief India - Technology news for CISOs & cybersecurity decision-makers
DigiCert launches AI trust tools for agents & content

Wed, 6th May 2026
Sean Mitchell, Publisher

DigiCert has introduced an AI trust architecture to secure AI agents, models and digital content, adding new features to its DigiCert ONE platform.

The framework is intended to give organisations cryptographic ways to verify AI systems and the material they produce. A content authenticity product is available immediately, while separate tools for AI agent trust and AI model trust are in preview.

The launch comes as companies face growing questions over how to verify autonomous software, protect AI models from tampering and determine whether digital media is genuine. DigiCert's approach centres on a single trust layer that can be applied across the AI lifecycle, from model development and deployment to the use of agents and the publication of content.

The first of the new tools, Content Trust Manager, is available now. It lets organisations sign and verify digital content using the C2PA standard, creating a record designed to show where content came from, how it was created and whether it has been altered.

The tool is aimed at limiting misinformation, brand impersonation and fraud linked to AI-generated media. The same approach can also be extended to the point of capture through cryptographic signing and timestamping on trusted devices.

That device-level function is delivered through DigiCert Device Trust Manager. Imaging device manufacturers can embed C2PA certificates into products such as cameras, microscopes and scanners, allowing content to be signed and timestamped at source.
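Conceptually, a C2PA-style provenance record binds a hash of the content to signed claims about its origin. The sketch below illustrates that idea only: real C2PA manifests are CBOR/JUMBF structures signed with X.509 certificates, whereas here JSON stands in for the manifest encoding and an HMAC key stands in for a device or publisher certificate. All names are illustrative, not DigiCert's API.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Illustrative sketch: HMAC replaces certificate-based signing, and JSON
# replaces the real C2PA manifest format. Not a C2PA implementation.

def make_manifest(content: bytes, signing_key: bytes, generator: str) -> dict:
    """Bind a content hash, creator claim and timestamp into a signed record."""
    claim = {
        "claim_generator": generator,  # e.g. a camera or editing tool
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict, signing_key: bytes) -> bool:
    """Check both the signature and that the content itself is unaltered."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(signing_key, payload, hashlib.sha256).hexdigest(),
    )
    content_ok = manifest["content_sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and content_ok
```

Signing at the point of capture, as described above, simply means the key lives on the device, so the manifest exists before the content ever leaves it.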

Agent controls

For AI agents, DigiCert's preview offering covers discovery, identity, governance and lifecycle management. It is designed to let organisations authenticate, authorise and audit autonomous systems by issuing cryptographic identities and applying policy-based controls.

DigiCert described AI agents as a new digital workforce operating across enterprise systems. The controls are intended to make actions attributable and subject to security and compliance rules.
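The pattern described here, issuing an agent a verifiable identity, checking each action against policy, and recording the decision, can be sketched in a few lines. This is a generic illustration under stated assumptions, not DigiCert's implementation: a production system would issue certificate-based or workload-identity credentials, while an HMAC-signed credential and an in-memory audit log stand in below.

```python
import hashlib
import hmac
import json

# Illustrative sketch: an HMAC-signed credential stands in for an X.509 or
# workload-identity credential. ISSUER_KEY is a hypothetical org-wide key.

ISSUER_KEY = b"org-issuer-key"

def issue_identity(agent_id: str, allowed_actions: list[str]) -> dict:
    """Issue a signed credential binding an agent to a policy."""
    body = {"agent_id": agent_id, "allowed_actions": sorted(allowed_actions)}
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return body

audit_log: list[dict] = []  # every decision is recorded for later review

def authorize(credential: dict, action: str) -> bool:
    """Authenticate the credential, apply policy, and record the decision."""
    body = {k: v for k, v in credential.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    authentic = hmac.compare_digest(
        credential["signature"],
        hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest(),
    )
    allowed = authentic and action in credential["allowed_actions"]
    audit_log.append({"agent": credential["agent_id"], "action": action, "allowed": allowed})
    return allowed
```

Because every decision lands in the audit log whether or not the action was allowed, actions remain attributable to a specific agent identity after the fact.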

Model checks

A separate preview product for AI models focuses on secure packaging, signing and runtime validation. It is designed to establish a verifiable chain of custody for models from development through deployment.

That would allow organisations to check that models have not been altered, are running in trusted environments and are handling sensitive data securely, including when infrastructure is distributed or operated by third parties.
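A chain of custody of this kind rests on a simple primitive: sign a digest of the model artifact at packaging time, then refuse to load any artifact whose signature fails at deployment time. The sketch below shows that primitive only, with assumed function names and an HMAC key standing in for a publisher's certificate chain; it is not DigiCert's product.

```python
import hashlib
import hmac

# Illustrative sketch: real model signing uses certificate chains and often
# transparency logs; HMAC with a shared publisher key stands in for that.

def sign_model(model_bytes: bytes, publisher_key: bytes) -> str:
    """At packaging time, sign a digest of the model artifact."""
    digest = hashlib.sha256(model_bytes).digest()
    return hmac.new(publisher_key, digest, hashlib.sha256).hexdigest()

def verified_load(model_bytes: bytes, signature: str, publisher_key: bytes) -> bytes:
    """At deployment time, refuse to load a model whose signature fails."""
    digest = hashlib.sha256(model_bytes).digest()
    expected = hmac.new(publisher_key, digest, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("model artifact failed integrity check")
    return model_bytes  # only now safe to deserialize into the runtime
```

Checking the signature before deserialization is the point: a tampered artifact is rejected even when the storage or serving infrastructure is run by a third party.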

DigiCert framed the launch as a response to what it sees as a gap in existing AI oversight. Many organisations still rely on fragmented, manual processes rather than continuous verification of identity, integrity and provenance.

"AI has created a new trust challenge," said Amit Sinha, Chief Executive Officer, DigiCert. "Organisations are relying on agents, models, and content they can't always verify. At DigiCert, our purpose is to give people confidence in the security, privacy, and authenticity of their digital interactions. With our AI Trust solution, we help organizations confirm what's real, secure, and approved so AI can be used with confidence."

DigiCert's content authenticity push aligns with broader industry work around C2PA, a technical standard backed by groups including Adobe, Microsoft and Google. The announcement places that standard alongside DigiCert's existing trust and certificate management products as part of a wider AI governance strategy.

Research firms and security providers have increasingly focused on provenance and verification as generative AI tools make it easier to create convincing text, images and other media at scale. That has raised concerns among businesses about false attribution, unauthorised brand use and the integrity of media used in regulated or sensitive settings.

DigiCert also pointed to risks around AI models themselves, including supply chain exposure and intellectual property concerns. In enterprise environments, autonomous agents present a separate challenge because they can move across systems and perform actions without direct human intervention.

Jennifer Glenn, who covers security and trust research at IDC, linked those issues to a broader rethink in corporate AI controls.

"AI is forcing organizations to rethink trust from the ground up," said Glenn. "Bringing cryptographic assurance to AI systems gives enterprises the ability to independently verify identity, integrity, and provenance of content, enabling these organizations to build trustworthy AI at scale."

DigiCert said its platform is used by more than 100,000 organisations, including 90% of the Fortune 500.