This week, the cybersecurity world descends on San Francisco for RSAC — and for the first time in a while, that feels exactly right.
RSAC has been held in San Francisco for years, but for much of that time the location was incidental. The conference could have been anywhere. The city was a backdrop, not a context. The conversations happening inside Moscone felt largely separate from the ones happening across town in SoMa or Mission Bay.
That's no longer true. San Francisco is now the center of gravity for AI development, and AI has become the defining challenge for cybersecurity. The two worlds aren't just converging at a conference; they're converging in practice, in the infrastructure enterprises are running today. RSAC being in San Francisco in 2026 carries a weight it hasn't carried before. The urgency in the hallways this year is real, and it's pointed in a clear direction: how do we secure AI systems that are already in production, already autonomous, and already consequential?
That question is what brought us to launch the Operant AI Infrastructure Partnership Program this week.
Enterprise AI infrastructure has never been faster. Purpose-built silicon is delivering thousands of tokens per second. Agentic workflows run continuously and autonomously. Sub-second reasoning is a production requirement for organizations across healthcare, financial services, government, and enterprise SaaS.
But as the capabilities of AI infrastructure have grown, so has the attack surface. And the security controls most enterprises rely on weren't designed for this environment.
Not long ago, securing AI largely meant protecting training data or locking down an API endpoint. That's no longer sufficient.
Modern enterprise AI means autonomous agents making decisions in real time — with live access to sensitive tools, databases, and external services through MCP-connected systems. It means complex, multi-step agentic workflows executing at a speed and scale that makes after-the-fact auditing an inadequate safeguard.
The organizations running these systems need security that operates at the same layer and the same speed as the inference itself.
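As a concrete illustration of what "security at the same layer and speed as inference" means, here is a minimal sketch of an in-band policy gate that sits in the agent's tool-call path and blocks a disallowed action before it executes, instead of flagging it in an audit log afterward. All names here (`ToolPolicy`, `guarded_call`) are hypothetical and illustrative, not Operant's actual API.

```python
# Illustrative sketch only: an inline policy check in the tool-call path.
# None of these names correspond to a real Operant interface.
from dataclasses import dataclass, field


@dataclass
class ToolPolicy:
    # Allow-list of tools this agent may invoke.
    allowed_tools: set[str] = field(default_factory=set)

    def check(self, tool: str) -> None:
        # Fail closed: anything not explicitly allowed is denied.
        if tool not in self.allowed_tools:
            raise PermissionError(f"tool '{tool}' not permitted for this agent")


def guarded_call(policy: ToolPolicy, tool: str, args: dict, registry: dict):
    """Enforce policy inline, then dispatch the real tool."""
    policy.check(tool)          # runs *before* any side effect occurs
    return registry[tool](**args)


# Usage: an agent restricted to a single read-only lookup tool.
registry = {"lookup": lambda key: f"value-for-{key}"}
policy = ToolPolicy(allowed_tools={"lookup"})

print(guarded_call(policy, "lookup", {"key": "invoice-42"}, registry))
try:
    guarded_call(policy, "delete", {"key": "invoice-42"}, registry)
except PermissionError as e:
    print(e)  # the write attempt is blocked before it runs
```

The design point is that the check lives in the same call path as the action, so a denied tool call never produces a side effect that auditing would have to unwind later.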
Our view at Operant AI has always been shaped by a straightforward principle: security needs to be built into the infrastructure, not layered on after the fact.
We've seen this pattern play out before — in mobile, in edge AI, in cloud. In each case, the organizations that got security right did so early, when the architecture was still being established. The inference layer is at that point now. It's where models interact with the world, where prompts become actions, where data moves through live systems. Getting security right here — at this layer, at this stage — matters.
That's the thinking behind the AI Infrastructure Partnership Program.
The partnership program is a technical integration initiative that embeds Operant's runtime defense capabilities directly into the inference infrastructure enterprises are already running. Working with leading AI infrastructure providers, we're making runtime defense available as an integrated layer in the inference stack.
Our first infrastructure partner is Tenstorrent. As Aniket Saha, VP of Product Strategy at Tenstorrent, noted, the shift to always-on agents means infrastructure needs to be both performant and open by design. By integrating Operant's runtime defense directly into the Tenstorrent inference stack, joint customers get high-throughput performance alongside real-time visibility and compliance, without having to choose between them.
The properties that make agentic AI valuable — autonomy, broad tool access, continuous operation — are the same properties that introduce new categories of risk. A misconfigured agent with access to internal systems creates real exposure. Prompt injection attacks against MCP-connected workflows don't require sophisticated adversaries; they exploit structural access that's often poorly governed.
Enterprises need the ability to observe what their AI systems are doing as they run, and to enforce policy in real time. That's what Operant delivers, and through this program, we're making that capability a native part of the infrastructure our customers are building on.
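To make "observe and enforce in real time" concrete, here is a hedged sketch in which every agent action emits a structured event as it happens, and an allow/deny decision is made in the same step. The rule, the event shape, and the names (`record_and_enforce`, `AUDIT_LOG`) are all assumptions for illustration, not a description of Operant's product.

```python
# Illustrative sketch: observation and enforcement in one step.
# The event schema and the sample rule are invented for this example.
import time

AUDIT_LOG: list[dict] = []


def enforce(event: dict) -> bool:
    # Example real-time rule: no agent may touch the secrets namespace.
    return not event["resource"].startswith("prod/secrets/")


def record_and_enforce(agent: str, action: str, resource: str) -> bool:
    event = {"ts": time.time(), "agent": agent,
             "action": action, "resource": resource}
    allowed = enforce(event)
    event["decision"] = "allow" if allowed else "deny"
    AUDIT_LOG.append(event)  # observation happens as the action runs, not after
    return allowed


print(record_and_enforce("billing-agent", "read", "prod/invoices/42"))  # True
print(record_and_enforce("billing-agent", "read", "prod/secrets/db"))   # False
```

Because the decision and the audit record are produced together, the log reflects what was actually permitted at execution time rather than being reconstructed afterward.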
The AI Infrastructure Partnership Program is open to inference platform providers, AI infrastructure companies, and MCP-compatible application vendors.
To learn more or explore a partnership, reach out at alliances@operant.ai or visit operant.ai.