Company

​​​​Operant’s Full Circle Approach to AI & Security

You might have heard that AI is a thing now, right? I mean, it is getting a lot of attention, and as any engineer now knows, Attention is All You Need.

But in the mad rush to incorporate not just GenAI, but basically any sort of predictive modeling that could ever be considered in the loosest sense to be AI, into every product from toothbrushes to customer support chatbots, the security of those features hasn’t really been top of mind. It’s understandable in this era of innovation at hyperspeed that security isn’t the number one priority, but as the shine wears off of the new features, teams will soon be faced with all sorts of implications and fallout from the extreme speed with which they were adopted.

A speedy integration of AI into a cloud-native application environment, whether through APIs like Open AI’s or via open source GPT models that are hastily scrubbed and popped into a cluster with fingers crossed, introduces critical security threats to the entire application environment. It also exacerbates open gaps that already exist within Kubernetes - such as over-permissioning of identities and a lack of traffic and access management between services and APIs - adding new sources of unauthorized access and opportunities for data exfiltration and data poisoning that are far faster and more dangerous now that GenAI can so readily be used to inundate systems with malicious access attempts at a scale never seen before in the history of technology.

At Operant, we have always embraced the two-sided coin that AI presents - Unprecedented innovation on one side and unprecedented risk on the other. In a way, it’s a familiar landscape, the wild wild west that our CTO, Priyanka and I started tackling together almost a decade ago. At the time, mobile applications malware was a massive problem for Android, where new malicious actors were misusing customer data and the underlying hardware, creating new attack vectors that hadn’t been seen before (sound familiar?). Back then, we built deep operating system instrumentation and on-device ML models that would deter and protect customers from shapeshifting malware. The parallels between that era’s problem space and today’s fight against a rapidly changing AI enemy are a very useful starting point, yet since then, as AI and ML have advanced, so has our vision.

What we mostly see in the market today are a million variations of the easiest/most common GenAI manifestations with some security context strapped on for various forms of “security co-pilots” or small-scale switches like LLM-blocking browser plug-ins. When you dig deeper, the majority of these are reincarnations of task lists, chatbots or perhaps older policies being repurposed for a new use case. While the spectrum of AI security tooling is totally nascent at this early stage and will obviously evolve over time, at Operant, we have already been busy building the core product offering to solve some of the biggest problems at the intersection of AI and security for quite some time.

We see the intersection of AI and Security in a more holistic and comprehensive way, incorporating and addressing three important needs that impact every modern technology organization:

  1. Security For AI: How do we protect all the work and IP that is going into AI models and how do we protect the rest of the cloud-native stack from the new threats that the integration of these models and APIs introduce?
  2. Security Against AI: How do we harden cloud-native applications inside and out so that the increased attack surface and unprecedented scale of AI-based attacks doesn’t penetrate through the apps to the most precious assets?
  3. Security With AI: You need AI to fight AI, and that truth will only become more pronounced as AI adoption and innovation grows. We’re proud to combine innovations from academia and pragmatic business use cases to use AI in ways that actually work to protect and secure every layer of modern applications from kernel to APIs.

Security for AI

Protecting AI Apps and APIs

With the incredible speed of AI adoption and feature development across every industry from legal to fintech to healthcare, securing the AI that is making its way into these highly sensitive environments is a critical and urgent problem that so far has taken a backseat to development speed.

There is a common dynamic in which engineering and platform teams are reluctant to adopt new security tools due to bad experiences in the past (or present) with sec tooling that adds massive technical debt while hindering the speed of development. That unfortunate (yet understandable) dynamic is only exaggerated when it comes to the even faster expectations placed on AI integrations into their products and systems. We’ve heard from some exasperated engineers that every day feels like a race they’re losing. Yet, at some point very soon, the many critical threats that come along with insecure AI implementations are going to start costing businesses far more than they can afford.

So, the problem that we set out to solve is simple: how do we secure and isolate the AI presence in a cloud-native application stack without hindering development speed - in fact, while simultaneously making it go faster? That is where Operant’s automatic threat modeling and AI security controls come into play.

Operant is now able to identify and model AI-based attack vectors in real-time based on structured data from the live application, prioritize them based on criticality, and manage identity and access enforcements within the application internals, so that apps and services that rely on AI APIs for their business critical functionality can safely engage with these 3rd parties, while controlling the risk they pose to the rest of the application stack. You can think of it as an AI firewall that sits inside the application, managing every interaction the rest of the cluster has with the AI component, and actively blocking unwanted and nefarious attacks such as data exfiltration, data poisoning, model manipulation, and malicious injection.

But what about the companies that have already decided that AI APIs are too high risk, and have instead taken open source GPT models, scrubbed them, and then put them into use with their own training and prompts? Surely that must be secure enough, right? But that kind of static pre-prod scrubbing and code scanning is only as effective as our knowledge of specific malicious code is at the time the scrubbing is done. Sure, it catches some problems, but what about the ones we didn’t realize were a problem until later? That’s why adding runtime scanning to catch malicious behaviors needs to also be part of the full security picture. Only runtime scanning can catch and stop zero day vulns as they awaken, and with the use of GenAI to create infinite permutations of malicious injection attempts, instant runtime quarantines and remediations are the only way to effectively reduce the risks of using open source GPT models within your application environment.  

Zero-trust for AI

The integration of AI models and APIs into Kubernetes application environments that already lack internal identity and access controls is particularly dangerous. Even before the AI race that is happening now, attackers who entered Kubernetes application stacks through social engineering or other forms of credential theft, were already able to wreak havoc by exploiting the overpermissioning by default that is so common to Kubernetes architecture, to move laterally through open application internals to precious assets and PII data.

The same Zero Trust concepts around applying least privilege access controls across every service and API in the cluster in order to reduce lateral movement and shut down lateral attack threat vectors is only magnified in the context of AI additions to the environment. The reality is that most teams don’t fully understand the AI models they are importing or the API data and 3rd party identities that they are allowing through the WAF into their smushy application internals, so hardening the access controls between those components and the rest of the service environment is absolutely pivotal to securing AI.

While many teams have the impression that applying Zero Trust policies in a cloud-native context either doesn’t work (because IP-based rulesets don’t scale in ephemeral K8s) or is simply too much effort for an already overworked team to maintain, Operant’s policy recommendations based on real application data along with its drift-free enforcement of Zero Trust policies based on relevant cloud-native identities (such as API endpoints) makes applying Zero Trust in K8s - both to protect AI models and the PII data that is usually available through a few service hops -  extremely fast and simple without breaking your apps.

In the end, Zero Trust only works if it is actually reasonable to implement and maintain, and Operant’s focus on simplicity and use-case driven policies that are designed specifically for the K8s application environment enables security teams to achieve a new level of Zero Trust security without the eng hours or extra projects.

Security against AI

Fighting AI-powered Attacks

Like the broad set of cyber attacks that have been growing in the last year, AI-powered attacks are just exploding in volume and velocity across every domain. Attacks like phishing, DDoS, and credential stuffing are only increasing in scale and dynamism as AI is much faster at exploring the permutations and combinations of attack paths compared to previous manual approaches.

New AI attack vectors like deep fakes challenge the very notions of identity and authentication mechanisms that we take for granted as being real. Imagine if these deepfakes were extended to machine identities at scale in Kubernetes and cloud environments where they were introduced through the software supply chain. It would become impossible to distinguish between what is real and what is not. Malicious deep fake identities could lurk for months within cloud environments making use of over-permissioned roles and evasive techniques to perform all sorts of malicious actions from resource exhaustion to data exfiltration.

Operant's runtime protection shields applications with guardrails that protect Kubernetes clusters from the inside out at every layer from the kernel to identities to APIs. With our least privilege and quarantining enforcements distributed across each layer, Operant’s in-depth defensive protections actively deter AI based attacks even as they evolve faster than ever before.

Securing the AI Supply Chain

When building innovative AI applications, “supply chain” is not a concept you want to be worrying about. Ask your AI or data engineer about supply chain and they’ll give you a blank stare.  

The AI supply chain is going to become a bigger focus area for delivering new AI experiences. In our view, it’s a “producer <> consumer” scenario: whether you are producing (building or serving) AI based models or you are consuming or calling AI models, “AI Supply Chain” is a problem most security teams will have to think about as more and more AI-models, AI-based applications and 3rd party AI APIs make their way into the core business applications.

When the engineering teams build and deliver new AI models, the most popular vehicle these days is to do that through serializing models as a K8s service. What we are finding in the wild is that these models and underlying libraries contain many open doors and vulnerabilities to let attackers take over your application. With our Operant Maestro Runtime Security Engine, we are now also detecting that many popular AI training frameworks have critical vulnerabilities in them. This is creating major roadblocks in delivering new innovative solutions. This is also an area where Operant’s remediation and guardrails play a powerful role, where you can bring in runtime protection to create the shield against the risky attack vectors.

There are similar concerns on the model consumption side. The most common one we see is that teams often don’t have full insight and inventory on what APIs are out in the wild and which of their services and APIs are calling other API endpoints, especially the new “AI-powered” endpoints. Most commonly, teams need the right egress controls from their K8s applications, but often, you want to understand the details of the data and interactions going on between these APIs.

We are so excited to solve these AI supply chain challenges with our powerful runtime capabilities so that companies across the globe can safely innovate without compromising their own IP or customers.

Security With AI

You need to use AI to fight AI, and at Operant, we are extremely strategic about how and when we apply AI following principles of accuracy, usefulness, transparency, and safety.

As part of our mission to secure the modern world, we pride ourselves on being an example of using AI to solve critical problems in completely new ways, at the unique and vast scale required by the complexity of the modern cloud-native environment. At the same time, we carefully avoid “slapping on some AI” that we don’t believe is necessary in order to achieve some buzz-word brownie points in the very noisy world that is flooded with such loose claims and terminology.

Customers of Operant enjoy detailed demos and walkthroughs of how and where we are using AI in the product, along with clear examples of expected outcomes and proactive guardrails customized to the risks and use cases that their teams care about most. Our sales engineering team works closely with customer teams to make sure the product setup is done correctly and that the customer SOC and DevSecOps teams are empowered to take the reins - all within a single onboarding call.

Operant’s entire Full Circle Application Protection Platform and its Operant Maestro Runtime Protection Engine are available with a single-step zero instrumentation install and zero integrations. It takes less than 5 minutes to see what Operant can do for you in a sample staging cluster, and then, the sky's the limit!

To experience the power and simplicity for yourself, or to learn more about how Operant can secure your entire Kubernetes application environment from kernel to APIs, including your AI models and AI APIs, sign up for a free trial.

Operant is proud to be SOC 2 Type II Compliant and a contributing member of CNCF and the OWASP Foundation.