
Gen AI Creates New Attack Vectors: Here’s How We Step Up the Fight

Alongside the many novel and potentially positive uses of Generative AI that have rushed to the surface of popular consciousness at breakneck speed comes a series of existential threats to cybersecurity that have not yet been widely discussed. Having built a wide variety of ML and AI systems over the years, including several within the realm of cybersecurity, I would like to share my thoughts on some of these threats with you, so that your teams and companies can take proactive defensive action before it’s too late.

As I stare at the blinking prompt of this word processor and gather my thoughts on the topic, the temptation to switch over to the GPT interface is very real. Not because it has barged its way into the forefront of mainstream culture faster than the internet did in the early 90s, nor because I don’t have enough to say about it; in fact, I probably have too much to say (I’ve been thinking about it for a long time, since well before I wrote this article in VentureBeat 5 years ago…). There is simply an engineering curiosity about what this machine might stitch together on the topic of GenAI + security that is very tempting to explore. What if the machine notices how we, its operators and guides, are so concerned about its impact on security? What if it starts behaving differently over time as it learns more about our own opinions and begins to adapt? While the idea might sound like it was pulled straight out of The Terminator franchise, it is actually a much older question that sits right at the intersection of technology and philosophy.

Asimov’s Third Law of Robotics:

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The explosion of Generative AI in the last few months will continue to have wide implications for humans, societies, and industries, and if we are really honest with ourselves, we don’t really have all the answers — just many, many new open questions.

One question that it will probably not ask itself, as claims of its amazing implications for cybersecurity rush to the press, is this: how will Generative AI create new risks and new vulnerabilities, and make the battle to secure our software, and the world that depends on it, that much harder, practically overnight?

Sometimes humans must ask themselves the right questions, so let's get started, shall we?

Cyber attacks are just a prompt away

Experts and seasoned veterans in the cybersecurity space have always known they are entrenched in an endless game of cat and mouse. Threat actors will always try to find ways to break the system, and defenders keep raising the bar, so cybersecurity has evolved over time to a certain degree, keeping old systems that have been hardened over the years relatively secure given the constant threats they continue to endure. In theory, this works okay for a “castle” (or monolithic application stack) that has stood its ground for centuries (or decades). But with our modern dependence on software and the cloud-native paradigm shift happening within these software systems, including the explosion of APIs, microservices, ML adoption, stream processing, you name it, there is already a lot for systems to keep up with: too much, given that 90% of cloud-native engineering leaders reported at least one security incident in their Kubernetes environments last year.

Enter GenAI, and we are quickly pulled into an even more dangerous environment, one whose threats are driven not only by the evil genius of expert human actors, but by machine-grade analysis of the entirety of human knowledge on the topic. Simple prompts describing how to attack these turbulent and under-protected systems are ripe to wreak havoc, and the ongoing advances in GenAI are becoming more accessible by the day.

Of course, there is going to be a new segment of attackers who will be able to find simple attack vectors through clever prompt engineering and by probing for common weaknesses in their targets. But advanced attackers can now embed this intelligence and these API calls in their existing scripts and modules, letting AI discover what is really going on inside your environment. They will then be able to improvise their attacks on the fly, at a speed and complexity of permutations that humans, and the systems we built without AI, have no experience combating.
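
To make that point concrete, here is a minimal sketch of the integration pattern described above: a few lines of glue code that forward the output of an existing scan or log-collection step to a hosted LLM and ask it to characterize the environment. This is an illustration, not real attack tooling; the model name, prompt, and sample input are placeholders, and it uses the OpenAI Python SDK simply because it is the most familiar interface.

```python
# Illustrative sketch only: forwarding recon-style output to a hosted LLM.
# Model name, prompt, and sample data are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_environment(scan_output: str) -> str:
    """Send raw scan output to an LLM and get back a plain-language
    summary of what the environment appears to be running."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You analyze infrastructure scan output and summarize "
                    "what services and versions appear to be exposed."
                ),
            },
            {"role": "user", "content": scan_output},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = "22/tcp open ssh OpenSSH 7.4\n8080/tcp open http Jetty 9.4.31"
    print(summarize_environment(sample))
```

The snippet itself is unremarkable, and that is the point: the barrier to wiring this kind of analysis into any existing script or module is essentially a handful of lines and an API key.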

This is going to result in an exponential increase in the quantity and quality of attacks happening on your environment, and needless to say, it will require different deterrence tactics. But what?

AI-powered phishing

As an industry, we continue to make many advancements against phishing attacks, but as we all know, at the end of the day, humans are one of the weakest links in that line of defense.

With intelligent machines being trained on more and more private data, it is becoming remarkably easy to have your favorite GPT interface come up with extremely personalized phishing attacks.

And this is not going to stay limited to email. Now that these machines can hook into multiple APIs and communication systems, it is increasingly easy to launch a multi-channel phishing attack. These phishing attacks are only the beginning: depending on how exposed your systems are, attackers can then move laterally to target your corporate assets as well as your product assets.

The infamous attack on Uber last year started as a phishing attack that took the attacker considerable effort to pull off before they were able to move laterally. Now, GPT is making it far easier to create deeply personalized phishing attacks, which, no matter how much preparation or training humans have, can always tap into our primal need for emotional and physical connection or security, especially when executed impeccably. Just this week, JPMorgan sent an email to its newly acquired First Republic customers stating explicitly that it will not call or email asking for personal information, nor will it ask customers to wire money via Zelle or any other service. Imagine how effective these phishing attacks can be when the phone numbers connect to support lines or chatbots that sound entirely real, or when the messages already contain someone’s real account numbers. We are just at the tip of the iceberg of what GPT can do, and while these consumer-facing examples may sound like something that wouldn’t happen to a seasoned, technology-savvy engineer, social engineering attacks remain one of the top choices for nefarious actors for one simple reason: they work. And they are about to work even better.

As I write about these new kinds of attacks, it is also worth sharing the ironic security incident that happened at OpenAI. Here’s what they said:

“We took ChatGPT offline earlier this week due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history. It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.”

A perfect segue into our next section:

Automatic Code Gen

If you’ve been tracking the security circuit for a while, whether directly or indirectly, you have likely been burnt by some vulnerability rooted in open source code. When the entire ecosystem contributes to and uses the same repository, it’s natural to trust a source that is usually built by some of the best experts in a given field. Unfortunately, when exploits are found in such popular repositories, they have an outsized impact across many industries, and attackers will leverage that as much as they can. When the Heartbleed bug hit OpenSSL, the widely used cryptography library, it took many years of effort for companies to fully contain its impact.

And today, there are many code analysis tools available to scan your open source projects and application images; it has become a practice of its own. But how quickly can we adapt the lessons learned from the past to tackle the new challenges of AI-generated application code?

There are many discussions about IP dilution, data access, and data exposure that are very important to address. But we don’t yet fully understand the scope and implications of the backdoors that using AI-generated code may open up. The proponents of “productivity” will argue that it’s all fine, but even without GenAI code, our static code analysis tools aren’t able to catch every security problem. How, then, do we expect to be more secure with source code that comes with dependencies and complexities we don’t directly understand? As adoption grows, you can expect more abstractions to settle in, and soon we’ll be in a state where we are simply passing along machine code for other machines to execute. Why bother with “human-readable” source code at all? That’s probably a tangent to explore some other time.
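
To illustrate the kind of thing static analysis can miss, here is a hypothetical example of the sort of “helpful” snippet a code generator could plausibly produce. It is a constructed illustration, not output from any specific model, and the paths and function names are made up; it looks reasonable at a glance, yet it carries two classic flaws that many default scanner configurations do not flag.

```python
# Hypothetical illustration (not output from any specific model) of
# plausible-looking generated code that hides classic flaws.
import os
import tarfile

UPLOAD_DIR = "/srv/app/uploads"  # illustrative path


def save_report(filename: str, data: bytes) -> str:
    """Save an uploaded report under UPLOAD_DIR.

    Flaw: the filename comes straight from the caller, so a value like
    "../../etc/cron.d/x" escapes UPLOAD_DIR (path traversal). A safer
    version would reject separators and "..", or use os.path.basename().
    """
    path = os.path.join(UPLOAD_DIR, filename)
    with open(path, "wb") as f:
        f.write(data)
    return path


def unpack_bundle(archive_path: str) -> None:
    """Unpack a user-supplied archive of reports.

    Flaw: tarfile.extractall() on untrusted input can write outside the
    destination directory via crafted member names (the CVE-2007-4559
    class of bugs). A safer version validates each member's resolved
    path before extracting it.
    """
    with tarfile.open(archive_path) as tar:
        tar.extractall(UPLOAD_DIR)
```

Both functions compile cleanly and would pass a casual review; whether a given static analyzer flags them depends entirely on its rule set, which is exactly the gap described above.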

The overarching point is that AI-generated code is not some panacea; in fact, it could be poisonous in ways we don’t yet understand. For example, if code-gen algorithms can track known exploits within the code they generate, the same algorithms could in the future be misused to generate adaptive attacks that won’t be easy to contain with traditional tools. As we refine this technology, we should deeply consider the implications for new attack vectors and for securing business-critical assets.

Summary

There is no question that GenAI is going to be transformational, but as with any shiny object, we should not treat it as some silver bullet.

This takes you back a decade or so, to when breakthroughs in neural networks suddenly became all the rage and the whole category of “AIOps” emerged.

And it’s fair: our industry desperately needs more automation! The systems we depend on daily have reached a level of scale and sophistication that we cannot tackle manually. But we cannot achieve that automation by just throwing in some transformer models and AI agents that barely understand the data they are operating on. It would be a disaster.

Many “AI products” are in the news these days, and cybersecurity companies are starting to hype how “ChatGPT” is now embedded in their products. What’s more urgently needed is to pause, understand the technology, and understand what new attack vectors are about to be unleashed.

As Sun Tzu says:

“If you know yourself but not the enemy, for every victory gained you will also suffer a defeat.”

GenAI can create novel attack patterns, which means that static tooling will become less effective. Now, more than ever, live application protection at runtime is essential. And the answer does not lie in creating some magical AI-powered “black box,” but in empowering and augmenting our human intelligence with security and defense tools that can take on modern attackers at the same level of sophistication (or, ideally, a level up!).

Operant is continuing to build out its massive vision of protecting modern applications at runtime, powered by a deep understanding of how every layer of the application is operating, alongside human-controlled security enforcements that protect the application from the inside out.

To learn more about what we’re doing and how it can protect your applications against new GenAI attack vectors, please reach out.