Taking Back Control of Application Security by Rethinking Trust Boundaries for the L7+ World

A lot has changed in the world of applications. As data has become central to how customers derive value from products, application delivery has shifted from shrink-wrapped boxes to as-a-Service offerings and APIs. As products and teams scale, yesterday's monoliths and VM-based apps are giving way to the new world of microservices and containers. Applications are moving to the cloud, where they end up depending on third-party services and APIs - including cloud provider services - over which teams have far less control than anything that used to be owned in-house.

All of these changes have implications for the domain of security - especially network security. Just as in the physical world, security as a problem can be distilled into safeguarding things of value, like data and code, within trusted boundaries, safe from untrusted actors. The problem is: how do we do that in today's highly complex, dynamic, and untrustworthy world?

To Trust or Not to Trust: Back when the world was simpler

What can be trusted versus what cannot has traditionally been defined in terms of the network: an IP address, a port, a VLAN, or where data and code were deployed within the network. When applications were deployed within data centers and had a few monolithic components, security and networking teams knew exactly what needed to be secured and how to secure it. The mechanism was to lock everything that needed to be secured behind a north-south firewall, i.e., securing trusted services within a trusted network controlled by the security and networking teams, safe from untrusted clients outside the network.

The first trigger for changing this model was the arrival of virtualization, when the monolith got decoupled into tiers - Web, application services, and database - packed into VMs. Network traffic now flowed between internal components as well as serving external requests. To stop lateral movement through this internal traffic, trust boundaries had to change: firewalls followed the virtualized application tiers and were deployed between them as east-west firewalls.

Increasing complexity and scale causes major traffic jams

As applications moved to the cloud, the first wave of this movement followed a lift-and-shift approach: Web, App, and DB tiers were moved to the cloud without any refactoring, deployed within individual VPCs. Firewalls followed this transition, deployed between VPCs and allowing traffic only from 'trusted' networks, IP addresses, or VLANs. Yet the limits of this architecture are easy to see as the apps, and the traffic that flows through them, have scaled. East-west firewalls end up hairpinning traffic - adding an extra hop for network traffic to reach the destination service - causing performance and scalability issues.

A New Trust Boundary

As apps evolve and are built natively in the cloud with a microservices-based, containerized architecture, the trust boundary is shifting, making traditional firewalls obsolete. Networks within container platforms like Kubernetes are flat, with constantly changing IP addresses, so rules that allow or deny traffic between application containers in terms of IP addresses alone are infeasible. The definition of an entity that can be trusted needs to evolve in this new world beyond IP addresses to incorporate the identity of the app and the end user themselves.
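To make the shift concrete, here is a minimal sketch in Python of allow rules keyed on workload identity (service name and namespace, similar in spirit to Kubernetes label selectors or SPIFFE IDs) instead of IP addresses. The service names and identity scheme are hypothetical, not from any specific product:

```python
# Sketch: identity-based allow rules instead of IP-based ones.
# Service names and the namespace/name identity scheme are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str        # e.g. "payments"
    namespace: str   # e.g. "prod"
    ip: str          # ephemeral in Kubernetes; never used for trust decisions

# Trust is expressed as (caller identity -> callee identity), never as IPs.
ALLOWED = {
    ("prod/web", "prod/payments"),
    ("prod/payments", "prod/ledger-db"),
}

def identity(w: Workload) -> str:
    return f"{w.namespace}/{w.name}"

def is_allowed(caller: Workload, callee: Workload) -> bool:
    # The same decision holds even if pod IPs change on every reschedule.
    return (identity(caller), identity(callee)) in ALLOWED
```

Because the decision ignores `ip` entirely, a pod rescheduled onto a new address keeps exactly the same trust relationships - which is the property IP-based rules cannot provide on a flat, churning network.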


Trusted communication used to be defined by VLAN-based micro-segments within traditional networks, in terms of which IP address could talk to which other IP address. In the new world of microservices and APIs, these trust relationships need to be defined in terms of which service can talk to which other service, API, or data store, moving the point of enforcement for allow/deny rules from a central firewall to distributed points closer to the app itself. This is easier said than done.

Operationalizing the new trust boundary at larger-than-life scale

Kubernetes clusters are known to have hundreds, if not thousands, of pods and containers that talk to each other. The old world had a few known VMs and servers that needed to be secured from known untrusted actors, and static network-based rules sufficed.


In the new world, what needs to be secured is itself unknown due to a number of very common factors, including:

  1. The scale of the microservices and APIs deployed
  2. Unknown internal APIs that provide access to sensitive data stores or unprotected legacy services
  3. Unknown external APIs invoked, including third-party API calls
  4. Unknown external data stores used to store company and customer data

Further, defining which service can talk to which other service or API is challenging because the underlying data and traffic-flow patterns constantly change. Before creating trusted micro-segments that prevent lateral movement by untrusted entities, it is important to first build an inventory of all the deployed microservices, APIs, and data stores, and then understand how data flows between components and what data is being exchanged.
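As a rough sketch of that inventory step, runtime flow records can be folded into a catalog of services, the APIs each service pair uses, and the data categories each destination receives. The record fields (`src`, `dst`, `api`, `data_tags`) are assumptions chosen for illustration, not the schema of any particular tool:

```python
# Sketch: building a service/API inventory and data-flow map from flow
# records, as a first step toward defining micro-segments.

from collections import defaultdict

def build_inventory(flow_records):
    """flow_records: iterable of dicts with 'src', 'dst', 'api', 'data_tags'."""
    services = set()
    edges = defaultdict(set)       # (src, dst) -> set of APIs observed on that edge
    sensitive = defaultdict(set)   # dst -> data categories seen flowing into it
    for r in flow_records:
        services.update([r["src"], r["dst"]])
        edges[(r["src"], r["dst"])].add(r["api"])
        sensitive[r["dst"]].update(r.get("data_tags", []))
    return services, dict(edges), dict(sensitive)
```

The resulting edge map is exactly the raw material needed for the "which service can talk to which" question: every edge not in it is a candidate for default-deny.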

Treating Identity-Centric Microsegmentation as a data problem

Defining trusted micro-segments in this constantly changing new world has become a data problem: all of the runtime network and identity telemetry, at scale, needs to be operationalized and curated into a list of allow/deny rules continuously at runtime - something that is impossible to do manually.
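One way to picture that curation step: observed caller/callee pairs from telemetry are promoted to allow rules only once they recur, with everything else denied by default. This is a toy sketch under that assumption; a real pipeline would add human review, anomaly scoring, and identity verification rather than a bare count threshold:

```python
# Sketch: continuously turning observed runtime traffic into allow rules.
# The min_count threshold is a stand-in for "consistently observed" traffic.

from collections import Counter

def derive_allow_rules(observed_flows, min_count=3):
    """observed_flows: iterable of (caller, callee) pairs from runtime telemetry."""
    counts = Counter(observed_flows)
    # Only promote pairs seen repeatedly; rare pairs stay denied by default.
    return {pair for pair, n in counts.items() if n >= min_count}

def enforce(caller, callee, allow_rules):
    # Default-deny: anything not explicitly derived into the allowlist is blocked.
    return (caller, callee) in allow_rules
```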

Microsegmentation in the new world is also a multi-layered problem, compared with the simpler segments of the old world, such as "the Web tier can talk to the App tier, but not the DB tier." As applications are built as microservices, APIs have become the main conduit through which data is sourced and manipulated by end users, both external and internal. When defining a trusted segment, the entire chain of communication - to, say, charge a credit card - traversing multiple layers of APIs and database queries, including the end developer or app making the API call, should be incorporated, embedding both user-level and app-level identity across the API and data flows.
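A toy illustration of that chain-level view: the policy admits a whole path, end-user or calling-app identity included, rather than one network hop at a time. The policy shape, service names, and API paths here are all hypothetical:

```python
# Toy sketch: authorize an entire call chain (caller -> API -> service
# -> data store) as one unit, not hop by hop.

CHAIN_POLICY = {
    # (caller_identity, api, downstream_service, data_store)
    ("checkout-app", "POST /charge", "payments", "cards-db"),
}

def chain_allowed(caller_identity, hops):
    """hops: the ordered path, e.g. ["POST /charge", "payments", "cards-db"]."""
    return (caller_identity, *hops) in CHAIN_POLICY
```

Note that the same downstream path is denied when a different caller attempts it - the hop-by-hop rules would look identical, but the chain-level check distinguishes them.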

This is also why policy as code is so important: having a few rules defined within a firewall or two is very different from having rules defined for hundreds or thousands of constantly changing microservices, APIs, and data stores. Security teams need to adopt policy-as-code practices where access rules are treated as code - tracked for drift and changes, reviewed for misconfigurations, and able to be rolled back - while being deployed in an app-aware manner where policies adapt to changing application access patterns.
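Treating rules as code also makes drift mechanically detectable: compare the declared policy held in version control against what is actually enforced at runtime. A minimal sketch, with allow rules modeled as caller/callee pairs:

```python
# Sketch: detecting drift between declared (in-repo) and running policy.
# The (caller, callee) rule format is illustrative.

def diff_policies(declared, running):
    """Both arguments are sets of (caller, callee) allow rules."""
    return {
        "missing": declared - running,     # declared but not actually enforced
        "unexpected": running - declared,  # enforced but never reviewed: drift
    }

def has_drift(declared, running):
    d = diff_policies(declared, running)
    return bool(d["missing"] or d["unexpected"])
```

A rollback then amounts to redeploying the declared set, just as reverting a commit restores application code.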

Beyond the data, how organizations can secure the fluid deployments of today

In the old world, deployment and delivery of applications was closely controlled by internal IT and networking teams. This meant that the applications were deployed according to network security practices defined by these teams. However, delivery of apps and access to infrastructure is more fluid in the modern world. 

Development and DevOps teams can create any number of VPCs in the cloud, deploy infrastructure like Kubernetes clusters, invoke serverless functions, and ship apps on a weekly basis. Security teams are too often in catch-up mode, lacking basic visibility into which microservices and APIs are present, which users and developers are using the clusters, which APIs they are calling, and what data is being accessed and manipulated.

Cloud native security needs a proactive approach that keeps pace with software development and deployment by creating organization-wide guardrails for API and data access in ways that are frictionless for development teams. Policy-as-code patterns should allow security policies to be deployed alongside application code, adapting to changes within the application automatically - for example, applying security best practices to each new API added without increasing the workload of application development teams.
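As a sketch of what such a guardrail could look like, a deploy step might merge organization-wide secure defaults into every API spec a team ships, so any security setting a team does not mention falls back to the safe choice. The setting names here are hypothetical:

```python
# Sketch: organization-wide secure defaults applied to each new API at
# deploy time. Setting names are hypothetical.

SECURE_DEFAULTS = {
    "auth_required": True,
    "rate_limited": True,
    "pii_logging": False,
}

def apply_guardrails(api_spec):
    """Fill in any security setting the team did not set explicitly."""
    merged = dict(SECURE_DEFAULTS)
    merged.update(api_spec)  # explicit team choices win, but remain reviewable
    return merged
```

Teams only write the settings they want to override; everything else is secure by default, which is what keeps the guardrail frictionless.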


While the goal of deploying faster and more often may seem adversarial to the goal of making applications more secure, it doesn’t have to be. Indeed, the future of application security is in seamlessly bringing together new capabilities of visibility and enforcement across the network, API and data layers so that security and development teams have what they need to understand and secure their applications at runtime without being delayed by the traffic jams caused by old world solutions.