The Microsegmentation Say-Do Gap: 99% Plan It, 9% Finish It, and Lateral Movement Keeps Winning
by William Toll on Apr 28, 2026
The microsegmentation paradox every security leader is living through
According to the Omdia 2025 microsegmentation survey of 352 enterprise security decision-makers, 99% of organizations are either implementing or planning microsegmentation. Only 9% report that more than 80% of their critical systems are protected by it. In the same 12 months the survey covered, nearly half of respondents experienced a lateral movement attack.
Read those three numbers again. Near-universal intent. Single-digit completion. And an attack technique that keeps finding its way across enterprise networks despite a decade of segmentation investment.
If you're a CISO, security architect, or network security leader, this is the say-do gap you're living inside. The intent is there. The execution data tells a different story. And the attackers know it. Closing this gap requires identity-based microsegmentation — a Zero Trust microsegmentation approach that enforces policy on device and user identity, not network location.
The Omdia 2025 microsegmentation survey at a glance (N=352)
- 99% of enterprises are implementing or planning microsegmentation
- 9% report more than 80% of critical systems currently protected
- Nearly 1 in 2 experienced a lateral movement attack in the past 12 months
- 57% rank microsegmentation as their top initiative to stop lateral movement
- 69% want identity-based microsegmentation as a top feature
- 44% cite device visibility as a critical gap
Get the full survey data: Download the Omdia microsegmentation survey report — 352 enterprise security leaders, with full breakdowns by manufacturing and healthcare.
We commissioned the Omdia survey to understand why a control that most frameworks recommend, many boards ask about, and most security teams want to deploy is so hard to finish. The answers aren't about budget, conviction, or vendor shortage. They're about architecture. The way most organizations have tried to implement microsegmentation over the last decade simply doesn't scale to the environments that security teams are actually defending today.
Here's what makes the say-do gap particularly challenging. It isn't a planning failure. It's an execution wall that hits somewhere between the pilot and the full rollout, right as policy complexity compounds faster than operational capacity. For organizations managing 20,000+ connected endpoints across IT, OT, and IoT, that wall is where most programs stall.
This post walks through how we got here, why the paradox is structural rather than cultural, and what an identity-based approach to microsegmentation changes about the math.
A brief history of network segmentation
Segmentation isn't new. The idea that you should separate trusted zones from untrusted ones, and keep sensitive systems from talking to everything else, predates the internet. What has changed is the mechanism, and each generation of mechanism has carried forward limitations that compound over time.
Physical isolation and the air gap
The original segmentation control was a separate wire. Operational technology networks, secure enclaves, and classified environments lived on physically isolated infrastructure. This approach still exists in the most sensitive corners of industrial and government networks, but it scales poorly, prevents legitimate business integration, and erodes the moment remote access or shared services enter the picture. Dragos observed 1,693 ransomware incidents impacting industrial organizations in 2024, an 87% year-over-year increase, which is a reminder that even environments long assumed to be isolated are now routinely reachable.
Perimeter firewalls and zone segmentation
As routable networks expanded, firewalls became the default segmentation tool. You'd define a handful of zones (internal, DMZ, guest, partner) and write rules between them. This worked when the enterprise was small and assets were stable. It broke when the asset inventory grew faster than the policy team, and when east-west traffic, not north-south, became the dominant attack path. NIST SP 800-207 makes the point directly: a perimeter-centric model cannot enforce least privilege on traffic that never crosses the perimeter.
VLANs and ACLs
To segment inside the perimeter, teams turned to VLANs and access control lists. These are still the dominant methods the Omdia survey respondents report using today. They're also the methods most closely associated with operational pain. VLAN sprawl, ACL drift, and the brittleness of IP-based rules are the reason 59% of survey respondents say segmentation has caused business disruption at some point. Understanding this history is essential to any conversation about network segmentation best practices today.
Network access control (NAC)
NAC promised to fix the posture problem: authenticate the endpoint before it gets on the network, then place it in the right VLAN. For pure IT fleets with 802.1X-capable endpoints, NAC delivered partial value. For IoT, OT, and medical device fleets, NAC broke down. Devices that can't host supplicants, can't be reliably fingerprinted, or can't tolerate authentication timeouts get dumped into quarantine VLANs or bypassed entirely. Multi-year NAC deployments became common, and many never reached full enforcement. (We wrote about the operational profile of that transition in our legacy NAC alternative comparison.)
Every one of these approaches was reasonable for the problem it was designed to solve. None of them were designed for a world where a single enterprise runs tens of thousands of heterogeneous endpoints across IT, OT, IoMT, and IoT, and where the attacker's first move after initial access is almost always lateral.
The rise (and limits) of first-generation microsegmentation
By the mid-2010s, the industry had accepted that zone-based segmentation couldn't stop east-west movement. The response was microsegmentation: policies that govern workload-to-workload and device-to-device communication, not just zone-to-zone. Two architectural patterns emerged.
The first was agent-based microsegmentation. Install a lightweight agent on every workload, use it to label the workload, and enforce policy at the host. This worked well for data center servers and, eventually, cloud workloads. It worked poorly for anything that couldn't host an agent, which is most of the OT and IoT fleet. The Omdia survey confirms this pattern: fabric overlay and agent-based methods rank at the bottom of what respondents are using today, while VLANs, ACLs, and host-based firewalls remain dominant.
The second was software-defined networking (SDN) and fabric overlays. These approaches promised policy abstraction from the physical network, but required significant re-architecture and, in many cases, vendor lock-in to a specific switching fabric. Adoption remained concentrated in greenfield cloud environments and a handful of large data center modernizations.
Both patterns delivered real value in their target environments. Both left most enterprise endpoints untouched. And both carried forward the operational pattern that has become the signature of the say-do gap: long deployment cycles, heavy policy authoring burden, and a continuing dependence on IP addresses, VLANs, and network constructs to express intent. That last point is where the paradox takes root. You can't write a durable policy for a device whose identity keeps changing shape on the network.
The survey result that captures this most directly: 22% of respondents have hands-on experience with modern microsegmentation. Seventy-eight percent do not. A decade into the category, most security teams still haven't touched the architecture that was supposed to replace the one they're operationally frustrated by.
Why most microsegmentation projects stall before they stop lateral movement
When we looked at the detailed execution data in the Omdia survey, four stall patterns showed up repeatedly. None of them are about willpower. All of them are about the structural overhead of legacy microsegmentation architectures.
The time tax
Per segmentation change, respondents report roughly 18 hours on change control, 15 hours on troubleshooting, 13 hours on policy testing, and 13 hours on policy creation. Add those up and a single meaningful policy change consumes close to 60 hours of skilled engineering time. Multiply that by a rollout of thousands of policies across a large enterprise and the execution wall becomes visible in the math alone. This is why 59% of respondents report segmentation has caused business disruption: every change is an expensive, high-risk exercise, and the blast radius of a mistake is production downtime or user and device downtime.
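The arithmetic behind that wall is easy to sketch. In the snippet below, the per-change hours come from the survey; the rollout size and the engineer-hours-per-year figure are hypothetical illustrations, not survey data.

```python
# Back-of-the-envelope model of the "time tax" using the Omdia survey averages.
# The rollout size and annual capacity below are hypothetical illustrations.

HOURS_PER_CHANGE = {
    "change_control": 18,
    "troubleshooting": 15,
    "policy_testing": 13,
    "policy_creation": 13,
}

def rollout_engineer_years(num_policy_changes: int,
                           hours_per_engineer_year: int = 1800) -> float:
    """Skilled-engineering years consumed by a rollout of the given size."""
    hours_per_change = sum(HOURS_PER_CHANGE.values())  # 59 hours per change
    return num_policy_changes * hours_per_change / hours_per_engineer_year

# A hypothetical rollout of 2,000 policy changes:
print(round(rollout_engineer_years(2000), 1))  # prints 65.6
```

At survey-average overhead, even a modest rollout consumes engineering capacity measured in decades of person-time, which is the execution wall in numeric form.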
The agent gap
Agent-based microsegmentation presumes you can install software on the thing you want to protect. That's reasonable for a general-purpose server. It's not reasonable for a CT scanner, a PLC, a building management controller, a networked printer, a security camera, or a barcode scanner in a warehouse. These devices can't host agents. They can't tolerate agents. Their vendors don't support agents. Any architecture that requires an agent as the enforcement point leaves this entire population outside the segmentation boundary. For most enterprises, that population is the majority of the endpoint count.
The IP-brittleness problem
Legacy microsegmentation policies are written against network constructs: source IP, destination IP, VLAN, subnet, port. Those constructs were never designed to express identity. When a device moves, gets a new DHCP lease, or lands behind a different switch, the policy either breaks or becomes dangerously permissive. Teams respond by writing broader rules to avoid outages, and the security value of the segmentation dilutes in real time. MITRE ATT&CK techniques like T1021 (Remote Services) thrive in exactly this kind of environment, where lateral connections look legitimate because the policy can't tell the difference between a valid workflow and a compromised endpoint using the same protocol.
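The brittleness can be shown with a toy exact-match policy model. The device names, addresses, and policy shape below are invented for illustration; they stand in for whatever rule format a given platform actually uses.

```python
# Toy contrast between an IP-keyed rule and an identity-keyed rule.
# Device names, IPs, and the rule format are hypothetical illustrations.

ip_rule = {"src": "10.20.30.41", "dst": "10.20.7.5", "port": 104}        # IP-based
identity_rule = {"src": "ct-scanner", "dst": "pacs-server", "port": 104}  # identity-based

def allowed(rule: dict, src_key: str, dst_key: str, port: int) -> bool:
    """Permit the flow only if it matches the rule exactly."""
    return rule == {"src": src_key, "dst": dst_key, "port": port}

# Before a DHCP renewal, both rules permit the CT scanner -> PACS flow.
assert allowed(ip_rule, "10.20.30.41", "10.20.7.5", 104)
assert allowed(identity_rule, "ct-scanner", "pacs-server", 104)

# After the renewal the scanner's address changes: the IP rule silently
# stops matching, while the identity rule still holds because the device
# identity is unchanged.
assert not allowed(ip_rule, "10.20.30.87", "10.20.7.5", 104)
assert allowed(identity_rule, "ct-scanner", "pacs-server", 104)
```

The operational response to the first failure mode is exactly what the survey describes: teams widen the IP rule to a subnet to avoid outages, and the policy's security value dilutes.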
The visibility gap
You can't protect what you can't see. Forty-four percent of Omdia respondents cite device visibility as a critical gap, and the operational consequence is straightforward: if your asset inventory is incomplete or stale, every segmentation policy you write is either too narrow (missing assets that should be covered) or too broad (granting access to assets you didn't know were there). Seventy percent of respondents agree that traditional segmentation is no longer sufficient. The visibility gap is a large part of why.
These aren't complaints; they're structural constraints that demand a different approach. The common thread across all four is that legacy microsegmentation architectures treat the network, not the device and the user, as the unit of policy. That choice is the root cause of the say-do gap.
Omdia research
Get the complete Omdia 2025 microsegmentation survey
352 enterprise security decision-makers. Full data across manufacturing and healthcare — deployment timelines, budget allocation, integration priorities, and the architectural shift closing the say-do gap.
Download the full report →

How identity-based microsegmentation resolves the paradox
There's a newer architectural category emerging in the microsegmentation market, and it's the one the survey data points to most clearly. Sixty-nine percent of respondents want identity-based microsegmentation as a top feature. The label matters less than what it actually does, so it's worth describing the category directly.
Policy based on identity, not location
In an identity-based model, the policy is written against the identity of the device and, where relevant, the user. Whether the rule is "CT scanners in Radiology can talk to the PACS server" or "engineering workstations in Plant 3 can talk to the historian," identity-based policies survive DHCP changes, VLAN moves, and physical relocations, because they aren't tied to an IP address or a VLAN tag. Identity here is composite: a blend of signals from endpoint discovery, directory services, EDR telemetry, CMDB entries, network behavior, and vendor-specific classifiers. The more signals, the higher the confidence in the identity, and the more granular the policy you can safely write.
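As a rough illustration of composite identity, confidence can be modeled as a weighted agreement score across signal sources. The signal names echo the list above, but the weights and the threshold idea are invented for this sketch, not drawn from any product.

```python
# Minimal sketch of composite identity confidence. The weights below are
# hypothetical; real classifiers weigh far more telemetry than this.

SIGNAL_WEIGHTS = {
    "endpoint_discovery": 0.25,
    "directory": 0.20,
    "edr": 0.20,
    "cmdb": 0.15,
    "network_behavior": 0.10,
    "vendor_classifier": 0.10,
}

def identity_confidence(observed_signals: set) -> float:
    """Weighted fraction of identity signals that corroborate a classification."""
    return sum(w for s, w in SIGNAL_WEIGHTS.items() if s in observed_signals)

# A device seen by discovery, EDR, and the CMDB scores 0.6 under these
# weights; a team might gate fine-grained policy on a higher threshold.
score = identity_confidence({"endpoint_discovery", "edr", "cmdb"})
print(round(score, 2))  # prints 0.6
```

The design point is the one the paragraph makes: more corroborating signals mean higher confidence, and higher confidence is what makes a more granular policy safe to enforce.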
Agentless coverage across IT, OT, and IoT
Because the architecture doesn't require software on the endpoint, it can cover the devices that legacy microsegmentation left behind. That includes IoT, OT, IoMT, and any legacy asset that can't tolerate or host an agent. This is where the "agent and agentless microsegmentation" conversation resolves: agents still have a role for workloads that can host them, but they no longer serve as the gating constraint on coverage. Agentless microsegmentation extends the same policy model to every asset the network touches.
Enforcement on existing infrastructure
The next architectural change: enforcement happens on the switches and access points already in place. No new inline appliances, no forklift upgrade of the switching fabric, no parallel network to run during migration. Policy is pushed to the existing enforcement points and evaluated there, which is what makes "weeks, not years" a credible deployment timeline rather than a marketing claim. Sixty-two percent of survey respondents agree modern solutions are easier to deploy than they were five years ago, and this architectural pattern is a large reason why.
Cloud-managed policy, continuous verification
Policy authoring, simulation, and distribution happen in a cloud-delivered control plane. That's what makes it operationally feasible to move from "one policy change every two weeks" to "dozens of policy changes per day with full simulation before push." It's also what makes continuous verification possible, which aligns the architecture to the core principle of NIST SP 800-207 zero trust architecture: never trust, always verify, at every session.
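The per-session verification loop from NIST SP 800-207 can be sketched as a default-deny lookup against an identity-keyed policy store. The identities and the policy table here are hypothetical placeholders, not a real control-plane API.

```python
# Sketch of "never trust, always verify": policy is re-evaluated at every
# session setup instead of being inherited from network placement.
# The identity names and policy entries are hypothetical illustrations.

POLICIES = {
    ("ct-scanner", "pacs-server"): True,   # sanctioned clinical workflow
}

def authorize_session(src_identity: str, dst_identity: str) -> bool:
    """Default-deny: allow only identity pairs with an explicit policy."""
    return POLICIES.get((src_identity, dst_identity), False)

assert authorize_session("ct-scanner", "pacs-server")          # permitted flow
assert not authorize_session("compromised-laptop", "pacs-server")  # denied
```

Two properties of the sketch carry the argument: the absence of a rule denies by default, and the check runs on every session rather than once at network admission.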
Identity-based microsegmentation is a category, not a product. Elisity is one implementation of this model, and we designed it specifically for the operational constraints described above: agentless coverage across IT, IoT, OT, and IoMT; enforcement on existing switches; identity-based policy authored in a cloud control plane; deployment measured in weeks. For a deployment example, see how Main Line Health rolled out identity-based microsegmentation across a multi-hospital health system. The point of this section isn't to argue for a single vendor. It's to name the architectural shift that the survey data is describing, and to make clear that "zero trust architecture microsegmentation" is operationally achievable in a way it wasn't five years ago.
Legacy vs. modern microsegmentation: side-by-side comparison
The fastest way to see why the say-do gap exists is to compare the architectural properties of each approach directly. The table below summarizes how the four dominant segmentation architectures behave across the dimensions that matter most to an enterprise security team.
| Dimension | VLANs / ACLs | NAC (802.1X) | Agent-based microseg | Identity-based microseg (agentless) |
|---|---|---|---|---|
| Policy basis | IP, subnet, VLAN tag | Authenticated device + VLAN assignment | Workload label (via agent) | Device and user identity (composite) |
| Coverage across IT / OT / IoT | All assets, but coarse | Strong for IT, weak for OT/IoT | Servers and cloud workloads only | All assets, IT / OT / IoT / IoMT |
| Granularity | Zone-to-zone | VLAN-to-VLAN | Workload-to-workload | Device-to-device, user-to-resource |
| Deployment impact | Network re-architecture | Multi-year rollout, posture agents | Agent rollout to every workload | Runs on existing switches, weeks |
| Operational overhead | High (rule sprawl, drift) | High (supplicant failures, quarantine churn) | High (agent lifecycle, workload labeling) | Low (cloud-managed, simulated changes) |
| Lateral movement prevention | Limited, zone-level only | Limited to authenticated fleet | Strong for covered workloads | Strong across all enforced assets |
| Framework alignment (NIST CSF 2.0 / Zero Trust) | Partial (PR.IR-01) | Partial (PR.AA-01, PR.IR-01) | Strong (PR.AA-05, PR.IR-01) | Strong (PR.AA-01, PR.AA-05, PR.IR-01, DE.CM-01) |
Reading across the rows, the pattern is consistent. Each generation of segmentation improved on the one before it inside its own scope, but none of the pre-identity approaches can express device-level policy across the full heterogeneous asset population that enterprise networks carry today. That gap is what the 9% completion number is measuring.
A practical path forward for any enterprise
Closing the say-do gap is a program decision, not a procurement decision. The organizations moving from "planning microsegmentation" into "reporting greater than 80% critical-system coverage" are following a recognizable pattern. Four steps, in this order.
1. Close the visibility gap first. Forty-four percent of respondents name this as the critical gap, and it shows up upstream of everything else. You cannot write durable segmentation policy against an incomplete inventory. Start by consolidating the identity signals you already have (directory, EDR, CMDB, DHCP, switch telemetry, passive discovery) into a single asset view with confidence scores. If your current tooling can't do this natively, treat it as the first capability requirement, not a nice-to-have.
2. Evaluate identity-based approaches explicitly. Sixty-nine percent of respondents want identity-based microsegmentation as a top feature, which means your peer group is already on this path. Evaluate architectures against three questions: Does it require agents on the endpoint? Does it require new inline appliances? Can it enforce policy on the switches you already own? Any "yes" to the first two should trigger a hard conversation about deployment timeline and coverage limits.
3. Align the program to a zero trust framework. Sixty-eight percent of respondents are pursuing microsegmentation as part of a broader zero trust initiative, and 60% cite regulatory compliance as a driver. Anchor the program to NIST SP 800-207 and, where applicable, NIST CSF 2.0. Framework alignment is what gets the program through budget cycles, audit conversations, and the inevitable question of "why now?" from executive stakeholders. It also prevents the common failure mode of a microsegmentation project that gets stranded as a network team initiative rather than a security program.
4. Demand integration across the existing stack. The top integration requirements from the survey are SIEM (67%), EDR (54%), SOAR (49%), and identity (43%). A microsegmentation platform that can't share identity context with the rest of your security stack is a platform that will become a silo within 18 months. For teams driving IT, OT, and SOC alignment, integration is the mechanism that makes cross-domain response workable. (Our guide to IT-OT-IoMT and SOC team alignment walks through the operating model in more depth.)
The paradox the survey describes is not permanent. It's a function of architectural choices made when the enterprise looked different from what it does now. Ninety-nine percent intent and 9% completion is what happens when a universally desired control collides with an architecture that can't keep up. Closing the gap is how lateral movement stops winning.
Want the full data set?
Download the complete Omdia 2025 microsegmentation survey report for detailed findings across 352 enterprise security leaders — including vertical breakdowns for manufacturing and healthcare, integration priorities, deployment timelines, and the features buyers rank most important.
Get the Omdia microsegmentation data →

Microsegmentation FAQ
What is microsegmentation?
Microsegmentation is a network security approach that enforces granular, policy-based access controls between individual devices, workloads, and users rather than between broad network zones. It limits lateral movement by ensuring each asset can only communicate with the specific resources its role requires. Modern microsegmentation expresses these policies in terms of identity (device type, user role, business function) rather than network location, which keeps policies durable as assets move across the network.
How does microsegmentation work?
Microsegmentation works by discovering and classifying every device and user on the network, assigning each one an identity, and then enforcing policies that define exactly which resources that identity is allowed to reach. Enforcement can happen in a host agent, an inline appliance, or (in the identity-based model) on existing switches and access points. Policies are continuously evaluated, so access is verified at each session rather than assumed from a one-time network placement.
What is the difference between microsegmentation and network segmentation?
Microsegmentation differs from network segmentation in granularity: network segmentation separates a network into a small number of broad zones (internal, DMZ, guest) using VLANs, subnets, or firewalls, while microsegmentation applies policy between individual devices, workloads, or users. Network segmentation limits where traffic can go; microsegmentation limits which specific assets can talk to which other specific assets. Microsegmentation is the control that stops lateral movement inside a zone that network segmentation has already defined.
How do you prevent lateral movement in an enterprise network?
Preventing lateral movement requires three capabilities working together: complete visibility into every device and user on the network, identity-based policies that enforce least privilege between assets, and continuous verification of those policies at every session. Traditional firewalls and VLANs alone can't do this because they operate on network location, not identity. Identity-based microsegmentation addresses all three, which is why 57% of Omdia survey respondents rank it as their top initiative to stop lateral movement.
Further reading
- Read the full Omdia 2025 microsegmentation survey
- Legacy NAC alternative: how identity-based microsegmentation changes the operating model
- The ultimate guide to IT, OT, IoMT, and SOC team alignment
- IEC 62443 in 2025: network segmentation requirements and changes
Infographic: The microsegmentation say-do gap at a glance
About the author
William Toll is Head of Product Marketing at Elisity, where he leads positioning and go-to-market strategy for the company's identity-based microsegmentation platform. He writes on zero trust architecture, network security modernization, and the operational realities of protecting IT, OT, and IoT at enterprise scale. Connect on LinkedIn.