Elisity Blog

Why Zero Trust Requires Microsegmentation: A Former Forrester Analyst Weighs In

David Holmes shared his research-backed perspective on zero trust microsegmentation at RSAC 2026 in San Francisco.

Most enterprises know they need microsegmentation. Most have tried it. And most walked away from stalled or failed projects. The gap between intent and execution has defined this market for years, and it is not shrinking on its own. At RSAC 2026, we sat down with someone who watched that failure cycle from the inside: David Holmes, former Forrester principal analyst and author of The Forrester Wave: Microsegmentation Solutions, Q3 2024.

What Holmes shared in our conversation was blunt, research-backed, and ultimately encouraging. The technology has changed. The approach has changed. And the organizations that walked away from microsegmentation may need to reconsider.

Who is David Holmes, and why does his perspective matter?

David Holmes spent five years as a principal analyst at Forrester Research, where he covered Zero Trust, SASE, microsegmentation, and network security architecture. During that tenure, he authored The Forrester Wave: Microsegmentation Solutions, Q3 2024, the most comprehensive public evaluation of the microsegmentation market. He spoke with dozens of enterprise security leaders at Global 2000 organizations, examining what worked, what failed, and why. Before Forrester, Holmes held technical and security roles at F5 Networks and Shape Security, building deep expertise in application security and network defense. He is now CTO for Application Security at Imperva, a Thales company.

In short, Holmes evaluated every major microsegmentation vendor on the market, interviewed the customers who deployed those solutions, and published the findings. His perspective bridges the gap between vendor claims and operational reality.

Watch the full conversation


Why most microsegmentation implementations failed

Holmes did not sugarcoat what he observed during his years evaluating the microsegmentation market at Forrester. "Literally most of the implementations failed or stalled," he told us. This was not a sampling problem. Holmes spoke with dozens of enterprise clients, across industries, who had attempted microsegmentation and walked away from it.

The reasons formed a consistent pattern. First-generation microsegmentation approaches demanded that organizations map every data flow, install agents on every endpoint, or restructure network architecture using VLANs and ACLs. These projects measured timelines in years, not months. They consumed enormous operational resources. And they frequently collapsed under their own complexity before delivering any real security improvement.

This experience was widespread, not isolated. The Forrester Wave: Microsegmentation Solutions, Q3 2024 described the broader market reality directly: microsegmentation projects were historically "prone to failure, usually due to complexity." Most organizations still rely on VLANs and ACLs as their primary segmentation approach — tools that lack the granularity needed for true microsegmentation. Few have successfully moved beyond those legacy methods.

The result was a generation of security leaders who internalized a simple conclusion: microsegmentation is theoretically important but practically impossible. Holmes watched this conclusion take hold across the market, and his final act at Forrester was to challenge it.

What changed: the identity-based evolution

The core shift, according to Holmes, is in how policies are defined and enforced. Traditional microsegmentation required organizations to understand and map every communication flow between assets before writing rules. That approach breaks at scale, especially in environments with thousands of IoT, OT, and medical devices that communicate in ways no one has fully documented.

Holmes described a different model: horizontal policies anchored to identity. Rather than mapping every flow between every asset, you define policies based on what a device is and what it should be allowed to do. His example was straightforward: "What if we had a horizontal policy that said video cameras could only speak to the cloud?" That single policy, applied by device identity rather than IP address, eliminates an entire category of lateral movement risk without requiring anyone to map individual traffic flows.

This is what Holmes meant when he described the Zero Trust principle at the heart of microsegmentation: "replacing implicit trust with explicit policy." The evolution from network-centric to identity-based microsegmentation changes both the technical approach and the operational burden.
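Holmes's camera example can be sketched in a few lines of code. This is a minimal, hypothetical illustration of "replacing implicit trust with explicit policy" — the `Device` and `Flow` types and the policy table are invented for this sketch and do not represent any vendor's API. The key point it demonstrates: the policy is keyed to what the device *is* (its type), not where it sits (its IP), and anything without an explicit allow is denied.

```python
from dataclasses import dataclass

# Hypothetical device record: identity attributes, not network location.
@dataclass
class Device:
    name: str
    device_type: str  # e.g. "video-camera", "workstation"

@dataclass
class Flow:
    src: Device
    dest_zone: str  # e.g. "cloud", "datacenter", "campus"

# One horizontal policy, anchored to device identity rather than IP address:
# video cameras may speak only to the cloud.
POLICY = {
    "video-camera": {"cloud"},
}

def allowed(flow: Flow) -> bool:
    """Default-deny: a flow is permitted only if an explicit policy allows it."""
    permitted_zones = POLICY.get(flow.src.device_type, set())
    return flow.dest_zone in permitted_zones

cam = Device("lobby-cam-01", "video-camera")
print(allowed(Flow(cam, "cloud")))       # True: camera-to-cloud is explicitly allowed
print(allowed(Flow(cam, "datacenter")))  # False: lateral movement toward servers is denied
```

Note that one policy line covers every camera on the network, present and future; no one had to map individual traffic flows between cameras and other assets.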

Traditional vs. identity-based microsegmentation

| Dimension | Traditional / First-Gen Microsegmentation | Identity-Based Microsegmentation |
| --- | --- | --- |
| Policy anchor | IP addresses, VLANs, subnets | Device identity, function, and context |
| Agent requirement | Agents required on every endpoint | Agentless; enforced at the network edge |
| Deployment timeline | Months to years | Weeks |
| IoT/OT coverage | Poor; most IoT/OT devices cannot run agents | Comprehensive; identity-based policies cover all device types |
| Policy maintenance | Manual rule updates as network changes | Policies follow identity; adapt as assets move or change |
| Failure mode | Stalls under complexity before reaching production | Incremental rollout; policies can deliver value from day one |

When we asked Holmes whether the claim of "microsegmentation in weeks, not years" is realistic, his answer was direct: "Yes, because of the new techniques." The combination of identity-based policy, agentless enforcement, and the ability to use existing network infrastructure eliminates the deployment barriers that defined first-generation projects.

Why Zero Trust requires microsegmentation

"If you want to be zero trust, you have to do microsegmentation," Holmes told us. "You can't get around it."

His reasoning comes down to architecture. Most organizations have invested heavily in Zero Trust Network Access (ZTNA) to control how users connect to applications. ZTNA handles what security architects call north-south traffic: the flow between external users and internal resources. It verifies identity, checks device posture, and enforces least-privilege access at the point of connection.

But ZTNA was not designed to address east-west traffic: the lateral communication between devices, workloads, and systems inside the network. Once an attacker gains a foothold through a compromised endpoint, a vulnerable IoT device, or a stolen credential, ZTNA cannot prevent them from moving laterally to higher-value targets. This is where microsegmentation becomes essential. It creates granular boundaries between assets based on identity and policy, containing threats even after initial access is achieved.

Holmes put it plainly: "Zero Trust Network Access really only works best as a protection mechanism if it's combined with microsegmentation."

The regulatory and standards landscape supports this architectural argument. CISA's July 2025 microsegmentation guidance reinforced this connection, stating that microsegmentation is "foundational" to Zero Trust and that static VLANs are no longer sufficient. NIST SP 800-207 includes microsegmentation as a core architectural pattern for Zero Trust implementations, and the CISA Zero Trust Maturity Model requires progressively granular segmentation at higher maturity levels.

Government agencies are leading on this front. Holmes noted that federal agencies are "actually proceeding down the road of doing zero trust" in response to executive mandates and compliance requirements. For enterprises still treating microsegmentation as optional, the regulatory direction is clear.

Take another look: what to evaluate now

The last research report Holmes published before leaving Forrester carried a specific message to the market: "I told people, take another look at microsegmentation. There's new techniques now, new ways to do it that might be more tailored toward your specific environment."

This was not a vague endorsement. Holmes had evaluated every major vendor, reviewed customer outcomes, and published his findings in the Wave. He watched the technology evolve from the complexity-laden first generation to a new class of solutions built on identity, context, and existing infrastructure. His recommendation to revisit microsegmentation was grounded in that evidence.

For organizations ready to take that second look, Holmes's evaluation criteria suggest several questions worth asking:

  • Does the solution require agents? If it does, it will not cover IoT, OT, medical devices, or legacy systems. Agentless approaches that enforce policy at the network edge can protect every device type without software installation.
  • How are policies defined? Identity-based policies (anchored to what a device is, not where it sits on the network) are simpler to write, easier to maintain, and more resilient to network changes than IP-based rules.
  • Can it use your existing infrastructure? Solutions that leverage your current switches and access points avoid the cost and disruption of hardware replacement. This was a key differentiator in the Forrester evaluation.
  • What is the realistic deployment timeline? Ask for customer references with timelines measured in weeks. If the vendor cannot provide them, that tells you something.
  • Does it address both IT and OT/IoT? A solution that only covers managed IT endpoints leaves the fastest-growing portion of your attack surface unprotected. Convergence across IT, OT, and IoT on a single platform is critical.
  • How does it integrate with your existing security stack? Microsegmentation should enrich and enforce policy using intelligence from your asset discovery, endpoint protection, and identity platforms, not operate in isolation.

The Forrester Wave: Microsegmentation Solutions, Q3 2024 noted that the technology has matured to the point where "these once-failure-prone projects may actually work this time." The market has grown accordingly: analysts project the global microsegmentation market to expand from $8.2 billion in 2025 to over $41 billion by 2034, according to Exactitude Consultancy.

Frequently Asked Questions

How does microsegmentation support a zero trust network?

Microsegmentation enforces the principle of least-privilege access at the network level by creating granular security boundaries between individual assets, workloads, and device groups. In a zero trust architecture, no device or user is implicitly trusted, and microsegmentation provides the enforcement mechanism that contains threats within tightly defined segments. While ZTNA controls user-to-application access (north-south traffic), microsegmentation governs device-to-device and workload-to-workload communication (east-west traffic). Together, they deliver the layered protection that zero trust requires. Both NIST SP 800-207 and the CISA Zero Trust Maturity Model include microsegmentation as a foundational component of zero trust architecture.

Why do microsegmentation implementations fail?

Most failed microsegmentation projects share a common root cause: complexity. First-generation approaches required organizations to map every data flow across the network, install agents on every endpoint, and manually write thousands of IP-based rules before a single policy could be enforced. These projects stretched into multi-year timelines, consumed significant operational resources, and often stalled before reaching production. Most organizations still rely on VLANs and ACLs for segmentation, both of which lack the granularity and adaptability needed for microsegmentation at scale. Modern identity-based approaches address these failure modes by anchoring policies to device identity rather than network constructs, enabling incremental deployment that delivers security value from day one.

Can you implement microsegmentation without agents?

Yes. Agentless microsegmentation has become one of the most significant advances in the market over the past several years. These solutions enforce policy at the network edge, using existing switches and access points as enforcement points rather than requiring software agents on every endpoint. This is particularly important for IoT, OT, and medical devices that cannot run agents due to operating system limitations, regulatory constraints, or patient safety concerns. Very few organizations have successfully deployed agent-based microsegmentation, underscoring the practical barriers of that approach. Agentless, identity-based solutions can protect every device type on the network, including legacy systems, headless devices, and unmanaged assets.

What is the difference between ZTNA and microsegmentation?

ZTNA (Zero Trust Network Access) and microsegmentation address different dimensions of network security. ZTNA controls how users and devices connect to applications, typically managing north-south traffic (external users accessing internal resources). It verifies identity and device posture before granting access. Microsegmentation controls east-west traffic (lateral communication between devices, workloads, and systems inside the network). It creates granular security boundaries that prevent an attacker who has gained initial access from moving laterally to other systems. As David Holmes noted in our conversation, "ZTNA really only works best as a protection mechanism if it's combined with microsegmentation." The two technologies are complementary, not interchangeable: ZTNA secures the perimeter of access, and microsegmentation secures the interior of the network.
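The north-south/east-west split can be made concrete with a small sketch. This is an illustrative classifier only — the `10.0.0.0/8` internal range is an assumption for the example, and real deployments determine direction from richer context than source and destination addresses — but it shows why the two controls are complementary rather than interchangeable.

```python
import ipaddress

# Hypothetical internal address space, assumed for illustration.
INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def traffic_direction(src_ip: str, dst_ip: str) -> str:
    """Classify a flow: ZTNA governs north-south, microsegmentation east-west."""
    src_inside = ipaddress.ip_address(src_ip) in INTERNAL
    dst_inside = ipaddress.ip_address(dst_ip) in INTERNAL
    if src_inside and dst_inside:
        return "east-west (microsegmentation policy applies)"
    return "north-south (ZTNA policy applies)"

print(traffic_direction("203.0.113.5", "10.1.2.3"))  # remote user reaching an internal app
print(traffic_direction("10.1.2.3", "10.4.5.6"))     # device-to-device lateral traffic
```

A compromised device generating the second kind of flow never crosses the perimeter ZTNA guards, which is exactly the gap microsegmentation closes.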

The technology caught up to the promise

David Holmes watched microsegmentation fail across the enterprise landscape during his years at Forrester. He evaluated every major vendor, spoke with the customers who struggled through those deployments, and published his findings in the industry's definitive assessment. Then, in his final report before leaving Forrester, he told security leaders to take another look.

That recommendation was not optimism. It was based on a measurable shift in how microsegmentation works: identity-based policies, agentless enforcement, deployment on existing infrastructure, and timelines measured in weeks. The approach that failed most organizations three to seven years ago is not the approach available today.

For security leaders who shelved microsegmentation after a failed or stalled project, the math is different now. Zero Trust guidance from NIST and CISA now treats it as foundational. Lateral movement remains the primary mechanism attackers use to escalate from initial access to full compromise. And the technology, finally, has caught up to the promise. The question is no longer whether microsegmentation belongs in your security architecture. It is whether you can afford to wait any longer to get it right.

To see how identity-based microsegmentation works in practice, explore Elisity's approach or request a personalized demo.


About the Author
William Toll is Head of Product Marketing at Elisity, where he focuses on identity-based microsegmentation, Zero Trust architecture, and securing converged IT/OT/IoT environments. William writes about cybersecurity strategy, market trends, and practical guidance for security leaders navigating the shift to modern network security. Connect with him on LinkedIn.
