Andy Ellis on How to Prevent Lateral Movement in the Age of AI Agents
by William Toll on Apr 30, 2026 8:00:01 AM
According to the Verizon 2025 Data Breach Investigations Report, credential abuse is now the single most common initial access vector, used in 22 percent of breaches, and 88 percent of basic web application attacks involve stolen credentials. The IBM Cost of a Data Breach Report 2025 found that breaches involving compromised credentials took an average of 246 days to identify and contain. The shared signal across both reports is the same: once an adversary is inside, the question is no longer whether they will move laterally, but how far. At RSAC 2026, we sat down with someone who spent a decade building a Zero Trust architecture to answer that question from the inside: Andy Ellis, Chief Security Officer at Akamai for more than 20 years, principal at Duha, and author of How to CISO.
What Andy shared in our conversation was direct, architecture-grounded, and focused on what most Zero Trust programs still get wrong. The conversation kept returning to a single organizing idea: how to prevent lateral movement is fundamentally an architectural question, not a tooling question, and the architecture has to serve that objective, one incremental step at a time, including for the AI agents now arriving on every endpoint.
Who is Andy Ellis, and why his lateral movement perspective matters
Andy Ellis spent more than 20 years as Chief Security Officer at Akamai, where he was the founding architect of the company's TLS acceleration network and built Akamai's information security program from the ground up. He is the principal of Duha, a boutique consultancy that advises CISOs, vendors, and venture investors on leadership, security strategy, and product messaging, and the author of How to CISO, an ongoing reference library of volumes, handbooks, and talks for current and aspiring CISOs. Andy serves on the boards of several security companies, including Orca Security, Cyside, Grip Security, and Hunters, is an Operating Partner at YL Ventures, and is an affiliate at the Berkman Klein Center for Internet and Society at Harvard University. Before his industry career, he served in the United States Air Force as Chief of Command and Control Test Management and as an Information Warfare Engineer.
In short, Andy has built Zero Trust architecture from the inside at internet scale, advised dozens of security companies on how to get Zero Trust right, and written the book on what the CISO role actually requires. His perspective bridges the architecture decisions that shape modern internet security and the governance decisions a CISO has to make on a Tuesday morning. When we asked him how to prevent lateral movement at the architectural level, the answer that followed was direct and practical.
Watch the full conversation
- Credential abuse is now the most common initial access vector, used in 22 percent of breaches, and 88 percent of basic web application attacks involve stolen credentials (Verizon 2025 Data Breach Investigations Report).
- The average breach involving compromised credentials took 246 days to identify and contain in 2025, against an overall mean lifecycle of 241 days (IBM Cost of a Data Breach Report 2025).
- Restricting lateral movement and east-west traffic is named as a recommended mitigation in both the Verizon DBIR 2025 and the CISA Zero Trust Maturity Model v2.0, which treats network microsegmentation as a core capability of the Networks pillar.
- Zero Trust architecture, as defined in NIST SP 800-207, treats every access request as untrusted by default, regardless of network location, and requires continuous verification across all identities, including non-human and machine identities.
How to prevent lateral movement: an architecture problem, not a breach-assumption problem
The opening question in our conversation was simple: how are organizations struggling to prevent lateral movement? Andy was direct, and the direction was architectural. The "assume breach" principle that anchors most Zero Trust messaging is correct, he argued, but it gets misread as permission to plan only for detection and recovery, when the architecture also has to plan for stopping movement.
As Andy put it: "Preventing lateral movement is an objective. And if you don't implement the right architectures that drive that objective, you're not going to succeed." He pushed past the assume-breach reflex directly: "The reality of lateral movement is you have to plan on stopping breaches. Not every single one, but you have to say, look, if somebody compromises the machine, I don't want them to be able to move laterally."
The architectural reality underneath Andy's point is concrete. Most lateral movement does not exploit a novel vulnerability. It rides on administrative agents the security team itself deployed, service accounts with too many privileges, and east-west paths that exist because the network was built for connectivity first and segmentation later. Andy made the implication explicit: the safest machine in most enterprises, in his words, would be one walked down to the Apple Store, locked down, and given to a user with no admin, because nobody would be worried about being compromised by their own tools. Our strategic guide to understanding and preventing lateral movement walks through the same architectural framing in detail.
Andy then extended the argument to the identity that is changing fastest. "They're going to make everything worse, because our whole model has been moving to single-user systems," he told us. "We all started with multi-user systems, and those were a disaster to try to configure. It became very easy when we locked it down and said there's one user on a machine, and very few places deviate from that. Except now you have one human user on a machine and you might have a thousand agents on that same machine. How do you separate between who's doing what?"
The operational implication for AI agent security is direct. The shift from one human identity per machine to one human plus many agents per machine is the identity separation problem of the next five years, and the architecture has to handle it natively. Treating an AI agent as a distinct identity class from a service account or a human user is the design decision that determines whether lateral movement can be contained when the agent is compromised.
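The identity-class distinction above can be made concrete in a short sketch. This is a minimal, hypothetical illustration, not any vendor's API: the principal types, resource names, and policy table are all invented for the example. The point it demonstrates is that an AI agent sharing a machine with a human user should not inherit that user's reach.

```python
from dataclasses import dataclass
from enum import Enum

class PrincipalType(Enum):
    HUMAN = "human"
    SERVICE_ACCOUNT = "service_account"
    AI_AGENT = "ai_agent"

@dataclass(frozen=True)
class Principal:
    id: str
    type: PrincipalType
    device: str  # endpoint the principal operates from

# Per-identity-class reachability policy (hypothetical resources):
# an agent on a laptop does not inherit the human user's access
# just because it runs on the same machine.
ALLOWED = {
    PrincipalType.HUMAN: {"sharepoint", "email", "crm"},
    PrincipalType.SERVICE_ACCOUNT: {"build-artifacts"},
    PrincipalType.AI_AGENT: {"docs-index"},  # narrowly scoped by design
}

def can_reach(principal: Principal, resource: str) -> bool:
    """Policy decision keyed on identity class, not network location."""
    return resource in ALLOWED[principal.type]

alice = Principal("alice@example.com", PrincipalType.HUMAN, "laptop-042")
agent = Principal("summarizer-01", PrincipalType.AI_AGENT, "laptop-042")

assert can_reach(alice, "crm")
assert not can_reach(agent, "crm")  # same machine, different identity class
```

A real policy fabric would carry far richer context per principal, but the design decision is the same one Andy describes: the identity class is a first-class key in the policy, so a compromised agent is contained by its own scope rather than by the human user's.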
Why zero trust messaging fails, and the reframe that makes it ship
When we asked Andy what is blocking progress in zero trust and operationalizing it at scale, his answer was not technical. It was about messaging. The pitch that the industry has been making to internal audiences, he argued, is anti-organizational by construction, and that is why it stalls inside enterprises that genuinely want to move.
As Andy put it:
"The people pushing zero trust are pushing a message that fundamentally says, 'What I'm trying to do is protect my network from you, the end user, or from you, the developer.' It comes across as a very anti-organizational message. 'I don't trust you.' If instead we walked in and said, 'I don't trust my administrative machines. I don't trust anything I've put on the network, and I don't want you to trust them. Your computer should not trust anything else, and vice versa,' that's not the message they're sending."
The reframe is not cosmetic. It changes who the policy is understood to be protecting, and from whom, and that change determines whether the developer, the clinical engineer, or the manufacturing operator becomes a partner or an obstacle. NIST SP 800-207 describes Zero Trust as a posture in which trust is never granted implicitly to any asset, network location, or identity. That language is neutral on intent, but in execution it has been carried into organizations as a distrust of users, when the more accurate read of the architecture is a distrust of the tools, agents, and connectivity the security team itself put in place.
Andy's second answer to what blocks progress was scope. Most organizations, in his view, think too big and skip the incremental wins that make a multi-year program credible inside the business. He used Akamai's own Zero Trust journey as the example.
As Andy described it:
"When I did zero trust, we were one of the first companies to do it. It took us ten years. But it was not ten years where we saw all the benefit in year ten. We were seeing steps along the way, some of which we got rid of. We implemented 802.1X on every wired network drop in the company. That was foolish, but it gave us the infrastructure to deploy X.509 certs onto every machine, which we then later used for our identity proofing on the network."
The architectural lesson, in Andy's framing, is that 802.1X was not the answer; it was the scaffolding for the answer. The X.509 certificates the program deployed against the 802.1X infrastructure became the basis for identity proofing later. For Zero Trust leads who are still being asked whether to fight the multi-year battle to install or replace 802.1X, this is a useful piece of context. Many enterprises have already paid that bill once and gotten little for it, and the most pragmatic modern alternatives skip the 802.1X dependency entirely. (For teams sizing the difference between a legacy NAC project and a faster identity-based path, our legacy NAC alternative comparison walks through the trade-offs.) The point is the incremental discipline. Pick the next architectural step that gives the program a real win, even if the win is just the foundation for the win after that, and keep moving.
Enclaving the unpatchable: the VPN-and-proxy pattern for legacy devices
The piece of Andy's argument that lands hardest in industrial and healthcare environments is his framing for legacy devices that cannot be updated. We described what most CISOs in these verticals are already living with: 20-year-old Windows manufacturing PCs that the supplier will not let you update, and medical devices that cannot be patched without a full FDA review cycle. These devices are on the network, and removing them is not on the table. Andy's answer was an architectural pattern, not a tooling choice.
Andy described the model this way: "I've always been a huge fan of a VPN-style model where we say, look, if we have devices that cannot trust the network, then why do you actually have it on the network? It's okay that it's internet-capable, but there's no reason it's publicly addressable, even to other assets inside your environment."
The enclaving pattern, he continued, combines two controls. "Mentally we say VPN, but what we're really saying is we want to put some box around it that's both doing network segmentation and probably some form of proxy as well. Because what makes that legacy Windows box a danger is not it intrinsically. It's the fact that somebody can talk to it who shouldn't."
That is the operational model many healthcare and manufacturing organizations are now building toward without using the word VPN. The control plane is identity-based microsegmentation that scopes what each enclaved device can reach and which identities can reach it, paired with a proxy or policy enforcement point that mediates the actual traffic. The architectural value, as Andy framed it, is that the danger he described, the fact that someone who should not be able to talk to the device can, gets removed without re-platforming the device. Identity-based microsegmentation is the modern enforcement model that makes this enclaving practical at scale across unpatchable Windows manufacturing PCs and FDA-locked medical devices.
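The two controls in the enclave pattern, segmentation plus a mediating proxy, can be sketched in a few lines. Everything here is illustrative: the device names, identities, and routing function are assumptions made up for the example, not a real enforcement point's interface.

```python
# Sketch of the two controls in the enclaving pattern:
# (1) segmentation: only named identities may reach the device at all;
# (2) proxy: even allowed identities talk through a mediating hop,
#     never directly to the legacy device.

ENCLAVES = {
    "mfg-pc-07": {                      # hypothetical unpatchable manufacturing PC
        "allowed_identities": {"historian-svc", "ot-engineer@example.com"},
        "proxy": "enclave-proxy-01",    # policy enforcement point
    },
}

def route(request_identity: str, destination: str):
    """Return the next hop for a request, or None if the request is denied."""
    enclave = ENCLAVES.get(destination)
    if enclave is None:
        return destination                   # not enclaved: normal routing
    if request_identity not in enclave["allowed_identities"]:
        return None                          # segmentation: drop the east-west path
    return enclave["proxy"]                  # allowed, but always via the proxy

assert route("historian-svc", "mfg-pc-07") == "enclave-proxy-01"
assert route("random-endpoint", "mfg-pc-07") is None
```

The sketch captures the property Andy names: the legacy box is no longer dangerous because nothing that should not talk to it can, and even the identities that can talk to it do so through a mediated hop rather than a raw east-west path.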
AI, policy simulation, and the practice-before-policy rule
The final stretch of our conversation moved from architecture to operations. AI is the operational dimension. Andy was at a panel earlier in the day where every speaker had described AI as a speed advantage. He pushed back on that framing.
"It's not actually about speed," Andy told us. "It's about the ability to manage complexity. We've all known that developer in our organization we call the 10x developer. Fundamentally, what that means is that person holds all of the context in their head for the world they have to operate in. ... In the same way, in network management, if you don't know everything happening on your network, you can't safely make a change. But an agent can actually understand everything on the network and tell you the harm of any possible change."
The translation for security architects is direct. AI in network policy management is not a faster click-path. It is an attempt to give a single operator the same comprehensive context a "10x" engineer would carry mentally across the entire environment, so that the impact of every proposed segmentation change is understood before the change is enforced. That capability is what makes policy simulation across longer windows operationally feasible, and Andy made the case for the longer window directly.
As he put it: "Especially months, because one of the biggest worries people have is, if I look at my network traffic for two weeks, did I capture the monthly backups? Did I capture the quarterly software update, or the push of our financials report out to the investors? There are so many things that don't happen on this fast timeline. It's reasonable to be worried that somebody who says, I'm just going to sample network traffic for two weeks and make my decisions based on that, that's scary."
That observation lines up with the reality of every Zero Trust program at scale. Two-week traffic samples miss the monthly close, the quarterly investor push, the annual disaster recovery test, and the once-a-year clinical or manufacturing event that, if blocked by a freshly enforced policy, becomes a board-visible incident. Policy simulation across months gives the program room to find those patterns before they become exceptions to a policy that has already shipped. Which led Andy to the closing framing of the interview:
"Policy is the last thing you do. Policy is when you encode your practices. The first thing you should do is not say, 'Well, my policy is going to be these devices don't talk to these devices.' You say, 'What is my practice?' Oh, these devices don't talk to each other. Great. Let's make that a policy so we don't have to worry about it. But if you say, 'Oh, it's my policy,' and then there's an exception and now I can't do it, that's where you run into trouble."
For CISOs, security architects, and Zero Trust leads, the sequencing rule is concrete. Discover the practice first. Simulate the policy that would encode the practice across a long enough window to capture the rare-but-real flows. Only then enforce. The order matters because the alternative, a policy declared in advance and then exception-managed in production, is the failure mode every legacy segmentation project ran into. That sequencing — discover, simulate, enforce — is how to prevent lateral movement at scale: not as a single tool decision, but as a sequenced architectural discipline.
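The discover-simulate-enforce sequence can be sketched as a dry run of a candidate policy against a long window of observed flows. The flows, allow-list, and function below are hypothetical, invented to illustrate the step, but the mechanic is the one described above: the rare monthly and quarterly flows surface as simulated blocks before enforcement, not as production incidents.

```python
from collections import Counter

# Observed (source, destination) flows over a months-long window.
# The rare flows are exactly what a two-week sample would miss.
observed_flows = [
    ("app-01", "db-01"), ("app-01", "db-01"),
    ("backup-svc", "db-01"),         # monthly backup
    ("finance-01", "investor-ftp"),  # quarterly financials push
]

# Candidate policy: an explicit allow-list encoding today's practice.
candidate_allows = {("app-01", "db-01"), ("backup-svc", "db-01")}

def simulate(flows, allows):
    """Count the observed flows the candidate policy would have blocked."""
    return Counter(f for f in flows if f not in allows)

blocked = simulate(observed_flows, candidate_allows)

# The quarterly push shows up in simulation, so the practice can be
# encoded into the policy before anything is enforced.
assert ("finance-01", "investor-ftp") in blocked
assert ("backup-svc", "db-01") not in blocked
```

In a real program the flow records come from the network itself and the policy model is far richer, but the ordering is the point: the simulation runs against discovered practice first, and enforcement is the last step, exactly as Andy frames it.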
Frequently Asked Questions About Lateral Movement Prevention
How do you prevent lateral movement after a breach?
Preventing lateral movement after a breach is an architectural objective, not a single tool. It requires limiting the east-west paths an adversary can use, removing or scoping the administrative agents and service accounts that most lateral movement actually rides on, and enforcing identity-based policy at the network layer so that a compromised endpoint cannot reach resources its identity does not need. The CISA Zero Trust Maturity Model v2.0 treats network microsegmentation as a core capability of the Networks pillar, and the Verizon 2025 DBIR identifies segmenting networks to restrict lateral movement post-compromise as a recommended mitigation. The architectural goal Andy described in our conversation is to ensure that, even if an endpoint is compromised, the adversary cannot reach the next asset.
What is a practical zero trust implementation roadmap for a global enterprise?
A practical zero trust implementation is a multi-year program of incremental wins, not a single project. Andy's account of Akamai's own ten-year journey is instructive: the organization deployed 802.1X on every wired drop, used that infrastructure to issue X.509 certificates to every machine, then used those certificates as the basis for identity proofing on the network. The lesson is to pick the next architectural step that delivers a real win and that creates the foundation for the step after it. NIST SP 800-207 provides the canonical reference architecture, and the CISA Zero Trust Maturity Model v2.0 provides the roadmap structure across five pillars: identity, devices, networks, applications and workloads, and data.
How can AI detect lateral movement across segmented networks?
AI in network security is most useful when it is being asked to manage complexity, not to add speed. As Andy framed it, the value of an agent is that it can hold the context of everything happening on the network simultaneously, in the way a 10x engineer would hold the context of an entire codebase. Applied to lateral movement detection, that means an AI-assisted control plane can reason about east-west traffic patterns over months rather than two-week samples, identify the rare flows that monthly close, quarterly reporting, and annual disaster recovery tests produce, and simulate the impact of a proposed segmentation policy before it is enforced. That is the basis for non-disruptive policy deployment at scale.
How do you protect legacy systems without re-platforming?
Andy's answer is the VPN-and-proxy enclaving pattern. If a device cannot trust the network, the architectural question is why it is reachable on the network in the first place. The enclaving model combines network segmentation, which scopes which identities can reach the device, with a proxy or policy enforcement point that mediates the traffic. The danger of the legacy device is not the device itself, in Andy's framing, but the fact that someone who should not be able to talk to it can. Identity-based microsegmentation enforces the same enclaving pattern at scale across unpatchable Windows manufacturing PCs and FDA-locked medical devices, applying least-privilege policy to each device based on its identity rather than its network location.
Zero trust is a ten-year discipline, not a ten-quarter project
The through-line across Andy's conversation was that lateral movement prevention is an architectural objective, and the architecture has to be built deliberately, in incremental steps, on top of an honest read of where the trust currently sits. The trust to remove is the implicit trust the network grants to its own administrative agents, service accounts, and east-west paths, not a fresh distrust of the people doing the work. The identities to separate are no longer just human and service account; they are human, service account, and the thousand AI agents that may be operating on the same machine as the human user by next quarter. The devices to enclave are the legacy and FDA-locked systems that cannot be updated, scoped by what their identity needs to reach rather than where they live on the network.
For security leaders, the operational takeaways are concrete. Treat lateral movement prevention as an architectural objective, not a side benefit of a Zero Trust deck. Reframe the messaging from distrust of users to distrust of tools. Pick the next incremental win that creates the foundation for the win after it. Enclave the devices you cannot update. Separate human, service account, and AI agent identity as a design principle, not a downstream configuration problem. Simulate policy across months, not weeks, before you enforce it.
For leaders designing the identity layer Andy described, explore how identity-based microsegmentation enforces human and non-human identity as one policy fabric. As Andy put it, policy is the last thing you do. Practice comes first. Identity is where that practice lives. And that, in his framing, is how to prevent lateral movement once an endpoint is already compromised.
Further reading
- Why Zero Trust requires microsegmentation: a former Forrester analyst weighs in
- Understanding and preventing lateral movement: a strategic guide for enterprise security leaders
- HIMSS medical device security survey: 2026 key findings
- Medical device security 2026: from vulnerability to exploitability with Claroty and MultiCare
- The golden age of microsegmentation in healthcare
- Top healthcare cybersecurity vendors for 2026
- Healthcare IT and microsegmentation: a Main Line Health case study
- AI agent network security and microsegmentation in 2026
For more architectural depth on how to prevent lateral movement at enterprise scale, including the identity-based microsegmentation patterns Andy described for legacy and AI agent environments, explore the Elisity cybersecurity blog.
William Toll is VP of Product Marketing at Elisity, where he focuses on identity-based microsegmentation, Zero Trust architecture, and securing converged IT, OT, and IoT environments. William writes about cybersecurity strategy, market trends, and practical guidance for security leaders driving the shift to modern network security. Connect with him on LinkedIn.