
Elisity Virtual Edge VM Deployment Guide (Hypervisor Hosted)

Elisity Virtual Edge VM (Hypervisor Hosted) is a Docker container-based implementation of the Elisity Cognitive Trust software running as a VM on your hypervisor of choice.

 

As of today, you can onboard all Cisco Catalyst 3850/3650 and Catalyst 9000 series switches as Virtual Edge Nodes for policy enforcement using Elisity Virtual Edge VM. Cisco StackWise® switch stacking technology is also supported. Additional switch models will be supported in future releases. Please see the switch compatibility matrix for more details.

 

TIP:

Elisity Virtual Edge is based on a Docker container architecture, which means you can deploy it on virtually any host that supports Docker containers. For example, you could deploy it on your own private cloud Docker infrastructure!

The following example uses an Elisity-provided, pre-packaged Ubuntu Linux OS that hosts the Docker container.

NOTE:

  • Catalyst series switches require a minimum of IP Base licensing to be onboarded as Virtual Edge Nodes. 
  • The Elisity Virtual Edge VM has been developed to work with Catalyst 3850/3650 series switches running IOS-XE version 16.12.5b and Catalyst 9000 series switches running IOS-XE version 17.6.x. While it may work with earlier versions of IOS-XE, we cannot guarantee that it will operate correctly.
  • All switches being onboarded must have their clocks synchronized with the Active Directory server so that attachment events are displayed accurately. You can use your own NTP server or a public one such as time.google.com. 
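The clock synchronization requirement can be met with standard IOS-XE NTP commands; a minimal sketch, assuming the public time.google.com server mentioned above (substitute your own NTP server or Active Directory-aligned time source):

```
configure terminal
 ntp server time.google.com
end

show ntp status
```

The show ntp status output should report the clock as synchronized before you onboard the switch.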

 

The following chart describes the terminology used in this document:

Cloud Control Center

Elisity's cloud native and cloud delivered control, policy and management plane.

Virtual Edge VM

The Elisity Cognitive Trust software running as a Docker container inside a VM on a hypervisor such as VMware ESXi.

Virtual Edge Node

An access switch onboarded to a Virtual Edge to be leveraged as an enforcement point in the network.

Deploying Elisity Virtual Edge VM (Hypervisor Hosted)

The Elisity Virtual Edge VM container has a single virtual interface used to communicate with both Cloud Control Center and the Virtual Edge Nodes. This interface maintains a persistent control plane connection to Cloud Control Center, through which the Virtual Edge VM receives identity-based policies and sends identity metadata and analytics. The same interface is used to glean identity metadata, traffic analytics, and other switch information from the Virtual Edge Nodes, as well as to read the Catalyst configuration and configure security policies, traffic filters, and other switch functions.

Elisity Virtual Edge VM allows you to onboard any type of switch on the compatibility matrix as Virtual Edge Nodes for policy enforcement. The Virtual Edge VM model is depicted below:

(Click to enlarge)

NOTE:

The minimum requirements to run Virtual Edge VM on a hypervisor are:

  • 2 vCPU @ 2 GHz
  • 8 GB RAM
  • 32 GB Storage
  • 1 x Virtual Network Adapter (underlying host vnic should support 10 Gbps)
  • Less than 100ms RTT to Virtual Edge Nodes


Step 1:
To deploy Elisity Virtual Edge VM on a hypervisor, you will need to acquire the Virtual Edge VM OVA file from your Elisity SE. In this example we will be using VMware ESXi. Once you have the OVA, log into your ESXi instance and select Create / Register VM.

(Click to enlarge)


Step 2: Select Deploy a Virtual Machine from an OVF or OVA file and then select Next.

(Click to enlarge)

Step 3: Enter a name for the virtual machine, upload the OVA, and select Next.

(Click to enlarge)


Step 4: Select the VM Datastore you wish to use as persistent storage for the VM and select Next.

(Click to enlarge)

 

Step 5: Select the Uplink Port Group that provides the correct access for the Virtual Edge VM to reach the internet as well as the access switches to be onboarded as Virtual Edge Nodes for policy enforcement. Select the Disk Provisioning option of your choice and ensure Power on automatically is enabled. 
 

(Click to enlarge)


Step 6:
If everything looks good, select Finish and wait for the OVA deployment to complete.
 

(Click to enlarge)



Make sure to enable Autostart so that the Virtual Edge VM starts up automatically after ESXi boots up.

(Click to enlarge)

Step 7: Once the deployment is complete, log into the Virtual Edge VM operating system to configure its IP address; the remaining software will be deployed in later steps. Select Console and then select Open Console in new window.

(Click to enlarge)

Step 8: Log into the Virtual Edge VM operating system using the credentials provided to you by your Elisity SE. 

(Click to enlarge)

 

Step 9: By default, DHCP is enabled. If static settings are required, run the following command to configure a static IP, default gateway, and DNS servers, replacing the example IPs with your own:

sudo docker-edgectl static ens192 10.60.1.11/24 10.60.1.1 "8.8.8.8,4.2.2.2"

 

NOTE:

A second IP address in the same subnet will be required when enabling the container within the operating system.  

 

Step 10: Verify that the new configuration was applied by running the following command:

 

ifconfig ens192

<OUTPUT>

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.60.1.11  netmask 255.255.255.0  broadcast 10.60.1.255
        inet6 fe80::20c:29ff:fe5e:ff31  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5e:ff:31  txqueuelen 1000  (Ethernet)
        RX packets 5681  bytes 916010 (916.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4600  bytes 1655465 (1.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

Test to make sure you can ping both the default gateway and the internet. 


(Click to enlarge)

 


Step 11:
Log into Cloud Control Center and navigate to Policy Fabric > Elisity Edge > Add Edge.

 (Click to enlarge)

 

Step 12: Select the Virtual Edge tile.

 (Click to enlarge)

 

Step 13: Fill out the required fields and select Submit & Generate Configuration. Details about each field are provided in the chart below. These details can always be viewed and edited by selecting the more options icon to the right and selecting Edit/Download Virtual Edge Configuration. 

 (Click to enlarge)


The following chart provides details about each required field:

Uplink IP Address

This is the IP assigned to the Virtual Edge VM container. It must be routable and able to reach Cloud Control Center, and it must also be able to reach the management interface of any Virtual Edge Node you plan to onboard. The network for this IP can be configured locally on the application hosting switch or on an upstream aggregation switch, and it can be a new or an existing network. This is NOT the same IP configured on the Virtual Edge VM operating system in a previous step; however, it must be in the same network. This field is mandatory. 

Uplink Gateway IP

This is the default gateway IP for the network described above. This field is mandatory.

Uplink VLAN

This field is not used for Virtual Edge VM deployments; however, it is still mandatory. Use any VLAN you wish. 

Host Name

This is the host name assigned to the Virtual Edge VM container. This field is mandatory.

Domain Name Server (DNS)

This is the DNS server IP to be used by the Virtual Edge VM container. This can be either a public or private DNS server. To specify more than one DNS server, separate them with commas. This field is mandatory. 

Virtual Edge Location Address

The location of the Virtual Edge VM container so that Cloud Control Center reflects the location of the installed container. This field is optional. 

 

Step 14: After clicking Submit & Generate Configuration, two files will be automatically downloaded to your workstation. 

  • VE_xxxxxxxxxxxxxxxx.txt

This text file contains the information needed to bring up Virtual Edge when it is hosted by a switch using application hosting functionality. It is not relevant to the hypervisor-hosted Virtual Edge VM model. More details on this file are provided in the Elisity Virtual Edge (switch hosted) deployment guide.

  • VE_DOCKER_xxxxxxxxxxxxxxxx.yml

The YAML file is the one we need to focus on. It contains all of the details the Virtual Edge VM needs to deploy the container on the operating system. Each Virtual Edge VM receives a unique identifier, which is embedded in the file name. Below is an example of the content of a YAML file generated by CCC. 

version: '2'
services:
  ve:
    networks:
      vlan1:
        ipv4_address: 10.60.1.12
    cap_add:
      - ALL
    environment:
      - EDGE_TYPE=VE
      - EE_CFG_JSON={"ve_dns_server":["4.2.2.2","8.8.8.8"],"ve_reg_key":"8dc081a5010d967a","ve_cloud_manage_url":"latest-tls.elisity.net","ve_uplink_ip":"10.60.1.12"}
    entrypoint: /etc/init.d/edge
    # Change the image tag version appropriately instead of 14.2.0
    image: elisity/docker_edge-build:14.2.0
    restart: always
    hostname: VE
    container_name: VE
    stdin_open: true
    tty: true
    privileged: true

networks:
  vlan1:
    driver: ipvlan
    driver_opts:
      parent: ens192
    ipam:
      config:
        - subnet: 10.60.1.0/24
          gateway: 10.60.1.1


Step 15:
Edit line 14, which reads image: elisity/docker_edge-build:14.2.0, to reflect the OVA release you are deploying. For example, if you are deploying a release named DOCKER_EDGE_ESXI-0.27-v14.2.16.ova, change the string to image: elisity/docker_edge-build:14.2.16.
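If you prefer to make this change from the command line, the tag update can be scripted with sed. A minimal sketch (the file name below is a stand-in created for the example, and the version numbers are illustrative; use the VE_DOCKER_*.yml file generated by Cloud Control Center and your actual OVA release):

```shell
# Stand-in YAML line so the example is self-contained; in practice you
# would operate on the VE_DOCKER_*.yml file downloaded from CCC.
printf 'image: elisity/docker_edge-build:14.2.0\n' > VE_DOCKER_example.yml

# Bump the image tag to match the OVA release being deployed (14.2.16 here):
sed -i 's|docker_edge-build:14\.2\.0|docker_edge-build:14.2.16|' VE_DOCKER_example.yml

grep 'image:' VE_DOCKER_example.yml   # → image: elisity/docker_edge-build:14.2.16
```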

NOTE:

Line 26, parent: ens192, does not usually need to be changed. However, if the interface ID on your Virtual Edge VM operating system is different, adjust it to the correct name. You can verify the interface name by running the ifconfig -a command in the terminal. 



Step 16:
Transfer the YAML file to the /home/elisity directory on the Virtual Edge VM operating system, then run the following command from that directory to deploy the container. Make sure to use the file name generated by Cloud Control Center, not the example one below. When prompted for a password, use the same password you used to log into the Virtual Edge VM operating system. 

 

sudo upgrade-edge create VE_DOCKER_xxxxxxxxxxxxxxxx.yml


After a few seconds, the container will be created and the following output will be displayed:

 

Creating VE ... done
VE successfully created !


Run the following command to make sure the container is running properly:

docker ps


An output similar to the one below should be displayed:

 

 (Click to enlarge)

 

Step 17: Check Cloud Control Center to ensure that the Virtual Edge VM registered successfully. If the Virtual Edge VM status never changes to green, there is an IP connectivity issue between the Virtual Edge VM container and Cloud Control Center. 

 (Click to enlarge)

 

Onboarding a Virtual Edge Node

Step 1: Make sure the access switches you wish to onboard with the newly deployed Virtual Edge VM have the following commands configured:

On Catalyst 3850/3650:
=================
ip routing
ip http secure-server
restconf
netconf-yang cisco-ia auto-sync disabled
no netconf-yang cisco-ia intelligent-sync
 
On Catalyst 9000:
=================
ip routing
ip http secure-server
restconf
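After applying these commands, you can spot-check that they took effect by filtering the running configuration; a quick sketch using standard IOS-XE output filtering:

```
show running-config | include ip routing|ip http secure-server|restconf
```

Each configured line should appear in the output. On the Catalyst 3850/3650, also confirm the two netconf-yang lines with show running-config | include netconf-yang.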


Step 2:
Log into Cloud Control Center and navigate to Policy Fabric > Elisity Edge. Find the Virtual Edge you want to use to onboard your access switch as a Virtual Edge Node for policy enforcement, select the more options icon to its right, and then select Add Virtual Edge Node. In this example we will be onboarding a Catalyst 3850 access switch.

(Click to enlarge)

 

Step 3: Fill out the required fields and select Submit. Details about each field are provided in the chart below. These details can always be viewed and edited by selecting the more options icon to the right and selecting Edit Virtual Edge Node Configuration. 

 

(Click to enlarge)

 

The following chart provides details about each required field:

Switch Management IP

This is the management IP of the switch you wish to onboard as a Virtual Edge Node for policy enforcement. This can be any IP as long as it is reachable from the previously deployed Virtual Edge VM container. This field is mandatory.

Switch Admin Username

This is the admin username of the switch you wish to onboard as a Virtual Edge Node for policy enforcement. This can either be local or TACACS/RADIUS. Privilege 15 is required. This field is mandatory. 

Switch Admin Password

This is the admin password of the switch you wish to onboard as a Virtual Edge Node for policy enforcement. This can either be local or TACACS/RADIUS. Privilege 15 is required. This field is mandatory.

Virtual Edge Node Location Address

The location of the Virtual Edge Node so that Cloud Control Center reflects the location of the onboarded switch. This field is optional. 


Step 4: Refresh the page and select the expand icon next to the Virtual Edge VM until the circle next to the Virtual Edge Node name goes from grey with a status of Discovered to green with a status of Registered. This can take several minutes. If the status never changes, there is an IP connectivity issue between the Virtual Edge VM and the switch you are trying to onboard as a Virtual Edge Node. 

(Click to enlarge)

 

You can select the Virtual Edge Node name to see more details about the switch you just onboarded. 

 

(Click to enlarge)


Step 5:
Enable Device Track. The Device Track feature enables the Virtual Edge Node to glean additional user, application, and device information via Cisco IP Device Tracking technology. By default, this feature is disabled. It is recommended to enable this feature after onboarding a Virtual Edge Node.

 

(Click to enlarge)

The Virtual Edge VM will dynamically configure the Virtual Edge Node with the appropriate IOS-XE configuration for the Virtual Edge VM to glean user, device, and application identity and behavior. Existing and new Elisity Cognitive Trust policies will be pushed to the appropriate Virtual Edge Node immediately after onboarding.

Decommissioning and Deleting a Virtual Edge VM


Step 1:
Select the more options icon to the right of the Virtual Edge VM and then select Decommission Virtual Edge.

 

NOTE:

Before you can decommission a Virtual Edge VM, all Virtual Edge Nodes onboarded with that Virtual Edge VM must first be decommissioned and deleted.  

 

(Click to enlarge)


Step 2:
Wait 60 seconds after decommissioning the Virtual Edge VM. Select the more options icon to the right of the Virtual Edge VM and then select Delete Virtual Edge. Refer to the previous image. 

Decommissioning and Deleting a Virtual Edge Node

Step 1: Select the more options icon to the right of the Virtual Edge Node and then select Decommission Virtual Edge Node. The Virtual Edge Node status will say Decommissioned.

 

(Click to enlarge)


Step 2:
Wait 60 seconds after decommissioning the Virtual Edge Node. Select the more options icon to the right of the Virtual Edge Node and then select Delete Virtual Edge Node. Refer to the previous image.