Virtual Small Form Factor
You can monitor traffic at branch offices using GigaVUE-FM installed at your central location or headquarters. This feature uses the following components:
- G-TAP A Series 2
- GigaVUE-FM
- Host Hardware
- VMware ESXi Host
- Script
The G-TAP A Series 2 taps the traffic in the branch office and forwards it to the host hardware on which the VMware ESXi host is installed. GigaVUE-FM must be installed and running in the central location, in a private data center, or in a remote office in the cloud. Ensure that the GigaVUE-FM installed in the central location can communicate with the branch office.
Keep in mind the following instructions when using this feature:
- You are responsible for purchasing and installing the physical host machine, the operating system, and the ESXi software layer.
- The host machine must be dedicated to supporting the Gigamon visibility solution.
- A multi-host installation supports a maximum of 15 hosts.
- The GigaVUE V Series Node must be launched with three vNICs: one management NIC and two data NICs.
Hardware Requirements
The hardware must have four Ethernet ports to avoid loops in the network traffic flow.
Four-port host:
- Two Ethernet ports receive ingress traffic into the host from the G-TAP A Series 2.
- One Ethernet port receives management traffic.
- One Ethernet port sends egress traffic to the tools.
The following table lists the minimum hardware requirements for the host:
| Hardware | Minimum Requirement |
| --- | --- |
| Disk Space | 256 GB |
| RAM | 16 GB |
| CPU | 8 vCPU |
| Number of 1 G ports (RJ45) | 4 |
| Number of 10 G ports (RJ45) | 2 |
| Number of 10 G ports (SFP+) | 2 |
Types of Deployment
Single Host Installation
To deploy GigaVUE V Series Nodes, you can use a single host that connects to the GigaVUE-FM located in the central location.
Multi-host Installation
To deploy GigaVUE V Series Nodes, you can use multiple hosts, located in different places, that connect to the GigaVUE-FM located in the central location.
Prerequisites:
- Python version 3.6 or above must be installed on the host where the script is run.
- Download the Virtual Small Form Factor image file (tar.gz) and extract it on the host where the script is run.
- The OVF Tool must be installed on the host machine where the script is run. Refer to Step 2: Install the OVF Tool for instructions on how to install it.
- Ensure IP connectivity between the VMware ESXi host and the host where the script is run.
- The OVF file and the VMDK file must be present in the same location (see the preflight sketch after this list).
- GigaVUE-FM must be installed and configured in your central location.
- A monitoring domain (environment/connection) must be configured in the branch offices to monitor the traffic.
- A monitoring session and an Application Intelligence solution must be configured, and the deployment must be initiated.
- The monitoring session must be configured with an ingress raw endpoint (REP) and an egress tunnel endpoint (TEP). Refer to Create Raw Endpoint and Create Tunnel Endpoint for more detailed information on how to add a REP and TEP to the monitoring session.
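The following is a minimal preflight sketch for these prerequisites. It is not part of the Gigamon script package; it uses only the Python standard library, and the extracted image directory name is a hypothetical placeholder.

import shutil
import subprocess
import sys
from pathlib import Path

# Hypothetical name of the directory extracted from the tar.gz image file.
IMAGE_DIR = Path("vseries-node-virtual-mini_2.11.0")

# Python 3.6 or above must be installed.
assert sys.version_info >= (3, 6), "Python 3.6 or above is required"

# The OVF Tool must be installed and reachable on PATH (see Step 2).
assert shutil.which("ovftool"), "ovftool not found on PATH"
subprocess.run(["ovftool", "--version"], check=True)

# The OVF file and the VMDK file must be present in the same location.
ovfs = sorted(IMAGE_DIR.glob("*.ovf"))
vmdks = sorted(IMAGE_DIR.glob("*.vmdk"))
assert ovfs and vmdks, "OVF and VMDK files must be in the same directory"
print("Preflight OK:", ovfs[0].name, vmdks[0].name)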
Step 1: Install VMware ESXi Host
Refer to Install ESXi Interactively topic in the VMware documentation for more detailed information on how to install a VMware ESXi host in your hardware.
Step 2: Install the OVF Tool
Refer to Installing VMware OVF Tool topic in VMware Documentation for step-by-step instruction on how to install the OVF tool.
Step 3: Deploy GigaVUE V Series Nodes using the Script
You can deploy GigaVUE V Series Nodes using a script that you can download from the Gigamon Customer Portal. The script package contains the following files:
- input.json
- esxi_host.json
- install_script.py
- multi_host_upgrade_script.py
Install the Virtual Machine on Single Host
Download the script and update input.json with the following details:
- V Series Node configuration details, such as the V Series Node name, physical NICs, and vSwitch details.
- GigaVUE-FM registration details, such as the GigaVUE-FM IP address, the monitoring domain name, and the connection name under which the GigaVUE V Series Node must be deployed.
input.json (the disk_store, Tool_port_*, and Mgmt_port_* fields are optional; specify an empty value for any field you do not need):
{
  "esxi_host": [
    {
      "vswitch": [
        {
          "name": "<Ingress switch 1 name>",
          "uplink-name": "<Ingress uplink 1>"
        },
        {
          "name": "<Ingress switch 2 name>",
          "uplink-name": "<Ingress uplink 2>"
        },
        {
          "name": "<Mgmt network switch>",
          "uplink-name": "<Mgmt nw uplink>"
        },
        {
          "name": "<Egress switch name>",
          "uplink-name": "<Egress uplink>"
        }
      ],
      "vm": {
        "guest_name": "<Vseries node name>",
        "disk_store": "<datastore name>",
        "Tool_port_IP": "<static Tool IP if needed>",
        "Tool_port_GW": "<static Tool IP GW>",
        "Tool_port_netmask": "<static Tool IP netmask>",
        "Mgmt_port_IP": "<static Mgmt IP if needed>",
        "Mgmt_port_GW": "<static Mgmt IP GW>",
        "Mgmt_port_netmask": "<static Mgmt IP netmask>"
      },
      "fabric_node": {
        "groupName": "<monitoring domain name>",
        "subGroupName": "<connection name>",
        "remoteIP": "<FM IP>",
        "username": "<username for thirdparty registration>",
        "password": "<password for thirdparty registration>"
      }
    }
  ]
}
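For illustration, the following is a hypothetical filled-in input.json. Every value below is a placeholder for your environment (the uplink names assume the ingress/management/egress port mapping described later in this section), and the optional static-IP fields are left empty.

{
  "esxi_host": [
    {
      "vswitch": [
        {
          "name": "vss-ingress-1",
          "uplink-name": "vmnic0"
        },
        {
          "name": "vss-ingress-2",
          "uplink-name": "vmnic2"
        },
        {
          "name": "vss-mgmt",
          "uplink-name": "vmnic1"
        },
        {
          "name": "vss-egress",
          "uplink-name": "vmnic3"
        }
      ],
      "vm": {
        "guest_name": "branch1-vseries-1",
        "disk_store": "datastore1",
        "Tool_port_IP": "",
        "Tool_port_GW": "",
        "Tool_port_netmask": "",
        "Mgmt_port_IP": "",
        "Mgmt_port_GW": "",
        "Mgmt_port_netmask": ""
      },
      "fabric_node": {
        "groupName": "BranchOffices",
        "subGroupName": "branch1-conn",
        "remoteIP": "203.0.113.10",
        "username": "vseries-reg",
        "password": "vseries-reg-password"
      }
    }
  ]
}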
Run the script using the following command:
python install_script.py -j <input config file> -i <image file name> -e <esxi host IP> -u <esxi host username> -p <esxi password>
For Example:
python install_script.py -j input.json -i vseries-node-file10.ovf -e 1.1.1.1 -u root -p 1gigamon# -v vseries-node-virtual-mini_2.11.0
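If you rerun the installer often, a small hypothetical wrapper (not part of the Gigamon package) can confirm that input.json parses as valid JSON before the deployment starts; the argument values are copied from the example above.

import json
import subprocess

# Fail fast on malformed JSON (for example, a stray annotation or trailing comma).
with open("input.json") as f:
    json.load(f)

# Invoke the install script exactly as in the example above.
subprocess.run(
    ["python", "install_script.py",
     "-j", "input.json",
     "-i", "vseries-node-file10.ovf",
     "-e", "1.1.1.1",
     "-u", "root",
     "-p", "1gigamon#",
     "-v", "vseries-node-virtual-mini_2.11.0"],
    check=True,  # stop if the installer exits with a non-zero status
)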
After you update the file and run the script, the corresponding VSS switches are created in the VMware ESXi host as follows:
- Ingress Switch 1 is connected to the first physical Ethernet port. A port group for ingress is created and attached to Ingress Switch 1.
- Ingress Switch 2 is connected to the third physical Ethernet port. A port group for ingress is created and attached to Ingress Switch 2.
- The Management Network Switch is connected to the second physical Ethernet port. A port group for management is created and attached to the Management Network Switch.
- The Egress Switch is connected to the fourth physical Ethernet port. A port group for egress is created and attached to the Egress Switch.
Once the VSS switches are configured, the V Series Node is launched:
- The GigaVUE V Series Node's data NIC is attached to Ingress Switch 1 for receiving the ingress traffic.
- The GigaVUE V Series Node's management NIC is attached to the Management Network Switch.
- When using a four-NIC host, the GigaVUE V Series Node's tunnel NIC is attached to the Egress Switch for sending out the egress traffic.
The GigaVUE V Series Node is then registered with the GigaVUE-FM in the central location.
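To spot-check the result from the ESXi side, the following is a minimal sketch using the open-source pyVmomi library (not part of the Gigamon script package); the host address, credentials, and node name are hypothetical placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ESXI_HOST = "1.1.1.1"            # hypothetical ESXi host IP
ESXI_USER = "root"
ESXI_PASS = "1gigamon#"
NODE_NAME = "branch1-vseries-1"  # guest_name from your input.json

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host=ESXI_HOST, user=ESXI_USER, pwd=ESXI_PASS, sslContext=ctx)
try:
    content = si.RetrieveContent()
    # List the VSS switches created by the install script.
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    print("vSwitches:", [vs.name for vs in host.config.network.vswitch])
    # Confirm the V Series Node VM exists and is powered on.
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
    for vm in vms:
        if vm.name == NODE_NAME:
            print(vm.name, "power state:", vm.runtime.powerState)
finally:
    Disconnect(si)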
Install the Virtual Machine on Multiple Hosts
Download the script and update input.json with the following details:
- ESXi host details, such as the IP address, username, and password.
- V Series Node configuration details, such as the V Series Node name, physical NICs, and vSwitch details.
- GigaVUE-FM registration details, such as the GigaVUE-FM IP address, the monitoring domain name, and the connection name under which the GigaVUE V Series Node must be deployed.
input.json (the disk_store, Tool_port_*, and Mgmt_port_* fields are optional; specify an empty value for any field you do not need):
{
  "esxi_host": [
    {
      "vswitch": [
        {
          "name": "<Ingress switch 1 name>",
          "uplink-name": "<Ingress uplink 1>"
        },
        {
          "name": "<Ingress switch 2 name>",
          "uplink-name": "<Ingress uplink 2>"
        },
        {
          "name": "<Mgmt network switch>",
          "uplink-name": "<Mgmt nw uplink>"
        },
        {
          "name": "<Egress switch name>",
          "uplink-name": "<Egress uplink>"
        }
      ],
      "vm": {
        "guest_name": "<Vseries node name>",
        "disk_store": "<datastore name>",
        "Tool_port_IP": "<static Tool IP if needed>",
        "Tool_port_GW": "<static Tool IP GW>",
        "Tool_port_netmask": "<static Tool IP netmask>",
        "Mgmt_port_IP": "<static Mgmt IP if needed>",
        "Mgmt_port_GW": "<static Mgmt IP GW>",
        "Mgmt_port_netmask": "<static Mgmt IP netmask>"
      },
      "fabric_node": {
        "groupName": "<monitoring domain name>",
        "subGroupName": "<connection name>",
        "remoteIP": "<FM IP>",
        "username": "<username for thirdparty registration>",
        "password": "<password for thirdparty registration>"
      }
    }
  ]
}
Update the esxi_host.json file with the appropriate details. The per-host cfg_file field is optional; if it is not provided, the global cfg_file is used.
{
  "esxi_host": [
    {
      "esxi_hostname": "<hostname or IP>",
      "esxi_username": "<username>",
      "esxi_password": "<password>",
      "cfg_file": "<host specific cfg json file>"
    }
  ],
  "image": "<ovf file for all hosts>",
  "cfg_file": "<global cfg json file>"
}
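For illustration, the following is a hypothetical filled-in esxi_host.json for two branch hosts; all values are placeholders. The first host names its own cfg file, while the second omits cfg_file, so the global cfg_file applies to it.

{
  "esxi_host": [
    {
      "esxi_hostname": "192.0.2.11",
      "esxi_username": "root",
      "esxi_password": "branch1-esxi-password",
      "cfg_file": "branch1_input.json"
    },
    {
      "esxi_hostname": "192.0.2.12",
      "esxi_username": "root",
      "esxi_password": "branch2-esxi-password"
    }
  ],
  "image": "vseries-node-file10.ovf",
  "cfg_file": "input.json"
}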
Run the script using the following command to install the GigaVUE V Series Nodes:
python multi_host_upgrade_script.py -j <config file> --install
For Example:
python multi_host_upgrade_script.py -j esxi_host.json --install
After you update the files and run the script, the corresponding VSS switches are created in each VMware ESXi host as described in the previous section.
Once the VSS switches are configured, the V Series Node is launched:
- The GigaVUE V Series Node's data NIC is attached to Ingress Switch 1 for receiving the ingress traffic.
- The GigaVUE V Series Node's management NIC is attached to the Management Network Switch.
- When using a four-NIC host, the GigaVUE V Series Node's tunnel NIC is attached to the Egress Switch for sending out the egress traffic.
The GigaVUE V Series Node is then registered with the GigaVUE-FM in the central location.
Upgrade GigaVUE V Series Nodes using the Script
You can upgrade your GigaVUE V Series Nodes by following the steps below:
Update the input.json and esxi_host.json files as described in the Install the Virtual Machine on Multiple Hosts section, and run the script using the following command. When using a single host, add that host's ESXi details to your esxi_host.json file.
python multi_host_upgrade_script.py -j <config file> --upgrade
For Example:
python multi_host_upgrade_script.py -j esxi_host.json --upgrade
View Summary
You can view the current status of your V Series Nodes and ESXi hosts, including whether they are currently running or on standby.
Update the input.json and esxi_host.json files as described in the Install the Virtual Machine on Multiple Hosts section, and run the script using the following command. When using a single host, add that host's ESXi details to your esxi_host.json file.
python multi_host_upgrade_script.py -j <config file> --summary
For Example:
python multi_host_upgrade_script.py -j esxi_host.json --summary
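Because the same script drives the install, upgrade, summary, and rollback operations, you can chain the documented invocations. The following is a hypothetical sketch (not part of the Gigamon package) that captures a status snapshot before and after an upgrade, assuming the script returns a non-zero exit code on failure.

import subprocess

def run(flag):
    # Invoke the documented multi-host script with a single operation flag.
    subprocess.run(
        ["python", "multi_host_upgrade_script.py", "-j", "esxi_host.json", flag],
        check=True,  # abort the sequence if any host reports a failure
    )

run("--summary")   # status before the upgrade
run("--upgrade")   # upgrade all hosts in esxi_host.json
run("--summary")   # status after the upgrade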
Roll Back
If you wish to roll back to the previous version of the GigaVUE V Series Node due to an upgrade failure, or if you face issues when deploying the newer version, follow the steps below:
Update the input.json and esxi_host.json files as described in the Install the Virtual Machine on Multiple Hosts section, and run the script using the following command, entering the version to which you wish to roll back. When using a single host, add that host's ESXi details to your esxi_host.json file.
Note: You can only roll back to the previous version of the GigaVUE V Series Node. For example, when using GigaVUE V Series Node version 6.1.00, you can only roll back to version 2.7.0. Refer to the GigaVUE-FM Version Compatibility Matrix for GigaVUE V Series Node version details.
python multi_host_upgrade_script.py -j <config file> --rollback <version>
For Example:
python multi_host_upgrade_script.py -j esxi_host.json --rollback 6.1.00