Configure GigaVUE Fabric Components using VMware ESXi

This topic provides instructions on how to deploy the fabric components for VMware ESXi.

Note:  When registering GigaVUE V Series Nodes in GigaVUE-FM, make sure that the connection name under each Monitoring Domain is unique. When the GigaVUE-FM version is 6.10.00 or later and the fabric components are on (n-1) or (n-2) versions, you must create a username and password instead of using tokens in the registration data. For more details, refer to the Configure Role-Based Access for Third-Party Orchestration section in the 6.9 Documentation.

Recommended Instance Type

The following table lists the recommended instance types for deploying the fabric components:

Compute Instance       | vCPU  | Memory
GigaVUE V Series Node  | 4vCPU | 8GB
UCT-V Controller       | 2vCPU | 4GB

GigaVUE V Series Node Deployment and Registration

The following image illustrates a scenario for deploying and registering a GigaVUE V Series node. In this example, a GigaVUE HC Series device monitors north-south traffic and forwards Application Metadata to a SIEM/observability tool that accepts only JSON records. The entire solution is managed through GigaVUE-FM.

The GigaVUE HC Series uses Application Metadata Intelligence to generate application metadata in CEF format. Application Metadata Exporter, running on the GigaVUE V Series node, converts these CEF records to JSON before exporting them to the tool over HTTPS or Kafka.

Note:  AMI is supported on both GigaVUE HC Series and GigaVUE V Series platforms. You can also deploy a separate GigaVUE V Series node to generate application metadata for monitoring east-west traffic.

You can deploy the GigaVUE V Series Node using one of the following deployment modes, based on your requirements:

■   Single uplink for Management, Data and Tool connectivity.
■   Single uplink for Management and Tool connectivity, and another uplink for Data connectivity.
■   (Recommended) Separate uplinks (three) for Management, Data, and Tool connectivity.

Prerequisites

Complete the following prerequisites before deploying the GigaVUE V Series Node in VMware ESXi:

■   Configuring port groups: Create a Management Port Group for connectivity with GigaVUE‑FM, a Data Port Group to receive data from the H Series node, and a Tool Port Group for connectivity with the tools.
■   Configuring virtual switch: Create unique virtual switches for each port group. Refer to VMware Documentation for more information.
■   Download the OVA files from the Gigamon Community portal and extract them to get the OVF and VMDK files. Select vseries-node-file.ovf for AMX deployment.

Deploy and Register GigaVUE V Series Node

To deploy and register:

1.   Log into the VMware ESXi web interface.
2. Right-click the ESXi Host, Cluster, or data center on which you want to deploy the GigaVUE V Series Node and then select Create/Register VM. The New Virtual Machine wizard appears.
3. In the Select Creation Type page, select the Deploy a virtual machine from an OVF or OVA file option.
4. Click Next. The Select OVF and VMDK files page appears.
5. In the Select OVF and VMDK files page, enter a unique name for the virtual machine and upload the .ovf and .vmdk files from your local machine.
6. Click Next. The Select storage page appears.
7. Select a datastore where the virtual machine’s files will be stored.
8. Click Next. The Deployment Options page appears.
9. In the Deployment Options page, select the management port, data port, and tool port as referenced in the prerequisites.
a. Select the Deployment Type from the list:
•   Do Not Use DHCP – Select this option if you want to use static IP addresses for the management, data, and tool ports.
•   Management, Data and Tool Port DHCP – Select this option if you want to use dynamic IP addresses for the management, data, and tool ports.
•   Management Port DHCP – Select this option if you want to use dynamic IP address only for the management port.
•   Tool Port DHCP - Select this option if you want to use dynamic IP address only for the tool port.
•   Data Port DHCP – Select this option if you want to use dynamic IP address only for the data port.
b. Select Thin/Thick in the Disk Provisioning field.
c. Clear the Power on automatically checkbox, which is selected by default. It is recommended to disable this option and review the entire configuration before powering on the virtual machine.
10. Click Next. The Additional Settings page appears.
11. Do the following configuration in the Additional Settings page.
a. In the System section, enter the hostname of the V Series node instance in the Hostname field and create a new admin password for the V Series node instance in the Administrative Login Password field.

This credential will be used for V Series SSH access. The default username is gigamon. If the deployment fails, you can log in to the V Series through SSH or the console and check the logs for troubleshooting.

b. In the Network Connectivity section, enter the required fields based on the selected network configuration.

Note:  Make sure to clear the Management Port DHCP checkbox if you want to use a static IP address for the management port. If you select the Management Port DHCP checkbox, a dynamic IP address is configured for the management port even if you selected the Do Not Use DHCP option in the Deployment Options page.

c. Enter the required value in the Management Port MTU size in bytes field. The default value is 1500 bytes.
d. Enter the DNS server address in the Nameserver field to resolve the domain name of the tool destination URL.
e. In the Optional Parameters section, enter the Monitoring Domain name in the GroupName field and the connection name in the SubGroupName field. These values correspond to the Monitoring Domain and connection that you created as described in Configure GigaVUE Fabric Components using VMware ESXi.
f. Enter the token created in GigaVUE-FM in the JWT Token used for registration field. Refer to Configure GigaVUE Fabric Components using VMware ESXi.
g. Enter the GigaVUE-FM IP address and remote port in the RemoteIP and RemotePort fields, respectively. If these values are incorrect, you must redeploy the V Series Node.
h. In the Custom node properties field enter app_mode=linux_apps (mandatory).
12. Click Next. The Ready to complete page appears.
13. Review all the entered information and modify the configuration if required.
14. Click Finish. When the operation completes, you have successfully deployed a GigaVUE V Series Node.

Verify GigaVUE V Series Node Registration

During the initial bring-up, the V Series Node reboots multiple times for initialization. After a few minutes, you can check the status of the deployment in GigaVUE-FM. If the status is Failed, check the logs to troubleshoot.

To check the logs:

1.   Log in to V Series Node via console or SSH to management IP address.
2. Run the following command in the terminal to check the registration details:
tail -f /var/log/vseries-node-reg.log

A single AMX instance processes traffic within the published performance KPIs. Deploy additional instances when the traffic volume exceeds these thresholds, when packet drops occur, or when CPU or memory utilization remains consistently high, to maintain stable and efficient performance. For details about the performance KPIs, contact Technical Support.

OVF Package Files

Form Factor                                   | Supported Ports                           | File Name               | Comments
Small (2vCPU, 4GB Memory, 8GB Disk space)     | Mgmt Port, Tool Port, and 8 Network Ports | vseries-node-file1.ovf  | Use these files when deploying GigaVUE V Series Node via VMware vCenter.
Medium (4vCPU, 8GB Memory, 8GB Disk space)    | Mgmt Port, Data Port, and Tool Port       | vseries-node-file2.ovf  |
Large (8vCPU, 16GB Memory, 8GB Disk space)    | Mgmt Port, Tool Port, and 8 Network Ports | vseries-node-file3.ovf  |
Small (2vCPU, 4GB Memory, 8GB Disk space)     | Mgmt Port, Data Port, and Tool Port       | vseries-node-file4.ovf  | Use these files when deploying GigaVUE V Series Node via VMware NSX-T Manager.
Medium (4vCPU, 8GB Memory, 8GB Disk space)    | Mgmt Port, Data Port, and Tool Port       | vseries-node-file5.ovf  |
Large (8vCPU, 16GB Memory, 8GB Disk space)    | Mgmt Port, Tool Port, and 2 Network Ports | vseries-node-file6.ovf  |
Small (2vCPU, 4GB Memory, 8GB Disk space)     | Supported Ports                           | vseries-node-file7.ovf  | Use these files when deploying GigaVUE V Series Node via VMware ESXi without vCenter.
Medium (4vCPU, 8GB Memory, 8GB Disk space)    | Mgmt Port, Tool Port, and 8 Network Ports | vseries-node-file8.ovf  |
Large (8vCPU, 16GB Memory, 8GB Disk space)    | Mgmt Port, Data Port, and Tool Port       | vseries-node-file9.ovf  |
Larger (8vCPU, 16GB Memory, 80GB Disk space)  | Mgmt Port, Tool Port, and 8 Network Ports | vseries-node-file12.ovf | Use these files when deploying GigaVUE V Series Node via VMware vCenter and if you wish to configure the AMX application.
Larger (8vCPU, 16GB Memory, 80GB Disk space)  | Mgmt Port, Data Port, and Tool Port       | vseries-node-file15.ovf | Use these files when deploying GigaVUE V Series Node via VMware ESXi without vCenter and if you wish to configure the AMX application. Note: This file supports form factors with a higher range of CPU, memory, and disk space.
                                              | Mgmt Port, Data Port, and Tool Port       | vseries-node-file16.ovf | minipc - Virtual Small Form Factor

Assign Static IP address for GigaVUE V Series

By default, the GigaVUE V Series gets assigned an IP address using DHCP.

Assign a static IP address to the GigaVUE V Series node in the following scenarios:

• When you selected DHCP for the port groups during deployment and want to change to a static IP address after deployment.

• When you assigned a static IP address for the port groups during deployment and want to update the assigned static IP address.

To assign a static IP address, perform the following steps:

1.   Navigate to the /etc/netplan/ directory.
2. Create a new .yaml file.

Note:  Do not use the default 50-cloud-init.yaml file.

3. Update the file as shown in the following sample:
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    <interface>:                # Replace with your actual interface name (e.g., eth0)
      dhcp4: no
      dhcp6: no
      addresses:
        - <IPV4/24>             # e.g., 192.168.1.10/24
        - <IPV6/64>             # e.g., 2001:db8:abcd:0012::1/64
      nameservers:
        addresses:
          - <DNS_IPV4>          # e.g., 8.8.8.8
          - <DNS_IPV6>          # e.g., 2001:4860:4860::8888
      routes:
        - to: 0.0.0.0/0
          via: <IPV4_GW>        # e.g., 192.168.1.1
        - to: ::/0
          via: <IPV6_GW>        # e.g., 2001:db8:abcd:0012::fffe
                        
Example netplan config:

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens3:
      addresses:
        - 10.114.53.24/21
      dhcp4: no
      dhcp6: no
      accept-ra: false
      routes:
        - to: 10.114.48.1/32
          scope: link
        - to: default
          via: 10.114.48.1
4. Save the file.
5. Apply the configuration.
$ sudo netplan apply

6. Restart the GigaVUE V Series service.

$ sudo service vseries-node restart

The deployed GigaVUE V Series node registers with GigaVUE‑FM. After successful registration, the GigaVUE V Series node sends heartbeat messages to GigaVUE‑FM every 30 seconds. If one heartbeat is missing, the fabric component status appears as Unhealthy. If more than five heartbeats fail to reach GigaVUE‑FM, GigaVUE‑FM tries to reach the GigaVUE V Series node. If that also fails, GigaVUE‑FM unregisters the GigaVUE V Series node and removes it from GigaVUE‑FM.

Register UCT-V Controller

IMPORTANT: You must enable basic authentication to launch the GigaVUE fabric components for version 6.9 and lower. For instructions on enabling basic authentication, refer to Authentication Type.

Deploy UCT-V Controller through VMware vCenter on the host server.

To register UCT-V Controller after launching a Virtual Machine using a configuration file, perform the following steps:

  1. Log in to the UCT-V Controller.
  2. Create a local configuration file (/etc/gigamon-cloud.conf) and enter the following user data.
    Refer to Configure Tokens for token creation details.
    Registration:
        groupName: <Monitoring Domain Name>
        subGroupName: <Connection Name>
        token: <Token>
        remoteIP: <IP address of the GigaVUE-FM>
        sourceIP: <IP address of UCT-V Controller> (Optional Field)
        remotePort: 443
  3. When using static IP configuration, or multiple interfaces with static IP configuration, create a new .yaml file in the /etc/netplan/ directory.
  4. Update the file and save it.
  5. Restart the UCT-V Controller service.
    $ sudo service uctv-cntlr restart
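
As a concrete illustration, a completed /etc/gigamon-cloud.conf for a UCT-V Controller might look like the following. The Monitoring Domain name, connection name, and IP addresses below are hypothetical placeholders; substitute the values from your own GigaVUE-FM environment and the token you generated there:

```yaml
Registration:
    groupName: ESXi-Monitoring-Domain      # hypothetical Monitoring Domain name
    subGroupName: ESXi-Connection-1        # hypothetical connection name
    token: <Token>                         # token generated in GigaVUE-FM
    remoteIP: 192.0.2.10                   # example GigaVUE-FM IP address
    sourceIP: 192.0.2.25                   # example UCT-V Controller IP (optional)
    remotePort: 443
```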

Assign Static IP address for UCT-V Controller

By default, the UCT-V Controller gets assigned an IP address using DHCP.

To assign a static IP address, perform the following steps:

1.   Navigate to the /etc/netplan/ directory.
2. Create a new .yaml file.

Note:  Do not use the default 50-cloud-init.yaml file.

3. Update the file as shown in the following sample:
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    <interface>:                # Replace with your actual interface name (e.g., eth0)
      dhcp4: no
      dhcp6: no
      addresses:
        - <IPV4/24>             # e.g., 192.168.1.10/24
        - <IPV6/64>             # e.g., 2001:db8:abcd:0012::1/64
      nameservers:
        addresses:
          - <DNS_IPV4>          # e.g., 8.8.8.8
          - <DNS_IPV6>          # e.g., 2001:4860:4860::8888
      routes:
        - to: 0.0.0.0/0
          via: <IPV4_GW>        # e.g., 192.168.1.1
        - to: ::/0
          via: <IPV6_GW>        # e.g., 2001:db8:abcd:0012::fffe
                        
Example netplan config:

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens3:
      addresses:
        - 10.114.53.24/21
      dhcp4: no
      dhcp6: no
      accept-ra: false
      routes:
        - to: 10.114.48.1/32
          scope: link
        - to: default
          via: 10.114.48.1
4. Save the file.
5. Apply the configuration.
$ sudo netplan apply

6. Restart the UCT-V Controller service.

$ sudo service uctv-cntlr restart

The deployed UCT-V Controller registers with GigaVUE‑FM. After successful registration, the UCT-V Controller sends heartbeat messages to GigaVUE‑FM every 30 seconds. If one heartbeat is missing, the fabric component status appears as Unhealthy. If more than five heartbeats fail to reach GigaVUE‑FM, GigaVUE‑FM tries to reach the UCT-V Controller. If that also fails, GigaVUE‑FM unregisters the UCT-V Controller and removes it from GigaVUE‑FM.

Note:  When you deploy GigaVUE V Series Nodes or UCT-V Controllers using Third Party orchestration, you cannot delete the monitoring domain without unregistering the V Series Nodes or UCT-V Controllers.

Register UCT-V

To register UCT-V after launching a Virtual Machine using a configuration file, perform the following steps:

  1. Install the UCT-V in the Linux or Windows platform. For detailed instructions, refer to Linux UCT-V Installation and Windows UCT-V Installation.

  2. Log in to the UCT-V.
  3. Create a local configuration file and enter the following user data.
    • /etc/gigamon-cloud.conf is the local configuration file in Linux platform.
    • C:\ProgramData\uctv\gigamon-cloud.conf is the local configuration file in Windows platform.
    • When creating the C:\ProgramData\uctv\gigamon-cloud.conf file, ensure that the file name extension is .conf. To view file name extensions in Windows, perform the following steps:
      1. Go to File Explorer and open the File Location.
      2. On the top navigation bar, select View.
      3. In the View tab, enable the File name extensions checkbox.
    Registration:
        groupName: <Monitoring Domain Name>
        subGroupName: <Connection Name>
        token: <Token>
        remoteIP: <IP address of the UCT-V Controller 1>, <IP address of the UCT-V Controller 2>
  4. Restart the UCT-V service.

    Note:  Before restarting the UCT-V service, update the /etc/uctv/uctv.conf file with the network interface used to tap traffic and the outgoing interface for the tapped traffic.

    • Linux platform:
      $ sudo service uctv restart
    • Windows platform: Restart from the Task Manager.
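
As an illustration, a completed gigamon-cloud.conf for a UCT-V registering with two UCT-V Controllers might look like the following. The names and IP addresses are hypothetical placeholders; the comma-separated remoteIP list holds the IP addresses of your UCT-V Controllers:

```yaml
Registration:
    groupName: ESXi-Monitoring-Domain        # hypothetical Monitoring Domain name
    subGroupName: ESXi-Connection-1          # hypothetical connection name
    token: <Token>                           # token generated in GigaVUE-FM
    remoteIP: 192.0.2.25, 192.0.2.26         # example UCT-V Controller IP addresses
```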

Verification and Troubleshooting

After applying the configuration, the UCT-V registers with GigaVUE-FM.

After successful registration, the UCT-V sends heartbeat messages to GigaVUE-FM every 30 seconds.

If one heartbeat is missing, the fabric component status appears as Unhealthy.

If five consecutive heartbeats fail to reach GigaVUE-FM, GigaVUE-FM attempts to reach the UCT-V.

If that also fails, GigaVUE-FM unregisters the UCT-V and removes it from GigaVUE-FM.

Post Configuration Steps for Exporting Metadata for Mobile Networks using AMX

If you are deploying the GigaVUE V Series Node to configure the AMX application to export enriched metadata for mobile networks, perform the following steps:

  1. Select Edit on the VM page in the VMware ESXi. The Edit Settings page appears.
  2. In the Virtual Hardware tab, edit the following fields:
    • CPU: 40
    • Memory: 128GB
    • Hard disk 1: 200GB
    • (Optional) If you want higher throughput, change the Adapter type of the Network Adapter to SR-IOV passthrough.

    When exporting GigaVUE enriched metadata for mobile networks using the AMX application, you can also configure the GigaVUE V Series Node used to deploy the AMX application on GFM-HW2-FM001-HW. For instructions, refer to the GigaVUE-FM Hardware Appliances Guide.

    For information about how to configure the AMX application, refer to Application Metadata Exporter.

Edit the Ring Buffer Settings

For a high transactional ingress environment, perform the following steps to edit the ring buffer settings:

Note:  Perform these steps every time after rebooting the GigaVUE V Series Nodes.

  1. Log in to the GigaVUE V Series Node.
  2. Use the following command to view the pre-set hardware maximums and your current hardware settings.
    sudo ethtool -g <interface name>
  3. Verify that the ingress interface ring buffers are set to the maximum supported values.
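
The steps above can be sketched as a shell sequence. The interface name ens192 and the ring size 4096 are assumptions for illustration; use your actual ingress interface and the pre-set maximums that `ethtool -g` reports for your NIC:

```shell
# Hypothetical ingress (data-port) interface; replace with your own.
IFACE=ens192

if command -v ethtool >/dev/null 2>&1; then
  # View the pre-set hardware maximums and current ring sizes.
  sudo ethtool -g "$IFACE"

  # Raise the RX/TX rings to the maximums reported above
  # (4096 is an assumed example value).
  sudo ethtool -G "$IFACE" rx 4096 tx 4096

  # Confirm the new current hardware settings.
  sudo ethtool -g "$IFACE"
else
  echo "ethtool not available on this host"
fi
```

Because ring buffer settings revert on reboot, rerun this sequence after every restart of the GigaVUE V Series Node, as the note above states.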

The GigaVUE V Series Node deployed in VMware ESXi host appears in Third-party Orchestration Monitoring Domain page of GigaVUE‑FM.

Procedure to deploy V Series Node in VMware ESXi with SR-IOV Adapter

Perform the following steps when you deploy V Series Node in VMware ESXi host with SR-IOV Adapter:

  1. On the VM page in the VMware ESXi host environment, select Edit.

    The Edit Settings page appears.
  2. In the Virtual Hardware tab, edit the following fields:
    • CPU: 8
    • Memory: 16GB
    • Hard disk 1: 80GB
    • Network adapter 1: VM Network (Connected)
    • Network adapter 2: Port Group (Connected)
    • Network adapter 3: Port Group (Connected)
    • Video card: 4MB

    Note:  Make sure to select Reserve all guest memory for VM Memory.

    Deploy V Series Node with OVF15 template (Large Form Factor) with Management, Tool, and Data Ports. The Port-Group mappings and Netplan configs are as follows:

    1. Port-Group Mapping:

      • ens160: Mapped with VMNetwork

      • ens192 and ens224: Mapped with the Port Groups that you create

      Sample Netplan Configs:

      • ens160 with 192.168.10.X
      • ens192 with 192.168.20.X
      Example netplan config:

      gigamon@vseries:/etc/netplan$ more 60-netcfg.yaml
      network:
        version: 2
        renderer: NetworkManager
        ethernets:
          ens160:
            dhcp4: no
            dhcp6: no
            addresses:
              - 10.115.203.139/21
              - 2001:db8:1::10/64
            routes:
              - to: default
                via: 10.115.200.1
              - to: default
                via: 2001:db8:1::1
          ens192:
            dhcp4: no
            dhcp6: yes
            addresses:
              - 192.150.10.25/24
            routes:
              - to: 192.150.10.0/24
                scope: link

          ens224:
            dhcp4: no
            dhcp6: yes
            addresses:
              - 10.210.16.210/20
            routes:
              - to: 10.210.16.0/24
                scope: link
  3. Power off the VM and remove Network Adapter 2 and Network Adapter 3. Then, without saving, add two new Network Adapters and change the Adapter Type to SR-IOV passthrough.

    Once added, the user-created Port-Group mappings for ens192 and ens224 get swapped.

  4. In Edit Settings, swap the adapters to correct the configuration mismatch with Netplan configs.
  5. Save the configuration and deploy the VM.

    Now, ens192 and ens224 are mapped with the correct Port Group Mappings.

  6. Manually configure /etc/gigamon-cloud.conf with the registration details to register the V Series Node with GigaVUE‑FM. You can verify the file contents with the following command:
    gigamon@vsn-5gc-new:~$ cat /etc/gigamon-cloud.conf
  7. In the configuration file, provide the user data as shown below:
    • groupName: <Monitoring domain name>
    • subGroupName: <Connection name>
    • token: <Token>
    • remoteIP: <IP address of the GigaVUE-FM>
    • remotePort: 443

      Refer to Configure Tokens for token creation details.