Configure GigaVUE Fabric Components using VMware ESXi

This topic provides instructions on how to deploy the fabric components for VMware ESXi.

Note:  When registering GigaVUE V Series Nodes in GigaVUE-FM, make sure that the connection name under each Monitoring Domain is unique. When the GigaVUE-FM version is 6.10.00 or above and the Fabric Components are on (n-1) or (n-2) versions, you must create a Username and Password instead of using tokens in the registration data. For more details, refer to the Configure Role-Based Access for Third-Party Orchestration section in the 6.9 Documentation.
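
For example, with username/password registration the user data in the local configuration file carries credentials instead of a token. This is an illustrative sketch only; the username and password key names below are assumptions, so confirm the exact keys in the Configure Role-Based Access for Third-Party Orchestration section:

    Registration:
        groupName: <Monitoring Domain Name>
        subGroupName: <Connection Name>
        username: <Username>    # assumed key name; used instead of token
        password: <Password>    # assumed key name; used instead of token
        remoteIP: <IP address of the GigaVUE-FM>
        remotePort: 443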

Recommended Instance Type

The following table lists the recommended instance type for deploying the fabric components:

| Compute Instances | vCPU | Memory |
|---|---|---|
| GigaVUE V Series Node | 4 vCPU | 8 GB |
| UCT-V Controller | 2 vCPU | 4 GB |

Refer to the following topics for more details on how to register the fabric components with GigaVUE‑FM after deploying them using VMware ESXi on the host server.

Fabric Component Registration for Deployments

The following table displays the fabric component registration required for various deployments.

| Application | Deployment | GigaVUE V Series Registration | UCT-V Registration | UCT-V Controller Registration |
|---|---|---|---|---|
| Linux | AMX, 5G Cloud, GV-HTTP2, 5G-SBI, Sbipoe | Required | NA | NA |
| GigaSMART | AMI, AFI, Slicing, App Viz, Dedup, Header Stripping, Load Balancing, Masking | Required | Required | Required |

GigaVUE V Series Node Deployment and Registration

The following architecture diagram explains the deployment of GigaVUE V Series Node in VMware ESXi and registration of the V Series Node with GigaVUE‑FM.

The architecture includes an HC Series device that is connected to the VMware ESXi server through a data port. The VMware ESXi server has virtual switches and a V Series node that communicate with the HC Series device, GigaVUE-FM, and tools through the Management, Data, and Tool port groups. Each port group is mapped to a unique virtual switch to ensure smooth transmission of data, management, and tool traffic. The V Series node is deployed with the Linux or GigaSMART applications. The Data port group carries monitored traffic from the HC Series to the V Series, and the Tool port group connects the V Series to the external tools.

Deployment Mode

The GigaVUE fabric components support the following deployment modes:

■   Single uplink for Management, Data and Tool connectivity
■   Single uplink for Management and Tool connectivity, and another uplink for Data connectivity
■   Separate uplinks (three) for Management, Data, and Tool connectivity

Prerequisites

The following prerequisites must be met before deploying the GigaVUE V Series Node in VMware ESXi.

■   Configuring port groups: Create a Management Port Group for connectivity with GigaVUE‑FM, a Data Port Group to receive data from the HC Series node, and a Tool Port Group for connectivity with the tools. Refer to VMware Documentation for more information.
■   Configuring virtual switch: Create unique virtual switches for each port group. Refer to VMware Documentation for more information.
■   Configuring monitoring domain: Create a monitoring domain in the GigaVUE‑FM UI. The Connection name must be unique across the monitoring domains. Refer to Create Monitoring Domain topic for more information.
■   Configuring token: Create a token for registration of V Series Node with GigaVUE‑FM. Refer to Configure Tokens topic for more information.
■   Download the OVA files from the Gigamon Community portal and extract them to get the OVF and VMDK files (a sample extraction command follows this list).
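
An OVA package is a tar archive, so on a Linux workstation a standard tar command extracts the .ovf and .vmdk files; the file name below is a placeholder for the OVA you actually downloaded:

    # Extract the OVF and VMDK files from the downloaded OVA (file name is illustrative)
    tar -xvf vseries-node.ova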

Deploy and Register GigaVUE V Series Node

Perform the following steps:

1.   Log in to the VMware ESXi web interface.
2. Right-click the ESXi Host, Cluster, or data center on which you want to deploy the GigaVUE V Series Node and then select Create/Register VM. The New Virtual Machine wizard appears.
3. In the Select Creation Type page, select the Deploy a virtual machine from an OVF or OVA file option.
4. Click Next. The Select OVF and VMDK files page appears.
5. In the Select OVF and VMDK files page, enter a unique name for the virtual machine and upload the .ovf and .vmdk files from your local machine.

Note:  Refer to the OVF Package Files table for selecting the required OVF and VMDK files.

6. Click Next. The Select storage page appears.
7. Select a datastore where the virtual machine’s files will be stored.

Note:   It is recommended to use a datastore with a Solid-State Drive (SSD) drive type for AMX deployments to achieve better performance.

8. Click Next. The Deployment Options page appears.
9. In the Deployment Options page, select the management, data, and tool port groups created in the prerequisites.
a. Select the Deployment Type from the list:
•   Do Not Use DHCP – Select this option if you want to use static IP addresses for the management, data, and tool ports.
•   Management, Data and Tool Port DHCP – Select this option if you want to use dynamic IP addresses for the management, data, and tool ports.
•   Management Port DHCP – Select this option if you want to use a dynamic IP address only for the management port.
•   Tool Port DHCP – Select this option if you want to use a dynamic IP address only for the tool port.
•   Data Port DHCP – Select this option if you want to use a dynamic IP address only for the data port.
b. Select Thin in the Disk Provisioning field.
c. Clear the Power on automatically checkbox, which is selected by default.

Note:  It is recommended to disable the Power on automatically option and review the entire configuration before powering on the virtual machine.

10. Click Next. The Additional Settings page appears.
11. Configure the following in the Additional Settings page.
a. In the System section, enter the hostname of the V Series node instance in the Hostname field and create a new admin password for the V Series node instance in the Administrative Login Password field.

Note:  This credential is used for V Series SSH access. The default username is gigamon. If the deployment fails, you can log in to the V Series through SSH or the console and check the logs for troubleshooting.

b. In the Network Connectivity section, complete the required fields based on the selected network configuration.

Note:  Make sure that you clear the Management Port DHCP checkbox if you want to use a static IP address for the management port. If you select the Management Port DHCP checkbox, a dynamic IP address is configured for the management port even if you selected the Do Not Use DHCP option in the Deployment Options page.

Note:  If you do not enter a value for the Management Port MTU size in bytes, the default value of 1500 bytes is used.

c. Enter the DNS server address in the Nameserver field to resolve the domain name of the tool destination URL.
d. In the Optional Parameters section, enter the monitoring domain name in the GroupName field and connection name in the SubGroupName field.

Note:  The monitoring domain and connection name correspond to the monitoring domain and connection created in the prerequisites section.

e. Enter the token created in the GigaVUE-FM UI in the JWT Token used for registration field.
f. Enter the GigaVUE-FM IP address and remote port in the RemoteIP and RemotePort fields, respectively.
g. In the Custom node properties field, enter one of the following:
•   app_mode=linux_apps
•   app_mode=gs_apps

Note:  Refer to the Fabric Component Registration for Deployments table to determine which application mode your deployment requires.

12. Click Next. The Ready to complete page appears.
13. Review all the entered information and then click Finish. When the operation completes, you have successfully deployed a GigaVUE V Series node.

Note:  Before powering on the VM, you can modify the CPU, memory, and disk, if required, to handle a higher traffic load.

Verification and Troubleshooting

During the initial bringup, the V Series node reboots multiple times. Wait approximately five minutes, and then check the status of the deployment. If the status is failed, check the logs to troubleshoot, as shown below.
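
If the deployment fails, you can log in to the node over SSH (default username gigamon, with the password set during deployment) and inspect the service. A minimal sketch; the service name comes from the restart step later in this topic, and the journalctl filter assumes the service logs to the system journal:

    # Check the V Series service state
    sudo service vseries-node status

    # Review recent service logs for registration errors (assumed log location)
    sudo journalctl -u vseries-node --since "10 minutes ago"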

OVF Package Files

| Form Factor | Supported Ports | File Name | Comments |
|---|---|---|---|
| Small (2vCPU, 4GB Memory, and 8GB Disk space) | Mgmt Port, Tool Port, and 8 Network Ports | vseries-node-file1.ovf | Use these files when deploying GigaVUE V Series Node via VMware vCenter. |
| Medium (4vCPU, 8GB Memory, and 8GB Disk space) | Mgmt Port, Data Port, and Tool Port | vseries-node-file2.ovf | |
| Large (8vCPU, 16GB Memory, and 8GB Disk space) | Mgmt Port, Tool Port, and 8 Network Ports | vseries-node-file3.ovf | |
| Small (2vCPU, 4GB Memory, and 8GB Disk space) | Mgmt Port, Data Port, and Tool Port | vseries-node-file4.ovf | Use these files when deploying GigaVUE V Series Node via VMware NSX-T Manager. |
| Medium (4vCPU, 8GB Memory, and 8GB Disk space) | Mgmt Port, Data Port, and Tool Port | vseries-node-file5.ovf | |
| Large (8vCPU, 16GB Memory, and 8GB Disk space) | Mgmt Port, Tool Port, and 2 Network Ports | vseries-node-file6.ovf | |
| Small (2vCPU, 4GB Memory, and 8GB Disk space) | | vseries-node-file7.ovf | Use these files when deploying GigaVUE V Series Node via VMware ESXi without vCenter. |
| Medium (4vCPU, 8GB Memory, and 8GB Disk space) | Mgmt Port, Tool Port, and 8 Network Ports | vseries-node-file8.ovf | |
| Large (8vCPU, 16GB Memory, and 8GB Disk space) | Mgmt Port, Data Port, and Tool Port | vseries-node-file9.ovf | |
| Larger (8vCPU, 16GB Memory, and 80GB Disk space) | Mgmt Port, Tool Port, and 8 Network Ports | vseries-node-file12.ovf | Use these files when deploying GigaVUE V Series Node via VMware vCenter and if you wish to configure the AMX application. |
| Larger (8vCPU, 16GB Memory, and 80GB Disk space) | Mgmt Port, Data Port, and Tool Port | vseries-node-file15.ovf | Use these files when deploying GigaVUE V Series Node via VMware ESXi without vCenter and if you wish to configure the AMX application. Note: This file supports form factors with a higher range of CPU, memory, and disk space. |
| minipc - Virtual Small Form Factor | Mgmt Port, Data Port, and Tool Port | vseries-node-file16.ovf | |

Assign Static IP Address for GigaVUE V Series

The static IP addresses are assigned to the GigaVUE V Series node in the following scenarios:

• When you have selected DHCP for the port groups during deployment and want to change to a static IP address after deployment.

• When you have assigned a static IP address to the port groups during deployment and want to update the assigned static IP address.

To assign a static IP address, perform the following steps:

  1. Navigate to the /etc/netplan/ directory.
  2. Create a new .yaml file. (sudo vi /etc/netplan/55-gigamon-netconfig.yaml)
  3. Update the file as shown in the following sample:
Example netplan config:

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens3:
      addresses:
        - 10.114.53.24/21
      dhcp4: no
      dhcp6: no
      accept-ra: false
      routes:
        - to: 10.114.48.1/32
          scope: link
        - to: default
          via: 10.114.48.1
    ens4:
      addresses:
        - 10.115.53.24/21
      dhcp4: no
      dhcp6: no
      accept-ra: false
      routes:
        - to: 10.115.48.1/32
          scope: link
        - to: default
          via: 10.115.48.1
  4. Save the file.
  5. Apply the configuration.

    $ sudo netplan apply
  6. Restart the GigaVUE V Series service.

    $ sudo service vseries-node restart

Note:  By default, the GigaVUE V Series gets assigned an IP address using DHCP.
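
After the service restarts, you can confirm that the static addresses and routes took effect. A quick check, assuming the interface names from the sample above:

    # Verify the configured addresses and routes
    ip addr show ens3
    ip route show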

Register UCT-V Controller

IMPORTANT: You must enable basic authentication to launch the GigaVUE fabric components for version 6.9 and lower. For instructions on enabling basic authentication, refer to Authentication Type.

Deploy the UCT-V Controller through VMware vCenter on the host server.

To register UCT-V Controller after launching a Virtual Machine using a configuration file, perform the following steps:

  1. Log in to the UCT-V Controller.
  2. Create a local configuration file (/etc/gigamon-cloud.conf) and enter the following user data:
    Registration:
        groupName: <Monitoring Domain Name>
        subGroupName: <Connection Name>
        token: <Token>
        remoteIP: <IP address of the GigaVUE-FM>
        sourceIP: <IP address of UCT-V Controller> (Optional Field)
        remotePort: 443
  3. When using a Static IP configuration or multiple interfaces with Static IP configuration, create a new .yaml file in the /etc/netplan/ directory.
  4. Update the file and save it.
  5. Restart the UCT-V Controller service.
    $ sudo service uctv-cntlr restart
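
After the restart, you can confirm that the controller service is running; a quick check using the service name from the step above:

    # Confirm the UCT-V Controller service is running
    sudo service uctv-cntlr status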

To assign a static IP address, perform the following steps:

1.   Navigate to the /etc/netplan/ directory.
2. Create a new .yaml file.

Note:  Do not use the default 50-cloud-init.yaml file.

3. Update the file as shown in the following sample:
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    <interface>:                # Replace with your actual interface name (e.g., eth0)
      dhcp4: no
      dhcp6: no
      addresses:
        - <IPV4/24>             # e.g., 192.168.1.10/24
        - <IPV6/64>             # e.g., 2001:db8:abcd:0012::1/64
      nameservers:
        addresses:
          - <DNS_IPV4>          # e.g., 8.8.8.8
          - <DNS_IPV6>          # e.g., 2001:4860:4860::8888
      routes:
        - to: 0.0.0.0/0
          via: <IPV4_GW>        # e.g., 192.168.1.1
        - to: ::/0
          via: <IPV6_GW>        # e.g., 2001:db8:abcd:0012::fffe
                        
Example netplan config:

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens3:
      addresses:
        - 10.114.53.24/21
      dhcp4: no
      dhcp6: no
      accept-ra: false
      routes:
        - to: 10.114.48.1/32
          scope: link
        - to: default
          via: 10.114.48.1
4. Save the file.
5. Apply the configuration.
$ sudo netplan apply

Register UCT-V

To register UCT-V after launching a Virtual Machine using a configuration file, perform the following steps:

  1. Install the UCT-V on the Linux or Windows platform. For detailed instructions, refer to Linux UCT-V Installation and Windows UCT-V Installation.

  2. Log in to the UCT-V.
  3. Create a local configuration file and enter the following user data:
    • /etc/gigamon-cloud.conf is the local configuration file on the Linux platform.
    • C:\ProgramData\uctv\gigamon-cloud.conf is the local configuration file on the Windows platform.
    • When creating the C:\ProgramData\uctv\gigamon-cloud.conf file, ensure that the file name extension is .conf. To view file name extensions in Windows, perform the following steps:
      1. Go to File Explorer and open the File Location.
      2. On the top navigation bar, select View.
      3. In the View tab, enable the File name extensions check box.
    Registration:
        groupName: <Monitoring Domain Name>
        subGroupName: <Connection Name>
        token: <Token>
        remoteIP: <IP address of the UCT-V Controller 1>, <IP address of the UCT-V Controller 2>
  4. Restart the UCT-V service.

    Note:  Before restarting the UCT-V service, update the /etc/uctv/uctv.conf file with the network interface used to tap traffic and the outgoing interface for tapped traffic.

    • Linux platform:
      $ sudo service uctv restart
    • Windows platform: Restart from the Task Manager.

Verification and Troubleshooting

After applying the configuration, the UCT-V should register with GigaVUE-FM.

After successful registration, the UCT-V sends heartbeat messages to the UCT-V Controller every 30 seconds.

If one heartbeat is missing, the status changes to Unhealthy.

If five consecutive heartbeats fail, the UCT-V Controller attempts to reach the UCT-V.

If that fails, the UCT-V Controller unregisters the UCT-V and removes it from GigaVUE-FM.
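
If the UCT-V does not appear as registered, a couple of quick checks on the Linux platform; the service name comes from the restart step above, and the journalctl filter assumes the agent logs to the system journal (on Windows, check the uctv service from the Services console instead):

    # Check the UCT-V service state
    sudo service uctv status

    # Review recent agent logs for registration errors (assumed log location)
    sudo journalctl -u uctv --since "10 minutes ago"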

Post Configuration Steps for Exporting Metadata for Mobile Networks using AMX

If you are deploying the GigaVUE V Series Node to configure the AMX application to export enriched metadata for mobile networks, perform the following steps:

  1. Select Edit on the VM page in VMware ESXi. The Edit Settings page appears.
  2. In the Virtual Hardware tab, edit the following fields:
    • CPU: 40
    • Memory: 128GB
    • Hard disk 1: 200GB
    • (Optional) To achieve higher throughput, change the Adapter type for the Network Adapter to SR-IOV passthrough.

    When exporting GigaVUE enriched Metadata for Mobile Networks using the AMX application, you can also configure the GigaVUE V Series Node used to deploy the AMX application on GFM-HW2-FM001-HW. For instructions, refer to the GigaVUE-FM Hardware Appliances Guide.

    For information about how to configure the AMX application, refer to Application Metadata Exporter.

Edit the Ring Buffer Settings

For a high transactional ingress environment, perform the following steps to edit the ring buffer settings:

Note:  Perform these steps every time after rebooting the GigaVUE V Series Nodes.

  1. Log in to the GigaVUE V Series Node.
  2. Use the following command to view the maximum pre-set hardware values and your current hardware settings:
    sudo ethtool -g <interface name>
  3. Verify that the ingress interface ring buffers are set to the maximum supported values, as shown in the example after these steps.
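
For example, to read the preset maximums and raise the RX ring to that value; the interface name and the 4096 value are illustrative, so use the Pre-set maximums reported by the -g output on your node:

    # View preset maximums and current ring buffer settings
    sudo ethtool -g ens192

    # Set the RX ring buffer to the reported maximum (4096 is an example value)
    sudo ethtool -G ens192 rx 4096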

The GigaVUE V Series Node deployed in the VMware ESXi host appears in the Third-party Orchestration Monitoring Domain page of GigaVUE‑FM.

Procedure to deploy V Series Node in VMware ESXi with SR-IOV Adapter

Perform the following steps when you deploy a V Series Node in a VMware ESXi host with an SR-IOV Adapter:

  1. On the VM page in the VMware ESXi host environment, select Edit.

    The Edit Settings page appears.
  2. In the Virtual Hardware tab, edit the following fields:
    • CPU: 8
    • Memory: 16GB
    • Hard disk 1: 80GB
    • Network adapter 1: VM Network (Connected)
    • Network adapter 2: Port Group (Connected)
    • Network adapter 3: Port Group (Connected)
    • Video card: 4MB

    Note:  Make sure to select Reserve all guest memory for VM Memory.

    Deploy the V Series Node with the OVF15 template (Large Form Factor) with Management, Tool, and Data Ports. The Port-Group mappings and Netplan configurations are as follows:

    1. Port-Group Mapping:

      • ens160: Mapped with VMNetwork

      • ens192 and ens224: Mapped with the Port Groups that you create

      Sample Netplan Configs:

      • ens160 with 192.168.10.X
      • ens192 with 192.168.20.X
      Example netplan config:

      gigamon@vseries:/etc/netplan$ more 60-netcfg.yaml
      network:
        version: 2
        renderer: NetworkManager
        ethernets:
          ens160:
            dhcp4: no
            dhcp6: no
            addresses:
              - 10.115.203.139/21
              - 2001:db8:1::10/64
            routes:
              - to: default
                via: 10.115.200.1
              - to: default
                via: 2001:db8:1::1
          ens192:
            dhcp4: no
            dhcp6: yes
            addresses:
              - 192.150.10.25/24
            routes:
              - to: 192.150.10.0/24
                scope: link

          ens224:
            dhcp4: no
            dhcp6: yes
            addresses:
              - 10.210.16.210/20
            routes:
              - to: 10.210.16.0/24
                scope: link
  3. Power off the VM and remove Network Adapter 2 and Network Adapter 3. Then, without saving, add two new Network Adapters and change the Adapter Type to SR-IOV passthrough.

    Once added, the user-created Port-Group mappings for ens192 and ens224 get swapped.

  4. In Edit Settings, swap the adapters to correct the configuration mismatch with Netplan configs.
  5. Save the configuration and deploy the VM.

    Now, ens192 and ens224 are mapped with the correct Port Group Mappings.

  6. Manually configure /etc/gigamon-cloud.conf with the registration data to register the V Series Node with GigaVUE‑FM. You can verify the file contents with the following command (a filled-in sample follows these steps):
    gigamon@vsn-5gc-new:~$ cat /etc/gigamon-cloud.conf
  7. Alternatively, provide the same user data in the Additional Settings page during deployment:
    • GroupName: <Monitoring domain name>
    • SubGroupName: <Connection name>
    • token: <Token>
    • remoteIP: <IP address of the GigaVUE-FM>
    • remotePort: 443
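
A filled-in sketch of what the cat command might return; every value here is an illustrative placeholder for your own monitoring domain, connection, token, and GigaVUE-FM address:

    Registration:
        groupName: VMware-ESXi-MD
        subGroupName: ESXi-Connection-1
        token: <Token>
        remoteIP: 10.115.200.50
        remotePort: 443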