Configuration of 5G CUPS using Ansible
The Gigamon Ansible Module consists of various playbooks that convert high-level user intent in YAML format into JSON format.
GigaVUE-FM uses the JSON input and translates it into the individual components, such as GigaSMART GS groups, GSOPs, FlowMaps, etc., that are configured on the physical or virtual devices.
The orchestration of the CUPS configuration by GigaVUE-FM is referred to as the CUPS solution.
The Gigamon Ansible Module exposes playbooks that allow the configuration and maintenance of the CUPS solution, and the GigaVUE-FM GUI allows you to visualize, monitor, and troubleshoot the CUPS solution.
Refer to the following sections for configuring the 5G CUPS solution:
• System Requirements
• Installation of Gigamon Ansible Module
• Configuration of Pythonpath
• Rules and Notes
• 5G CUPS Solution—A Roadmap
• Deployment of CUPS Solution
• Features Supported for CUPS Solution in Ansible
System Requirements
Ensure that the following environment is available before installing gigamon-ansible:
- Python version: 2.7.15 or greater
- Operating System: Linux
- Ansible version: 2.9.4 or greater
- Python packages: You can install the following packages on a need basis. It is recommended to install all of the mentioned Python packages.
  - requests - Install this using pip install requests
  - ruamel.yaml - Install this using pip install ruamel.yaml
  - jsonschema - Install this using pip install jsonschema
  - netaddr - Install this using pip install netaddr
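Before installing, the documented minimums can be checked against the running interpreter. The following is an illustrative sketch only (the `meets_minimum` helper is not part of the Gigamon module):

```python
import sys

def meets_minimum(actual, minimum):
    """True when a version tuple satisfies the documented minimum version."""
    return tuple(actual) >= tuple(minimum)

# The documented minimum Python version for gigamon-ansible is 2.7.15.
print(meets_minimum(sys.version_info[:3], (2, 7, 15)))
```

The same comparison applies to the Ansible minimum (2.9.4) by checking `ansible --version` output against `(2, 9, 4)`.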
Installation of Gigamon Ansible Module
Gigamon-Ansible is available in the following packages for the different Operating System (OS) as given in the table. The package is extracted under the path /usr/local/share/gigamon.
Package | Operating System | Command to install the Package |
RPM package | CentOS | sudo yum install <packageName>.rpm |
Deb package | Ubuntu | sudo apt install <packageName>.deb |
Configuration of Pythonpath
To set the Pythonpath, add the following to ~/.bashrc and source it:
export INSTALL_DIR=/usr/local/share/gigamon
export PYTHONPATH=$PYTHONPATH:$INSTALL_DIR
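The effect of the PYTHONPATH export is simply that the install directory becomes visible on Python's module search path. A small sketch of the equivalent check from inside a script (illustrative only):

```python
import sys

INSTALL_DIR = "/usr/local/share/gigamon"

# Entries listed in the PYTHONPATH environment variable appear on sys.path;
# appending here has the same effect for the current process only.
if INSTALL_DIR not in sys.path:
    sys.path.append(INSTALL_DIR)

print(INSTALL_DIR in sys.path)  # True
```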
Rules and Notes
You must ensure the following rules and notes before you deploy the CUPS solution:
- GigaVUE-FM is reachable from the virtual machine on which the playbooks are executed.
- Uninstall the earlier version of CUPS, upgrade to the latest version, and then create the CUPS solution.
- The Gigamon device is installed with the required licenses.
- The required RBAC permissions are available.
- Configurations that are not handled by the CUPS playbook, such as cluster formation and stack-link configuration, are already present.
5G CUPS Solution—A Roadmap
To configure a 5G CUPS solution, perform the following steps:
S.No | Steps | Refer to |
1. | Creating Inventory Directory | Creating Inventory Directory |
2. | Creating fmInfo.yml | Creating fmInfo.yml |
3. | Creating ansible_inputs.json | Creating ansible_inputs.json |
4. | Creating CUPS inventory file | Creating CUPS inventory file |
5. | Creating host_vars directory | Creating host_vars directory |
6. | Creating host_vars files | Creating host_vars files |

Create an Inventory Directory to store all the CUPS related configuration files. This can be done using mkdir <dirName>.
username@fmreg26:~$ mkdir cupsSolution
username@fmreg26:~$ ls -l
drwxr-xr-x 2 ddaniel fmtaf 4096 May 11 11:59 cupsSolution

Create the fmInfo.yml file inside the Inventory Directory. It contains information such as the IP address, username, and password of each GigaVUE-FM.
File name: fmInfo.yml
fmInfo:
  192.168.36.2:
    password: admin123A!!
    username: admin
  192.168.36.3:
    password: admin123A!!
    username: xxxxx
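The structure above maps each GigaVUE-FM IP address to its credentials. A minimal plain-Python mirror of that lookup, for illustration only (`credentials_for` is a hypothetical helper, not part of the module):

```python
# Mirrors the fmInfo.yml structure: GigaVUE-FM IP -> credentials.
fm_info = {
    "192.168.36.2": {"username": "admin", "password": "admin123A!!"},
    "192.168.36.3": {"username": "xxxxx", "password": "admin123A!!"},
}

def credentials_for(fm_ip):
    """Return (username, password) for a GigaVUE-FM IP, or None if unknown."""
    entry = fm_info.get(fm_ip)
    return (entry["username"], entry["password"]) if entry else None

print(credentials_for("192.168.36.2"))  # ('admin', 'admin123A!!')
```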

Create the 'ansible_inputs.json' file inside the Inventory Directory. It contains the following information:
- fm_credential_file - Contains details of the GigaVUE-FM that was created in Step 2.
- yaml_payload_path - Created automatically while running the CUPS playbook. This file stores the payload sent to GigaVUE-FM.
- deployment_report_path - Created automatically while running the CUPS playbook. This file stores the report of the deployment.
- golden_payload_path - When the option is enabled, the payload of a successful CUPS solution deployment is saved in this file.
gigamon@fmreg26:~/cupsSolution$ ls -l
-rw-r--r-- 1 gigamon fmtaf 355 May 11 12:24 ansible_inputs.json
-rw-r--r-- 1 gigamon fmtaf 172 May 11 12:10 fmInfo.yml
File name: ansible_inputs.json
{
  "fm_credential_file": "/home/gigamon/automationInventoryDirectory/fmInfo.yml",
  "yaml_payload_path": "/home/gigamon/automationInventoryDirectory/cups_payload_tc1.yaml",
  "deployment_report_path": "/home/gigamon/automationInventoryDirectory/deploymentReport_tc1.yaml",
  "golden_payload_path": "/home/gigamon/automationInventoryDirectory/golden_payload.yaml"
}
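A quick way to catch a malformed ansible_inputs.json before running the playbook is to check for the expected keys. This sketch assumes, for illustration, that the first three keys are always needed while golden_payload_path is only used when golden-payload saving is enabled:

```python
import json

# Assumption for illustration: golden_payload_path is treated as optional here.
REQUIRED_KEYS = {"fm_credential_file", "yaml_payload_path", "deployment_report_path"}

def missing_keys(raw_json):
    """Return the required ansible_inputs.json keys absent from a JSON payload."""
    return sorted(REQUIRED_KEYS - set(json.loads(raw_json)))

sample = '{"fm_credential_file": "/tmp/fmInfo.yml", "yaml_payload_path": "/tmp/p.yaml"}'
print(missing_keys(sample))  # ['deployment_report_path']
```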

Create the cups_inventory file inside Inventory Directory.
gigamon@fmreg26:~/cupsSolution$ ls -l
total 12
-rw-r--r-- 1 gigamon fmtaf 355 May 11 12:24 ansible_inputs.json
-rw-r--r-- 1 gigamon fmtaf 396 May 11 14:22 cups_inventory
-rw-r--r-- 1 gigamon fmtaf 172 May 11 12:10 fmInfo.yml
The file contains the details of the following groups. Provide the inputs to the groups as shown in the following table:
S.No | Group—Input |
Note: You can provide the input, or leave the field empty if you do not want the playbook to configure a given group. |
1. | Ports—Name of the cluster or standalone device IP that contains the ports that need to be configured. |
2. | IPInterfaceSolution—Name of the cluster or standalone device IP on which the IP Interface solution needs to be configured. |
3. | Tool Groups—Name of the cluster or standalone device IP on which the Tool Group needs to be configured. |
4. | GigaStreams—Name of the cluster or standalone device IP on which the GigaStreams need to be configured. |
5. | GTPWhitelist—Name of the cluster or standalone device IP on which the GTP whitelist database needs to be configured. |
6. | CPN—Name of the CPN or CPNs. |
7. | UPN—Name of the UPN or UPNs. |
8. | Sites—Name of the site or sites, and the names of the CPNs/UPNs participating in each site. |
9. | Tags—Name of the file containing information on the solution-level RBAC tags. |
10. | CUPS—Name of the CUPS solution, along with its associated policy, tags, and sites. |
File name: cups_inventory (Single GigaVUE-FM instance)
[IPInterfaceSolution]
cluster-two
cluster-one
[ToolGroups]
cluster-two
cluster-one
[Gigastreams]
cluster-two
cluster-one
[GTPWhitelist]
cluster-two
cluster-one
[Ports]
cluster-two
cluster-one
[Policies]
5g_policy_1
[CPN]
cpnUkLTE
[UPN]
upnDallas
[Sites]
UK cpn_list='["cpnUkLTE"]' upn_list='[]'
Dallas cpn_list='[]' upn_list='["upnDallas"]'
[Tags]
tagByLocation
[CUPS]
cupsSolution1 5g_policy=5g_policy_1 tags=tagByLocation sites='["UK", "Dallas"]'
File name: cups_inventory (Multiple GigaVUE-FM instances)
[IPInterfaceSolution]
cluster-two fm_ip=192.168.36.2
cluster-one fm_ip=192.168.36.3
[ToolGroups]
cluster-two fm_ip=192.168.36.2
cluster-one fm_ip=192.168.36.3
[Gigastreams]
cluster-two fm_ip=192.168.36.2
cluster-one fm_ip=192.168.36.3
[GTPWhitelist]
cluster-two fm_ip=192.168.36.2
cluster-one fm_ip=192.168.36.3
[Ports]
cluster-two fm_ip=192.168.36.2
cluster-one fm_ip=192.168.36.3
[Policies]
5g_policy_1
[CPN]
cpnUkLTE
[UPN]
upnDallas
[Sites]
UK cpn_list='["cpnUkLTE"]' upn_list='[]'
Dallas cpn_list='[]' upn_list='["upnDallas"]'
[Tags]
tagByLocationUK
tagByLocationDallas
[CUPS]
cupsSolution1 5g_policy=5g_policy_1 tags=tagByLocationUK sites='["UK"]' fm_ip=192.168.36.2
cupsSolution2 5g_policy=5g_policy_1 tags=tagByLocationDallas sites='["Dallas"]' fm_ip=192.168.36.3
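The inventory format above follows the standard Ansible INI convention: bracketed group headers followed by host lines whose first token is the element name, with any remaining tokens (such as fm_ip or cpn_list) acting as host variables. A toy parser, for illustration only, that extracts the element names per group:

```python
def parse_inventory(text):
    """Minimal sketch: collect [Group] headers and the first token of each host line."""
    groups, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            groups[current] = []
        elif current is not None:
            # The first whitespace-separated token is the element name;
            # the rest are host variables (ignored in this sketch).
            groups[current].append(line.split()[0])
    return groups

sample = '[CPN]\ncpnUkLTE\n[UPN]\nupnDallas\n[Sites]\nUK cpn_list=\'["cpnUkLTE"]\' upn_list=\'[]\''
print(parse_inventory(sample))  # {'CPN': ['cpnUkLTE'], 'UPN': ['upnDallas'], 'Sites': ['UK']}
```

Real Ansible inventories support more syntax (children groups, ranges); this only illustrates how the element names relate to host_vars file names.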

Create host_vars directory inside the Inventory Directory.
gigamon@fmreg26:~/cupsSolution$ ls -l
-rw-r--r-- 1 gigamon fmtaf 355 May 11 12:24 ansible_inputs.json
-rw-r--r-- 1 gigamon fmtaf 396 May 11 14:22 cups_inventory
-rw-r--r-- 1 gigamon fmtaf 172 May 11 12:10 fmInfo.yml
drwxr-xr-x 2 gigamon fmtaf 4096 May 11 14:48 host_vars

Every unique element under each group in the cups_inventory file needs to have a file, with the same name as the element, inside the host_vars directory. This file holds the properties of the groups that the element belongs to.
For Example
• | The element named 'cpnUkLTE' under the group 'CPN' has a file named 'cpnUkLTE' inside the host_vars directory. This file has the properties of the CPN. |
• | The element named 'cluster-one' has a file named 'cluster-one' inside the host_vars directory. This file has the properties of all the groups, such as Ports, IPInterface, and ToolGroups, of which the element is a member. |
Below are the templates of various host_vars files.
Prerequisite
---
validate_certs: false
Ports:
  - port:
      - 1/1/x1
      - 1/1/x2
    adminStatus: enable
    type: network
GTPWhitelist:
  - alias: gtp1
    imsi: 310260564627811,310260564627812
    state: present
  - alias: gtp2
    inputFile: './whitelistKeys/TenIMSIs_Valid.txt'
    state: present
Gigastreams:
  - alias: toolGS_C11
    ports:
      - 4/1/x1
      - 4/1/x2
    type: hybrid
    state: present
  - alias: toolGS_C12
    ports:
      - 4/1/x3..x4
    type: hybrid
    state: present
IPInterfaceSolution:
  tags:
    - tagKey: Location
      tagValues:
        - Chennai
        - Pune
  IPInterfaces:
    - alias: dev1_IpIntCpn_1
      applications:
        - CPN
      gateway: 10.1.1.1
      interfaces:
        - 1/1/x1
      ipAddress: 10.2.3.4
      ipMask: 255.255.255.0
      clusterName: cluster-one
      mtu: 1500
    - alias: dev2_IpIntUpn
      applications:
        - UPN
      gateway: 10.1.1.1
      interfaces:
        - 1/1/x2
      ipAddress: 10.2.3.100
      ipMask: 255.255.255.0
      mtu: 1500
      clusterName: cluster-one
  state: present
ToolGroups:
  - alias: pgGrp_C11
    ports:
      - 2/1/x1
    smartLb: false
    type: tool
    state: present
  - alias: pgGrp_C12
    ports:
      - 2/1/x2
    smartLb: false
    type: tool
    state: present
Site
---
Site:
  # Name of the site
  alias: UK
  # 'skipDeployment' is 'false' for the sites that are intended to be deployed in the incremental deployment process
  skipDeployment: true
  # Tag values assigned to the site
  tags:
    - tagKey: Location
      tagValues:
        - UK
  # All the tools used in the site
  toolBindings:
    - alias: GeoProbe
      toolResourceType: GIGASTREAM
      toolClusterId: cluster-one
      toolResourceId: toolGS_C11
    - alias: EEA
      toolResourceType: PORTGROUP
      toolClusterId: cluster-one
      toolResourceId: pgGrp_C11
  # Network ports. Fill as needed in the format 'clusterId:portId'
  networkPorts: []
  # Policies under 'siteOverrideOfPolicyArrangements' override global policies
  siteOverrideOfPolicyArrangements:
    forLTE:
      _file: /home/ddaniel/automationInventoryDirectory/host_vars/lte_policy_1.yml
    for5G: {}
  # Leave upNodes and cpNodes empty
  upNodes: []
  cpNodes: []
cpNode
---
ProcessingNode:
  # Name of the control processing node
  alias: cpnUkLTE
  # Tags assigned to the processing node
  tags:
    - tagKey: Dept
      tagValues:
        - IT
        - Engg
  # Type of control node. Possible values for nodeType are: 'PCPN_LTE', 'PUPN', 'PCPN_5G'
  nodeType: PCPN_5G
  # Location of the GigaSMART engine port assigned to the processing node
  location:
    clusterId: cluster-one
    enginePorts:
      - 2/3/e1
  # IP interface that needs to be used by the processing node
  ipInterfaceAlias: dev3_IpIntUpn_1
  gtpControlSample: false
  gtpRandomSampling:
    enabled: false
    # min: 12, max: 48, multiples of 12 hrs
    interval: 12
  numberOfLteSessions: 100000
  numberOf5gSessions: 100000
  # GS Group HTTP2 port list
  app5gHTTP2Ports:
    - 8080
    - 9000
  # Network ports. Fill as needed in the format 'clusterId:portId'
  nodeOverrideNetworkPorts: []
  trafficSources:
    - networkFunctionName: cpn_pod1_SGW-C
      networkFunctionType: SGW-C
      tags:
        - tagKey: Dept
          tagValues:
            - IT
            - Engg
      networkFunctionInterfaces:
        - tunnelIdentifiers:
            - interfaceTunnelIdentifierType: IPADDRESS
              value: 198.51.100.42
              netMask: 255.255.255.0
            - interfaceTunnelIdentifierType: PORT
              value: 8805
          interfaceType: Sxa
      # Network ports. Fill as needed in the format 'clusterId:portId'
      sourceOverrideNetworkPorts:
        - 192.168.65.8:8/1/x3
  # TCP load-balancing properties, applicable only for nodeType PCPN_5G
  appTcp:
    # Possible values for application are: 'broadcast', 'enhanced', 'drop'
    application: broadcast
    # Possible values for tcpControl are: 'broadcast', 'enhanced', 'drop'
    tcpControl: broadcast
    # To enable load balancing, set the value to true
    loadBalance: false
upNode
---
ProcessingNode:
  # Name of the user processing node
  alias: upnDallas
  # Tags assigned to the processing node
  tags:
    - tagKey: Dept
      tagValues:
        - IT
        - Engg
  # Type of user node. Possible values for nodeType are: 'PCPN_LTE', 'PUPN', 'PCPN_5G'
  nodeType: PUPN
  # To make the user node standalone
  standAloneMode: true
  # Location of the GigaSMART engine ports assigned to the processing node
  location:
    clusterId: cluster-two
    enginePorts:
      - 6/3/e1
      - 6/3/e2
  # IP interface that needs to be used by the processing node
  ipInterfaceAlias: dev3_IpIntUpn_1
  gtpControlSample: false
  gtpRandomSampling:
    enabled: false
    # min: 12, max: 48, multiples of 12 hrs
    interval: 12
  # Network ports. Fill as needed in the format 'clusterId:portId'
  nodeOverrideNetworkPorts: []
  trafficSources:
    - networkFunctionName: upn_pod1_SGW-U
      networkFunctionType: SGW-U
      tags:
        - tagKey: Dept
          tagValues:
            - IT
            - Engg
      networkFunctionInterfaces:
        - tunnelIdentifiers:
            - interfaceTunnelIdentifierType: IPADDRESS
              mask: 255.255.255.0
              address: 198.58.100.45
            - interfaceTunnelIdentifierType: PORT
              value: 8805
          interfaceType: Sxa
      # Network ports. Fill as needed in the format 'clusterId:portId'
      sourceOverrideNetworkPorts:
        - 192.168.65.9:9/1/x4
        - 192.168.65.9:9/1/x5
    - networkFunctionName: upn_pod2_UPF
      networkFunctionType: UPF
      tags:
        - tagKey: Dept
          tagValues:
            - IT
            - Engg
      networkFunctionInterfaces:
        - tunnelIdentifiers:
            - interfaceTunnelIdentifierValue:
                mask: 255.255.255.0
                address: 198.58.100.46
              interfaceTunnelIdentifierType: IPADDRESS
            - interfaceTunnelIdentifierValue:
                value: '2152'
              interfaceTunnelIdentifierType: PORT
          interfaceType: N11
      sourceOverrideNetworkPorts:
        - 192.168.65.9:9/1/x6
5GPolicy
---
5GPolicy:
  # gtpFlowTimeout is multiplied by 10 minutes to arrive at a timeout interval (gtpFlowTimeout: 48 = 8 hours). Set this interval to match the customer network's GTP session timeout for optimal results
  gtpFlowTimeout: 48
  # gtpPersistence -- save state tables during reboot or box failure. Remove if not using persistence
  gtpPersistence:
    # interval in minutes to save the state table (minimum value is 10)
    interval: 10
    restartAgeTime: 30
    fileAgeTimeout: 30
  sampling:
    flowMaps:
      - alias: samplingMap2
        rules:
          - interface:
            dnn: internet.miracle
            pei: '*'
            supi: 46*
            gpsi:
            nas_5qi:
            tac:
            nci:
            plmndId:
            nsiid:
            # controlPlanePercentage: specifies the percentage of sampling at the CPN (set to 100 for no CPN sampling)
            controlPlanePercentage: 100
            # userPlanePercentage: specifies the percentage of sampling at the UPN (applied to all UPNs -- use a site override to set different rates per site)
            userPlanePercentage: 50
        tool: EEA
  whitelisting:
    whiteListAlias: gtp1
    flowMaps:
      - alias: whitelistMap2
        rules:
          - dnn: internet.miracle
            interface:
            supi:
            ran:
        tool: EEA
  loadBalancing:
    # one of { flow5g }
    appType: flow5g
    # metric is the load balancing method -- one of { leastBw, leastPktRate, leastConn, leastTotalTraffic, roundRobin, wtLeastBw, wtLeastPktRate, wtLeastConn, wtLeastTotalTraffic, wtRoundRobin, flow5gKeyHash }
    metric: flow5gKeyHash
    # hashingKey -- ignored if metric is not of type 'flow5gKeyHash' -- one of { supi | pei | gpsi }
    hashingKey: supi
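The gtpFlowTimeout arithmetic in the template comment is worth making explicit: each unit represents 10 minutes, so 48 units is 480 minutes, i.e. 8 hours. A one-line helper (illustrative, not part of the module):

```python
def gtp_flow_timeout_hours(units):
    """Each gtpFlowTimeout unit represents 10 minutes; return the interval in hours."""
    return units * 10 / 60.0

print(gtp_flow_timeout_hours(48))  # 8.0
```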
LTEPolicy
---
LTEPolicy:
  # gtpFlowTimeout is multiplied by 10 minutes to arrive at a timeout interval (gtpFlowTimeout: 48 = 8 hours). Set this interval to match the customer network's GTP session timeout for optimal results
  gtpFlowTimeout: 48
  # gtpPersistence -- save state tables during reboot or box failure. Remove if not using persistence
  gtpPersistence:
    # interval in minutes to save the state table (minimum value is 10)
    interval: 10
    restartAgeTime: 30
    fileAgeTimeout: 30
  sampling:
    flowMaps:
      - alias: samplingMap1
        rules:
          - interface:
            version: any
            apn: internet.miracle
            imei: 123*
            imsi:
            msisdn:
            qci:
            # controlPlanePercentage: specifies the percentage of sampling at the CPN (set to 100 for no CPN sampling)
            controlPlanePercentage: 100
            # userPlanePercentage: specifies the percentage of sampling at the UPN (applied to all UPNs -- use a site override to set different rates per site)
            userPlanePercentage: 50
        tool: GeoProbe
  whitelisting:
    whiteListAlias: gtp1
    flowMaps:
      - alias: whitelistMap1
        rules:
          - version: v1
            interface:
            apn:
        tool: GeoProbe
  loadBalancing:
    # one of { gtp | aft | tunnel }
    appType: gtp
    # metric is the load balancing method -- one of { leastBw, leastPktRate, leastConn, leastTotalTraffic, roundRobin, wtLeastBw, wtLeastPktRate, wtLeastConn, wtLeastTotalTraffic, wtRoundRobin, gtpKeyHash }
    metric: gtpKeyHash
    # hashingKey -- ignored if metric is not of type 'gtpKeyHash' -- one of { imsi | imei | msisdn }
    hashingKey: imsi
Tags
---
Tags:
  - tagKey: Location
    tagValues:
      - San Francisco
      - Dallas
Deployment of CUPS Solution
To deploy the CUPS solution, follow these steps:

For a single GigaVUE-FM instance deployment, you must set an additional environment variable as follows.
export ANSIBLE_FM_IP=192.168.36.2
The login details for this GigaVUE-FM IP are looked up in the fmInfo.yml file.

- Execute the playbook and deploy the CUPS solution using the following command:
/usr/bin/ansible-playbook -e '@~/cupsSolution/ansible_inputs.json' -i ~/cupsSolution/cups_inventory /usr/local/share/gigamon-ansible/playbooks/cups/deploy_cups.yml
- If the fmInfo file is encrypted, use the following command to execute and deploy the CUPS solution:
/usr/bin/ansible-playbook -e '@~/cupsSolution/ansible_inputs.json' --ask-vault-pass -i ~/cupsSolution/cups_inventory /usr/local/share/gigamon-ansible/playbooks/cups/deploy_cups.yml
Note: The multiple YML files created inside host_vars are concatenated, converted into JSON format, and sent to GigaVUE-FM.
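The concatenate-and-convert step in the note above can be illustrated with a toy sketch. `merge_host_vars` is hypothetical and not the module's actual logic; it only shows the idea of combining per-file documents into one JSON payload:

```python
import json

def merge_host_vars(*docs):
    """Illustrative merge: combine per-file dicts into one payload, then JSON-encode.
    (The real module's merge rules may differ; later keys win here.)"""
    merged = {}
    for doc in docs:
        merged.update(doc)
    return json.dumps(merged, sort_keys=True)

# Two tiny stand-ins for parsed host_vars YAML documents:
ports = {"Ports": [{"port": ["1/1/x1"], "adminStatus": "enable", "type": "network"}]}
site = {"Site": {"alias": "UK"}}
payload = merge_host_vars(ports, site)
print(payload)
```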
Features Supported for CUPS solution in Ansible
Ansible supports the following features for CUPS, in addition to creating, updating, or deleting a CUPS solution across single or multiple GigaVUE-FM instances:
- Deployment Report
- Check Mode (without deploying the solution to GigaVUE-FM, gives you the difference between the current state of GigaVUE-FM and the proposed solution)
- Saving Golden Payload (last successful payload)
- Applying Golden Payload
- Using the gigamon_cups module in your own playbook by sending the complete YAML payload.

Every time a CUPS solution is deployed, updated, or deleted, a deployment report is generated in the path declared in the ansible_inputs.json file.
The deployment report provides the following information:
- timestamp – date and time when the report was generated.
- deploymentRequest – values can be NEW, EDIT, or DELETE.
- deploymentResponse – the GigaVUE-FM response for the payload sent. The response contains multiple statuses, such as created, updated, deleted, skipped, or failed.
- deploymentStatus – the Ansible response, with the changed, failed, or skipped flags and the message attribute.
- deploymentPayloadDiff – generated for updates made to the solution (i.e., EDIT in deploymentRequest). It contains the difference between the last payload in GigaVUE-FM and the currently deployed payload.
Sample Deployment Report in YAML
A sample deployment report for solution edit scenario is as follows:
deploymentPayloadDiff:
  updated:
    - path: //trafficPolicies/LTE/whitelisting/flowMaps/wlMapGeoProbe/rules/rule_1/apn
      old_value: apn.vodafone.com
      new_value: apn.airtel.com
    - path: //trafficPolicies/LTE/sampling/flowMaps/samplingMapToeEEA/rules/rule_1/userPlanePercentage
      old_value: 50
      new_value: 25
  removed:
    - path: //sites/SanFrancisco/cpNodes/CPN_01/trafficSources/cpn1_pod2/networkElementFunctionInterfaces/S11/tunnelIdentifiers/VLAN_16777214
      value:
        interfaceTunnelIdentifierValue:
          value: 16777214
        interfaceTunnelIdentifierType: VLAN
  created:
    - path: //sites/SanFrancisco/cpNodes/CPN_01/trafficSources/cpn1_pod2/networkElementFunctionInterfaces/S11/tunnelIdentifiers/VLAN_16777215
      value:
        interfaceTunnelIdentifierValue:
          value: 16777215
        interfaceTunnelIdentifierType: VLAN
deploymentRequest: EDIT
deploymentStatus:
  msg: CUPS solution cupsMixed1 updated
  failed: false
  skipped: false
  changed: true
deploymentResponse:
  deleted: []
  failed: []
  updated:
    - status: SUCCESS
      alias: UPN_01
      clusterId: 10.115.54.83
      objectType: UPN
    - status: SUCCESS
      alias: CPN_01
      clusterId: 10.115.54.83
      objectType: CPN
  skipped: []
  created: []
timeStamp: 09-Mar-2020::12:04:1583780672
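The created/removed/updated classification in the report can be sketched as a simple dictionary comparison. This is an illustrative toy, not the module's diff algorithm (which works on nested paths):

```python
def payload_diff(old, new):
    """Classify top-level keys into created / removed / updated, report-style."""
    diff = {"created": [], "removed": [], "updated": []}
    for key in new:
        if key not in old:
            diff["created"].append(key)
        elif old[key] != new[key]:
            diff["updated"].append(key)
    diff["removed"] = [k for k in old if k not in new]
    return diff

old = {"apn": "apn.vodafone.com", "userPlanePercentage": 50}
new = {"apn": "apn.airtel.com", "userPlanePercentage": 50, "qci": 9}
print(payload_diff(old, new))  # {'created': ['qci'], 'removed': [], 'updated': ['apn']}
```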

The check mode feature lets you find the difference between the configuration already present in GigaVUE-FM and the payload you are trying to apply, without applying the new payload to GigaVUE-FM. This allows you to quickly identify the components that get affected if the new configuration is applied.
Enable check mode by adding --check to the command that triggers the playbook.
/usr/bin/ansible-playbook --check -e '@~/cupsSolution/ansible_inputs.json' -i ~/cupsSolution/cups_inventory /usr/local/share/gigamon-ansible/playbooks/cups/deploy_cups.yml
The output of --check is a Deployment Report that contains the difference in configuration as shown:
Deployment Report Checkmode
deploymentPayloadDiff:
  updated:
    - path: //trafficPolicies/LTE/whitelisting/flowMaps/wlMapInternetToEEA/rules/rule_1/interface
      old_value: S11
      new_value: s11
    - path: //trafficPolicies/LTE/whitelisting/flowMaps/wlMapGeoProbe/rules/rule_1/apn
      old_value: apn.airtel.com
      new_value: apn.vodafone.com
  removed: []
  created: []
deploymentRequest: EDIT
deploymentResponse: CHECK MODE
timeStamp: 09-Mar-2020::12:12:1583781162

You can also restore the configuration from the generated Golden Payload file using one of the following two commands:
/usr/bin/ansible-playbook -e '@~/cupsSolution/ansible_inputs.json' -e 'applyGP=True' -i ~/cupsSolution/cups_inventory /usr/local/share/gigamon-ansible/playbooks/cups/deploy_cups.yml
/usr/bin/ansible-playbook -e '@~/cupsSolution/ansible_inputs.json' -e 'applyGP=True' --ask-vault-pass -i ~/cupsSolution/cups_inventory /usr/local/share/gigamon-ansible/playbooks/cups/deploy_cups.yml
You can also enable check mode to find the difference between the configuration on GigaVUE-FM and the configuration in the Golden Payload, using one of the following commands:
/usr/bin/ansible-playbook --check -e '@~/cupsSolution/ansible_inputs.json' -e 'applyGP=True' -i ~/cupsSolution/cups_inventory /usr/local/share/gigamon-ansible/playbooks/cups/deploy_cups.yml
/usr/bin/ansible-playbook --check -e '@~/cupsSolution/ansible_inputs.json' -e 'applyGP=True' --ask-vault-pass -i ~/cupsSolution/cups_inventory /usr/local/share/gigamon-ansible/playbooks/cups/deploy_cups.yml
Note: If the file path is not defined in the ansible_inputs.json file, the golden payload file used to reapply the configuration is searched for in the default location.

If you need to pass a parameter called input_payload to the playbook, the easiest way is to point the golden_payload_path parameter in the ansible_inputs.json file at the input YAML file that you want to apply, and then follow the steps in Reapplying Golden Payload.