Node in Inband Cluster

The following procedure describes how to replace the control card on a GigaVUE‑HC3 node that is part of an inband cluster.

Before you begin, use the show chassis command to record the GigaVUE‑HC3 chassis box ID, and use the show cluster config command to record the cluster ID, cluster name, cluster leader address vip, cluster leader preference, and cluster interface. You will need these values to reinstate the configuration later in the procedure.

To replace a control card in a GigaVUE‑HC3 that is in an inband cluster:

  1. On the cluster leader, back up the entire configuration on the cluster to a text file.

(config) # configuration text generate active running upload <ftp | tftp | scp | sftp>://<upload URL>/<profilename.txt>
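For example, assuming a hypothetical SCP server at 192.168.1.25 and a profile named hc3cluster.txt (adjust the protocol, credentials, and path for your environment):

(config) # configuration text generate active running upload scp://admin@192.168.1.25/config/hc3cluster.txt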

  2. On the cluster leader, remove the configuration of the GigaVUE‑HC3 node from the cluster database. The commands may vary depending on your configuration.

(config) # no map all
(config) # no gsop all
(config) # no vport all
(config) # no tunnel all
(config) # no gsgroup all
(config) # no stack alias <alias>
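For example, if the inband stack link was created with a hypothetical alias of stack25:

(config) # no stack alias stack25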

  3. On the GigaVUE‑HC3 node, issue the following commands:

(config) # no cluster enable
(config) # write memory

  4. On the cluster leader, remove the GigaVUE‑HC3 node information from the cluster database. Answer YES when prompted. This removes the GigaVUE‑HC3 from the cluster.

(config) # no chassis box-id <box ID>

WARNING !! All the cards, ports, and traffic configuration will be lost.

Enter 'YES' to confirm this operation: YES
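For example, if the GigaVUE‑HC3 being serviced has a hypothetical box ID of 3:

(config) # no chassis box-id 3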

  5. Replace the control card by following the procedure for Standalone Node up to and including Step 11. (Since the GigaVUE‑HC3 has been removed from the cluster, it is a standalone node.)
  6. When the GigaVUE‑HC3 is back up, reinstate the original cluster configuration, including the cluster ID, cluster name, cluster leader address vip, cluster leader preference, and cluster interface, as well as the chassis box ID. Then reinstate the interface configuration, chassis and card configuration, and GigaStream configuration used for the inband cluster stack links.

The cluster configuration is as follows:

(config) # cluster id <cluster ID>
(config) # cluster name <cluster name>
(config) # cluster leader address vip <cluster leader vip>
(config) # cluster interface <interface>
(config) # cluster leader preference <preference number for leader, standby, or normal>
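For example, using hypothetical values throughout (cluster ID 100, leader VIP 10.10.10.100, and the inband cluster interface):

(config) # cluster id 100
(config) # cluster name cluster100
(config) # cluster leader address vip 10.10.10.100
(config) # cluster interface inband
(config) # cluster leader preference 90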

The interface configuration is as follows:

(config) # no interface eth1 zeroconf
(config) # no interface eth2 zeroconf
(config) # interface inband zeroconf

The chassis and card configuration is as follows:

(config) # chassis box-id <box ID>
(config) # card slot <slot number>
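For example, with a hypothetical box ID of 3 and a card in slot 1:

(config) # chassis box-id 3
(config) # card slot 1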

The GigaStream configuration is as follows:

(config) # port <box ID>/<slot number>/<stack ports> params admin enable
(config) # port <box ID>/<slot number>/<stack ports> type stack
(config) # gigastream alias <GigaStream alias> port-list <box ID>/<slot number>/<stack ports>
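For example, assuming box ID 3, slot 1, stack ports x1..x4, and a hypothetical GigaStream alias of stack-gs:

(config) # port 3/1/x1..x4 params admin enable
(config) # port 3/1/x1..x4 type stack
(config) # gigastream alias stack-gs port-list 3/1/x1..x4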

  7. On the cluster leader, configure the GigaVUE‑HC3. The offline provisioning includes the chassis, card, port, and GigaStream stack port configuration. The chassis configuration includes the chassis box ID, the serial number, and the node type of hc3.

(config) # chassis box-id <box ID> serial-num <serial number> type hc3
(config) # card slot <box ID>/<slot number> product-code <card product code>
(config) # port <box ID>/<slot number>/<stack ports> params admin enable
(config) # port <box ID>/<slot number>/<stack ports> type stack
(config) # gigastream alias <GigaStream alias> port-list <box ID>/<slot number>/<stack ports>
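For example, continuing with the same hypothetical values, plus a hypothetical serial number of A1234B and card product code of PRT-HC3-X24 (confirm the actual serial number and product code for your hardware):

(config) # chassis box-id 3 serial-num A1234B type hc3
(config) # card slot 3/1 product-code PRT-HC3-X24
(config) # port 3/1/x1..x4 params admin enable
(config) # port 3/1/x1..x4 type stack
(config) # gigastream alias stack-gs port-list 3/1/x1..x4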

Note: Step 6 configures the GigaVUE‑HC3 node. Step 7 configures the cluster leader. The stack ports in Step 7 are the same as those in Step 6, under the GigaStream configuration.

  8. On the GigaVUE‑HC3 node, issue the following command for the node to rejoin the cluster:

(config) # cluster enable

  9. When the GigaVUE‑HC3 node has rejoined the cluster, apply the previously saved configuration from the cluster leader:

(config) # configuration text fetch <http | https | ftp | tftp | scp | sftp>://<download URL>/<profilename.txt> apply fail-continue verbose
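For example, fetching the profile saved in step 1 from the same hypothetical SCP server:

(config) # configuration text fetch scp://admin@192.168.1.25/config/hc3cluster.txt apply fail-continue verbose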

  10. Verify that the information on the cluster matches the previously saved configuration for the chassis, cards, and traffic. Use the following CLI commands, or others, depending on your configuration:

(config) # show version
(config) # show cluster global
(config) # show chassis
(config) # show cards
(config) # show map
(config) # show map stats all