How to Set Up Inband Cluster Management on a New Cluster

This example illustrates how to configure a four-node Inband cluster with the zeroconf feature enabled. It covers GigaVUE H Series nodes in a cluster. To add a GigaVUE TA Series node or a Certified Traffic Aggregation White Box, refer to Setting up Inband Cluster Management with GigaVUE TA Series (Including a White Box) on page 868.

Before you start, identify the node that will be the leader, as well as the nodes that will serve as standby within the cluster.

Note:  GigaVUE TA Series nodes and white boxes with GigaVUE‑OS can only be configured as normal nodes.

In this example, Seattle is the leader.

The nodes to be configured in the Inband cluster are:

Node Number   Node Name       Node Type
-----------   -------------   -----------
1             Seattle         GigaVUE-HC3
2             Washington      GigaVUE-HC3
3             Boston          GigaVUE-HC1
4             San Francisco   GigaVUE-HC2

Note:  On the GigaVUE-HC1 (Boston), the control card is embedded.

To configure Inband Cluster Management, keep a command shell open to the leader as well as to each target node, because the offline configuration must be applied on the leader.
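
If you prefer to script these parallel sessions instead of keeping several terminals open, the following minimal Python sketch shows one way to hold SSH sessions to the leader and a target node at the same time. It assumes the paramiko library, the example management addresses used later in this walkthrough, placeholder credentials, and that the nodes accept CLI commands over SSH exec; it is illustrative only and not part of GigaVUE-OS.

# Illustrative sketch only: keep SSH sessions to the leader and a target node
# open at the same time. Addresses and credentials are example placeholders.
import paramiko

nodes = {
    "Seattle (leader)": "10.150.52.6",
    "Washington (target)": "10.150.52.8",
}

sessions = {}
for name, address in nodes.items():
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(address, username="admin", password="example-password")
    sessions[name] = client

# Run a read-only command on each open session as a sanity check.
for name, client in sessions.items():
    _stdin, stdout, _stderr = client.exec_command("show version")
    print(name, stdout.read().decode())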

Configuration Steps for Leader: Seattle

1. Open an SSH or terminal session to the Seattle node.

Part 1: Using the Jump-Start Wizard to Configure Node 1

2. In configure mode, enter configuration jump-start to start the jump-start wizard:

gigamon-0d0024 > enable

gigamon-0d0024 # configure terminal

gigamon-0d0024 (config) # configuration jump-start

GigaVUE‑OS configuration wizard

3. Enter the parameter values to configure the leader.

Step 1: Hostname? [gigamon-0d0024] Seattle

Step 2: Management interface <eth0 eth2 eth3>? [eth0]

Step 3: Use DHCP on eth0 interface? no

Step 4: Use zeroconf on eth0 interface? [no]

Step 5: Primary IPv4 address and masklen? [0.0.0.0/0] 10.150.52.6/24

Step 6: Default gateway? 10.150.52.1

Step 7: Primary DNS server? 192.168.2.20

Step 8: Domain name? gigamon.com

Step 9: Enable IPv6? [yes]

Step 10: Enable IPv6 autoconfig (SLAAC) on eth0 interface? [no]

Step 11: Enable DHCPv6 on eth0 interface? [no]

Step 12: Enable secure cryptography? [no]

Step 13: Enable secure passwords? [no]

Step 14: Minimum password length? [8]

Step 15: Admin password?

Please enter a password. Password is a must.

Step 15: Admin password?

Step 15: Confirm admin password?

Note:  In Step 16, accept the default of No so that you do not enable the cluster.

Step 16: Cluster enable? [no]

Note:  In Step 17, assign the box ID of your chassis.

Step 17: Box-id for the chassis? [1] 7

Note:  To change an answer in the jump-start wizard, enter the number of the step that you want to change. Press Enter to save your changes and exit.

Choice:

Configuration changes saved.

System in classic mode

Seattle (config) #

Part 2: Configuring Inband Cluster on the Leader

4. Disable the zeroconf feature on the default cluster interface (eth1 on the HCCv2 control card) of the Seattle node, then set the cluster interface to Inband and configure the relevant cluster information.

Seattle (config) # no interface eth1 zeroconf

Seattle (config) # cluster interface inband

Seattle (config) # cluster id 600

Seattle (config) # cluster name 600

Seattle (config) # cluster leader address vip 10.150.52.233 /24

Seattle (config) # interface inband zeroconf

Seattle (config) #

5. Enter show interfaces to perform a confirmation check.
6. Make sure that no IP address is assigned on eth1 and that a new IP address is automatically assigned to the Inband interface.

Seattle (config) # show interfaces

.

.

.

Interface eth1 status:

Comment:

Admin up: yes

Link up: yes

DHCP running: no

IP address:

Note:  The IP address field on eth1 should be empty.

Netmask:

IPv6 enabled: no

Speed: 1000Mb/s (auto)

.

.

.

Interface inband status:

Comment:

Admin up: yes

Link up: yes

DHCP running: no

IP address: 169.254.51.255

Note:  The IP address field is automatically assigned.

Netmask: 255.255.0.0

Seattle (config) #
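
The address that zeroconf assigns to the Inband interface always comes from the IPv4 link-local range 169.254.0.0/16. A quick way to confirm that an auto-assigned address is a valid link-local (zeroconf) address is sketched below in Python; the address is the example value shown above, and the check itself is not a GigaVUE-OS command.

# Illustrative sketch only: confirm that the auto-assigned Inband address is an
# IPv4 link-local (zeroconf) address, i.e. it falls inside 169.254.0.0/16.
import ipaddress

inband_ip = ipaddress.ip_address("169.254.51.255")  # example value from the output above

print(inband_ip.is_link_local)                              # True
print(inband_ip in ipaddress.ip_network("169.254.0.0/16"))  # True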

7. Enter show cluster configured to display the current cluster configuration.

Seattle [600: leader] (config) # show cluster configured

Global cluster config:

Cluster enabled: no

Cluster ID: 600

Cluster name: 600

Cluster control interface: inband

Note:  The cluster control interface is set to Inband.

Cluster port: 60102

Cluster expected nodes: 2

Cluster startup time: 180

Cluster shared secret: 1234567890123456

Cluster leader preference: 60

Cluster leader auto-discovery enabled: yes

Cluster leader manual port: 60102

Cluster leader virtual IP address: 10.150.52.233/24

Cluster leader management interface: eth0

Seattle [600:leader] (config) #
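
If you capture the show cluster configured output as text (for example, through a script), the key settings can also be checked programmatically. The following Python sketch parses the colon-separated output shown above and verifies the values that matter for Inband clustering; the sample text is abbreviated from the output above, and the check is illustrative only, not a GigaVUE-OS command.

# Illustrative sketch only: parse "show cluster configured" output (captured as
# text) into a dictionary and verify the Inband-related settings.
sample_output = """\
Cluster enabled: no
Cluster ID: 600
Cluster name: 600
Cluster control interface: inband
Cluster leader virtual IP address: 10.150.52.233/24
"""

settings = {}
for line in sample_output.splitlines():
    key, sep, value = line.partition(":")
    if sep:
        settings[key.strip()] = value.strip()

assert settings["Cluster control interface"] == "inband"
assert settings["Cluster ID"] == "600"
print(settings)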

Part 3: The Configured Leader is Ready for Inband Cluster

8. Enter the card slot command, specifying the box ID and slot number. If the designated stack port is located in slot 4, wait for the card in slot 4 to reach the "up" oper state.

Seattle (config) # card slot 7/4

9. Enter show card to perform a confirmation check.

Seattle [600: leader] (config) # show card

10. Assign and enable the stack ports, and combine them into a stack GigaStream.
11. Enable the cluster.

Note:  The Seattle node is now the leader as indicated in the CLI prompt.

Seattle (config) # port 7/4/x5..x20 type stack

Seattle (config) # port 7/4/x5..x20 params admin enable

Seattle (config) # gigastream alias big_bridge_7to4 port 7/4/x5..x20

Seattle (config) # cluster enable

Seattle [600: leader] (config) #
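
The port range 7/4/x5..x20 in the commands above is shorthand for sixteen individual ports on box 7, slot 4. The following Python sketch expands that notation so you can see exactly which ports are assigned to the stack GigaStream; the helper name is hypothetical, and the expansion assumes the simple xSTART..xEND form used in this example.

# Illustrative sketch only: expand the box/slot/xSTART..xEND port-range
# shorthand into individual port IDs (hypothetical helper, not a CLI command).
def expand_port_range(port_range):
    box, slot, ports = port_range.split("/")
    start, end = ports.split("..")
    first, last = int(start.lstrip("x")), int(end.lstrip("x"))
    return ["{}/{}/x{}".format(box, slot, n) for n in range(first, last + 1)]

ports = expand_port_range("7/4/x5..x20")
print(len(ports), "ports:", ports[0], "...", ports[-1])  # 16 ports: 7/4/x5 ... 7/4/x20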

Part 4: Apply Offline Remote Node Configuration on the Leader Node

12. Apply the offline configuration for the remote node on the leader, as shown in the following CLI commands, to complete the leader setup.

Seattle [600: leader] (config) # chassis box-id 8 serial-num 12340 type hc3

Seattle [600: leader] (config) # card slot 8/1 product-code 132-0087

Seattle [600: leader] (config) # port 8/1/x5..x20 type stack

! Box '8' is down, unable to validate SFP type for stack port.

Note:  The "Box '8' is down, unable to validate SFP type for stack port" message is expected behavior.

.

.

.

Seattle [600: leader] (config) # port 8/1/x5..x20 params admin enable

Seattle [600: leader] (config) # gigastream alias big_bridge_8to7 port 8/1/x5..x20

Seattle [600: leader] (config) # write memory

13. Enter show running-config to perform a confirmation check.

Seattle [600: leader] (config) # show running-config
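
Because the offline remote-node commands follow the same pattern for every node you pre-provision (chassis, card, stack ports, GigaStream), they can be generated from a short description of the node. The Python sketch below uses a hypothetical helper, not a GigaVUE-OS feature, and simply reproduces the command pattern used above with Washington's example values.

# Illustrative sketch only: generate the offline remote-node configuration
# commands from a node description (hypothetical helper, not part of the CLI).
def offline_node_commands(box_id, serial, chassis_type, slot, product_code,
                          port_range, gigastream_alias):
    prefix = "{}/{}".format(box_id, slot)
    return [
        "chassis box-id {} serial-num {} type {}".format(box_id, serial, chassis_type),
        "card slot {} product-code {}".format(prefix, product_code),
        "port {}/{} type stack".format(prefix, port_range),
        "port {}/{} params admin enable".format(prefix, port_range),
        "gigastream alias {} port {}/{}".format(gigastream_alias, prefix, port_range),
    ]

# Washington's example values from the commands above.
for command in offline_node_commands(8, 12340, "hc3", 1, "132-0087",
                                     "x5..x20", "big_bridge_8to7"):
    print(command)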

Configuration Steps for Standby Node: Washington

1.   Open an SSH or terminal session to the Washington node.

Part 1: Using the Jump-Start Wizard to Configure Node 2

2. In the command shell for the Washington node, enter the following commands to start the jump-start wizard:
o   enable
o   configure terminal
o   configuration jump-start

Gigamon GigaVUE‑OS Chassis

System in classic mode

gigamon-040077 > enable

gigamon-040077 # configure terminal

gigamon-040077 (config) # configuration jump-start

GigaVUE‑OS configuration wizard

3. Enter the parameter values to configure the standby node.

Step 1: Hostname? [gigamon-040077] Washington

Step 2: Management interface <eth0 eth2 eth3>? [eth0]

Step 3: Use DHCP on eth0 interface? no

Step 4: Use zeroconf on eth0 interface? [no] no

Step 5: Primary IPv4 address and masklen? [0.0.0.0/0] 10.150.52.8/24

Step 6: Default gateway? 10.150.52.1

Step 7: Primary DNS server? 192.168.2.20

Step 8: Domain name? gigamon.com

Step 9: Enable IPv6? [yes] yes

Step 10: Enable IPv6 autoconfig (SLAAC) on eth0 interface? [no] no

Step 11: Enable DHCPv6 on eth0 interface? [no] no

Step 12: Enable secure cryptography? [no]

Step 13: Enable secure passwords? [no]

Step 14: Minimum password length? [8]

Step 15: Admin password?

Please enter a password. Password is a must.

Step 15: Admin password?

Step 15: Confirm admin password?

Note:  In Step 16, accept the default of No so that you do not enable the cluster.

Step 16: Cluster enable? [no] no

Note:  In Step 17, the value 8 is the box ID that you assign to this chassis. Assign your own box ID.

Step 17: Box-id for the chassis? [1] 8

Note:  To change an answer in the jump-start wizard, enter the number of the step that you want to change. Press Enter to save your changes and exit.

Choice:

Configuration changes saved.

System in classic mode

Part 2: Configure Inband Cluster on the Remote Target Node 2

4. Disable the zeroconf feature on the default cluster interface (eth1 on the HCCv2 control card) of the Washington node, then set the cluster interface to Inband and configure the relevant cluster information.
5. Enter the parameter values to disable the zeroconf feature, as shown in the following CLI example.

Washington (config) # no interface eth1 zeroconf

Note:  Zeroconf is disabled on the default cluster interface of the HCCv2 control card (eth1).

Washington (config) # cluster interface inband

Washington (config) # cluster id 600

Washington (config) # cluster name 600

Washington (config) # cluster leader address vip 10.150.52.233 /24

Washington (config) # interface inband zeroconf

Washington (config) # card slot 8/1

Washington (config) # port 8/1/x5..x20 type stack

Washington (config) # port 8/1/x5..x20 params admin enable

Washington (config) # gigastream alias big_bridge_8to7 port 8/1/x5..x20

Washington (config) # wr mem

6. Enter show interfaces to perform a confirmation check.
7. Make sure that no IP address is assigned to eth1 and that an IP address is automatically assigned to the Inband interface.

Washington (config) # show interfaces

.

.

.

Interface eth1 status:

Comment:

Admin up: yes

Link up: yes

DHCP running: no

IP address:

Note:  The IP address field is NULL for eth1.

Netmask:

IPv6 enabled: no

.

.

.

Interface inband status:

Comment:

Admin up: yes

Link up: yes

DHCP running: no

IP address: 169.254.228.191

Netmask: 255.255.0.0

IPv6 enabled: yes

.

.

.

Washington (config) #

8. Enter show cluster configured to display the cluster configuration settings.
9. Make sure that the cluster control interface is set to Inband.

Washington (config) # show cluster configured

Global cluster config:

Cluster enabled: no

Cluster ID: 600

Cluster name: 600

Cluster control interface: inband

Note:  The cluster control interface is set to Inband.

Cluster port: 60102

Cluster expected nodes: 1

Cluster startup time: 180

Cluster shared secret: 1234567890123456

Cluster leader preference: 50

Cluster leader auto-discovery enabled: yes

Cluster leader manual port: 60102

Cluster leader virtual IP address: 10.150.52.233/24

Cluster leader management interface: eth0

Washington (config) #

Washington (config) # show port params port 8/1/x5

Parameter 8/1/x5

====================== ===============

Name Alias:

Type: stack

Admin: enabled

Link status: up

Note:  The Link status indicates that the stack port is in the "up" state.

Auto Negotiate: off

Duplex: full

Speed (Mbps): 10000

MTU: 9400

Force Link Up: off

...
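
Before enabling the cluster on the joining node, the stack port should show Type stack, Admin enabled, and Link status up, as in the output above. The Python sketch below checks those fields in a captured copy of the output; the sample text is abbreviated, and the check is illustrative only, not a GigaVUE-OS command.

# Illustrative sketch only: verify the stack-port fields from a captured
# "show port params" output before enabling the cluster.
sample_output = """\
Type: stack
Admin: enabled
Link status: up
"""

expected = {"Type": "stack", "Admin": "enabled", "Link status": "up"}

actual = {}
for line in sample_output.splitlines():
    key, sep, value = line.partition(":")
    if sep:
        actual[key.strip()] = value.strip()

for key, value in expected.items():
    status = "OK" if actual.get(key) == value else "MISMATCH"
    print("{:<12} expected {:<8} got {:<8} {}".format(key, value, str(actual.get(key)), status))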

10. On the leader command shell, ping the Washington Inband interface.

Seattle [600: leader] (config) # ping 169.254.228.191

PING 169.254.228.191 (169.254.228.191) 56(84) bytes of data.

64 bytes from 169.254.228.191: icmp_seq=1 ttl=64 time=2.10 ms

64 bytes from 169.254.228.191: icmp_seq=2 ttl=64 time=0.153 ms

64 bytes from 169.254.228.191: icmp_seq=3 ttl=64 time=0.145 ms

64 bytes from 169.254.228.191: icmp_seq=4 ttl=64 time=0.135 ms

Part 3: Enable the Cluster on the Remote Target Node

11. Enable the cluster on Node 2 so that it joins with the "standby" role.

Washington (config) # cluster enable

Washington [600: unknown] (config) #

Note:  The Washington node is in an unknown transitional state.

Washington [600: standby] (config) #

Note:  The Washington node is now the standby and has finished joining the leader.

12. On the leader command shell, enter show chassis to confirm that both chassis show an oper status of "up".

Seattle [600: leader] (config) # show chassis

Box# Hostname Config Oper Status HW Type Product# Serial# HW Rev SW Rev

----------------------------------------------------------------------

7 * Seattle yes up HC3-Chassis 132-0098 80016 A0 3.2.00

8 Washington yes up HC3-Chassis 132-0098 12340 AA 3.2.00

Seattle [600: leader] (config) #
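
When the cluster grows beyond a couple of members, it can help to script the check that every box in the show chassis output is operationally up. The Python sketch below parses a captured copy of the output shown above; it assumes the row layout shown (each data row begins with the box number and contains the literal word "up" when the chassis is operationally up) and is illustrative only, not a GigaVUE-OS command.

# Illustrative sketch only: confirm that every box listed by "show chassis"
# reports an oper status of "up" (output captured as text).
sample_output = """\
Box# Hostname Config Oper Status HW Type Product# Serial# HW Rev SW Rev
----------------------------------------------------------------------
7 * Seattle yes up HC3-Chassis 132-0098 80016 A0 3.2.00
8 Washington yes up HC3-Chassis 132-0098 12340 AA 3.2.00
"""

rows = [line.split() for line in sample_output.splitlines()
        if line and line[0].isdigit()]

all_up = all("up" in row for row in rows)
print("boxes:", [row[0] for row in rows], "all up:", all_up)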

Configuration Steps for Node 3: Boston

This part demonstrates the third node joining the cluster. For the two-node Inband cluster setup, all configuration values, including those for the joining node, were applied on the leader.

You will now configure the remote target node. For the third node to join, the leader must already have the second node's configuration information. Preserve that portion of the configuration on the leader, and append the additional third-node configuration on top of the existing information.

1.   Open an SSH or terminal session to the Boston node.

Part 1: Using the Jump-Start Wizard to Configure Node 3

2. In the command shell for the Boston node, enter the following commands to start the jump-start wizard:
o   enable
o   configure terminal
o   configuration jump-start

Gigamon GigaVUE‑OS Chassis

System in classic mode

gigamon-0d0025 > enable

gigamon-0d0025 # configure terminal

gigamon-0d0025 (config) # configuration jump-start

3. Enter the parameter values to configure the target node.

GigaVUE‑OS configuration wizard

Do you want to use the wizard for initial configuration? yes

Step 1: Hostname? [gigamon-0d0025] Boston

Step 2: Management interface? [eth0]

Step 3: Use DHCP on eth0 interface? no

Step 4: Use zeroconf on eth0 interface? [no]

Step 5: Primary IPv4 address and masklen? [0.0.0.0/0] 10.150.52.20/24

Step 6: Default gateway? 10.150.52.1

Step 7: Primary DNS server? 192.168.2.20

Step 8: Domain name? gigamon.com

Step 9: Enable IPv6? [yes]

Step 10: Enable IPv6 autoconfig (SLAAC) on eth0 interface? [no]

Step 11: Enable DHCPv6 on eth0 interface? [no]

Step 12: Enable secure cryptography? [no]

Step 13: Enable secure passwords? [no]

Step 14: Minimum password length? [8]

Step 15: Admin password?

Please enter a password. Password is a must.

Step 15: Admin password?

Step 15: Confirm admin password?

Note:  In Step 16, accept the default No.

Step 16: Cluster enable? [no]

Note:  In Step 17, assign a box ID for node 3.

Step 17: Box-id for the chassis? [1] 21

Note:  To change an answer in the jump-start wizard, enter the number of the step that you want to change. Press Enter to save your changes and exit.

Choice:

Configuration changes saved.

To return to the wizard from the CLI, enter the "configuration jump-start" command from configure mode. Launching CLI...

System in classic mode

Boston > enable

Boston # configure terminal

Boston (config) #

Part 2: Configure Inband Cluster on the Remote Target Node 3

4. Disable the zeroconf feature on the default cluster interface (eth1 on the HCCv2 control card) of the Boston node, then set the cluster interface to Inband and configure the relevant cluster information.
5. Enter the parameter values to disable the zeroconf feature, as shown in the following CLI example.

Boston (config) # no interface eth1 zeroconf

Note:  Zeroconf is disabled on eth1.

Boston (config) # cluster interface inband

Boston (config) # cluster id 600

Boston (config) # cluster name 600

Boston (config) # cluster leader address vip 10.150.52.233 /24

Boston (config) # interface inband zeroconf

6. Enter show interfaces to perform a confirmation check.
7. Make sure that no IP address is assigned to eth1 and that an IP address is automatically assigned to the Inband interface.
8. Make sure that the cluster control interface displays the Inband value.

Boston (config) # show interfaces

.

.

.

Interface eth1 status:

Comment:

Admin up: yes

Link up: yes

DHCP running: no

IP address:

Note:  The IP address field is NULL for eth1.

Netmask:

IPv6 enabled: no

.

.

.

Interface inband status:

Comment:

Admin up: yes

Link up: yes

DHCP running: no

IP address: 169.254.145.136

Note:  The IP address field is automatically assigned.

Netmask: 255.255.0.0

IPv6 enabled: yes

.

.

.

Boston (config) #

Boston (config) # show cluster configured

Global cluster config:

Cluster enabled: no

Cluster ID: 600

Cluster name: 600

Cluster control interface: inband

Note:  The cluster control interface is set to Inband.

Cluster port: 60102

Cluster expected nodes: 1

Cluster startup time: 180

Cluster shared secret: 1234567890123456

Cluster leader preference: 60

Cluster leader auto-discovery enabled: yes

Cluster leader manual port: 60102

Cluster leader virtual IP address: 10.150.52.233/24

Cluster leader management interface: eth0

Boston (config) #

Part 3: Configure Relevant Stack Ports and Node 3 Configuration on the Leader

9. On the leader command shell, configure the local stack ports. Enter the configuration information as shown.

Seattle [600: leader] (config) # card slot 4/2

Seattle [600: leader] (config) # port 4/2/x5..x6 type stack

Seattle [600: leader] (config) # port 4/2/x5..x6 params admin enable

Seattle [600: leader] (config) # gigastream alias smaller_bridge_4to21 port 4/2/x5..x6

10. Configure the offline stack ports for Node 3.

Seattle [600: leader] (config) # chassis box-id 21 serial-num 40263 type hc2

Seattle [600: leader] (config) # card slot 21/3 product-code 132-0045

Seattle [600: leader] (config) # port 21/3/x1..x2 type stack

! Box '21' is down, unable to validate SFP type for stack port.

Seattle [600: leader] (config) # port 21/3/x1..x2 params admin enable

Seattle [600: leader] (config) # gigastream alias smaller_bridge_21to7 port 21/3/x1..x2

11. Enter show running-config to perform a confirmation check.

Seattle [600: leader] (config) # show running-config

##

Part 4: Configure Stack Ports for Joining Node 3

12. In the command shell for Node 3, enter the stack port configuration information.

Boston (config) # card slot 21/3

Boston (config) # port 21/3/x1..x2 type stack

Boston (config) # port 21/3/x1..x2 params admin enable

Boston (config) # gigastream alias smaller_bridge_21to7 port 21/3/x1..x2

13. Enter show port params to perform a confirmation check.

Boston (config) # show port params port 21/3/x1..x2

Parameter 21/3/x1 21/3/x2

====================== =============== ===============

Name Alias:

Type: stack stack

Note:  The stack value indicates the port type.

Admin: enabled enabled

Link status: up up

Note:  The Link Status indicates the port’s status. In this case, it is “up”.

Auto Negotiate: off off

Duplex: full full

Speed (Mbps): 10000 10000

MTU: 9400 9600

Force Link Up: off off

Port Relay: N/A N/A

...

14. On the command shell for the leader, ping the Boston node Inband interface.

Seattle [600: leader] (config) # ping 169.254.145.136

PING 169.254.145.136 (169.254.145.136) 56(84) bytes of data.

64 bytes from 169.254.145.136: icmp_seq=1 ttl=64 time=3.44 ms

64 bytes from 169.254.145.136: icmp_seq=2 ttl=64 time=0.157 ms

Part 5: Enable Cluster on the Joining Node 3

15. Enter cluster enable in the command shell on the Boston node.

Boston (config) # cluster enable

Boston [600: unknown] (config) #

Note:  The transitional state is unknown.

Boston [600: normal] (config) #

Note:  The normal state indicates that the node has finished joining the cluster.

16. On the command shell for the leader, enter show chassis to make sure that all chassis are in the "up" oper state.

Seattle [600: leader] (config) # show chassis

Box# Hostname Config Oper Status HW Type Product# Serial# HW Rev SW Rev

-----------------------------------------------------------------------------

7 * Seattle yes up HC3-Chassis 132-0098 80016 A0 3.2.00

8 Washington yes up HC3-Chassis 132-0098 12340 AA 3.2.00

21 Boston yes up HC1-Chassis 132-00A2 40263 A1 3.2.00

Seattle [600: leader] (config) #

17. On the command shell for the leader, enter show card to display all line cards in the Inband cluster. Make sure all the line cards are listed.

Note:  The show card output reflects the three-node Inband cluster formation.

Configuration Steps for Node 4: San Francisco

1.   Configure Node 4, Sanfrancisco, as a normal node in the Inband cluster.
2. Open an SSH or terminal session to the Sanfrancisco node.

Part 1: Using the Jump-Start Wizard to Configure Node 4

3. In the command shell for the Sanfrancisco node, enter the following commands to start the jump-start wizard:
o   enable
o   configure terminal
o   configuration jump-start

gigamon-0d000f > enable

gigamon-0d000f # configure terminal

gigamon-0d000f (config) # configuration jump-start

GigaVUE‑OS configuration wizard

4. Enter configuration information for Node 4.

Gigamon GigaVUE‑OS

GigaVUE‑OS configuration wizard

Do you want to use the wizard for initial configuration? yes

Step 1: Hostname? [gigamon-0d000f] Sanfrancisco

Step 2: Management interface? [eth0]

Step 3: Use DHCP on eth0 interface? no

Step 4: Use zeroconf on eth0 interface? [no]

Step 5: Primary IPv4 address and masklen? [0.0.0.0/0] 10.150.52.22/24

Step 6: Default gateway? 10.150.52.1

Step 7: Primary DNS server? 192.168.2.20

Step 8: Domain name? gigamon.com

Step 9: Enable IPv6? [yes]

Step 10: Enable IPv6 autoconfig (SLAAC) on eth0 interface? [no]

Step 11: Enable DHCPv6 on eth0 interface? [no]

Step 12: Enable secure cryptography? [no]

Step 13: Enable secure passwords? [no]

Step 14: Minimum password length? [8]

Step 15: Admin password?

Please enter a password. Password is a must.

Step 15: Admin password?

Step 15: Confirm admin password?

Note:  In Step 16, accept the default of No so that the cluster is not enabled.

Step 16: Cluster enable? [no]

Step 17: Box-id for the chassis? [1] 22

Note:  To change an answer in the jump-start wizard, enter the number of the step that you want to change. Press Enter to save your changes and exit.

Choice:

Configuration changes saved.

To return to the wizard from the CLI, enter the "configuration jump-start" command from configure mode. Launching CLI...

System in classic mode

Sanfrancisco > enable

Sanfrancisco # configure terminal

Sanfrancisco (config) #

Part 2: Configure the Inband Cluster on the Remote Target Node 4

Node 4, Sanfrancisco, does not have a default cluster interface on the GigaVUE-HB1, so you do not need to disable the zeroconf feature as you did on the other nodes.

Sanfrancisco (config) # cluster interface inband

Sanfrancisco (config) # cluster id 600

Sanfrancisco (config) # cluster name 600

Sanfrancisco (config) # cluster leader address vip 10.150.52.233 /24

Sanfrancisco (config) # interface inband zeroconf

Sanfrancisco (config) #

5. Enter the following command to perform a confirmation check.

Sanfrancisco (config) # show interfaces inband

Interface inband status:

Comment:

Admin up: yes

Link up: yes

DHCP running: no

IP address: 169.254.179.192

Netmask: 255.255.0.0

.

.

.

Sanfrancisco (config) #

Sanfrancisco (config) # show cluster configured

Global cluster config:

Cluster enabled: no

Cluster ID: 600

Cluster name: 600

Cluster control interface: inband

Note:  The cluster control interface indicates that the cluster is Inband.

Cluster port: 60102

Cluster expected nodes: 1

Cluster startup time: 180

Cluster shared secret: 1234567890123456

Cluster leader preference: 40

Cluster leader auto-discovery enabled: yes

Cluster leader manual port: 60102

Cluster leader virtual IP address: 10.150.52.233/24

Cluster leader management interface: eth0

Sanfrancisco (config) #

Part 3: Configure Relevant Stack Ports and Offline Node 4 Configuration Information

6. On the leader command shell, configure the stack ports in the cluster.

Seattle [600: leader] (config) # card slot 8/5

Seattle [600: leader] (config) # port 8/5/x1 type stack

Seattle [600: leader] (config) # port 8/5/x1 params admin enable

7. Configure the offline stack port for Node 4.

Seattle [600: leader] (config) # chassis box-id 22 serial-num B0020 type hb1

Seattle [600: leader] (config) # card slot 22/1 product-code 132-00AF

Seattle [600: leader] (config) # port 22/1/x3 type stack

! Box '22' is down, unable to validate SFP type for stack port.

Note:  The "Box '22' is down, unable to validate SFP type for stack port" message is expected behavior.

Seattle [600: leader] (config) # port 22/1/x3 params admin enable

Seattle [600: leader] (config) #

8. Enter show running-config to perform a confirmation check.

Part 4: Configure the Stack Port for the Joining Node 4

9. On the command shell for Node 4, enter the configuration information.

Sanfrancisco (config) # card slot 22/1

Sanfrancisco (config) # port 22/1/x3 type stack

Sanfrancisco (config) # port 22/1/x3 params admin enable

Sanfrancisco (config) #

10. Enter show port params to perform a confirmation check.

Sanfrancisco (config) # show port params port 22/1/x3

Parameter 22/1/x3

====================== ===============

Name Alias:

Type: stack

Note:  The stack value indicates the Node 4 port type.

Admin: enabled

Link status: up

Note:  The Link Status value indicates that the port is “up”.

Auto Negotiate: off

Duplex: full

Speed (Mbps): 10000

MTU: 9400

Force Link Up: off

Port Relay: N/A

...

11. On the command shell for the leader, ping the Sanfrancisco node Inband interface.

Seattle [600: leader] (config) # ping 169.254.179.192

PING 169.254.179.192 (169.254.179.192) 56(84) bytes of data.

64 bytes from 169.254.179.192: icmp_seq=1 ttl=64 time=1.81 ms

64 bytes from 169.254.179.192: icmp_seq=2 ttl=64 time=0.155 ms

64 bytes from 169.254.179.192: icmp_seq=3 ttl=64 time=0.136 ms

Part 5: Enable the Cluster on the Joining Node 4

12. Enter cluster enable in the command shell of Node 4.

Sanfrancisco (config) # cluster enable

Sanfrancisco [600: unknown] (config) #

Note:  The transitional state is unknown.

Sanfrancisco [600: normal] (config) #

Note:  The "normal" value indicates that the node has finished joining the cluster.

13. On the command shell for the leader, enter show chassis to confirm that all chassis in the cluster are in the "up" state.

Seattle [600: leader] (config) # show chassis

Box# Hostname Config Oper Status HW Type Product# Serial# HW Rev SW Rev

-----------------------------------------------------------------------------

7 * Seattle yes up HC3-Chassis 132-0098 80016 A0 3.2.00

8 Washington yes up HC3-Chassis 132-0098 12340 AA 3.2.00

21 Boston yes up HC2-Chassis 132-00A2 40263 A1 3.2.00

22 Sanfrancisco yes up HC1-Chassis 132-00B1 B0020 3.6 3.2.00

Seattle [600: leader] (config) #
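
As a final check, you can confirm that exactly the four expected box IDs from this example (7, 8, 21, and 22) appear and are up in the show chassis output. The Python sketch below applies that membership check to a captured copy of the output above; it is illustrative only, assumes the same row layout, and is not a GigaVUE-OS command.

# Illustrative sketch only: verify that the expected box IDs have joined and
# are up, using a captured copy of the "show chassis" output above.
expected_boxes = {"7", "8", "21", "22"}

sample_output = """\
7 * Seattle yes up HC3-Chassis 132-0098 80016 A0 3.2.00
8 Washington yes up HC3-Chassis 132-0098 12340 AA 3.2.00
21 Boston yes up HC2-Chassis 132-00A2 40263 A1 3.2.00
22 Sanfrancisco yes up HC1-Chassis 132-00B1 B0020 3.6 3.2.00
"""

rows = [line.split() for line in sample_output.splitlines() if line.strip()]
up_boxes = {row[0] for row in rows if "up" in row}

missing = expected_boxes - up_boxes
print("all expected boxes up" if not missing else "missing or down: " + ", ".join(sorted(missing)))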

14. On the command shell of the leader, enter show card to display all line cards in the Inband cluster.

Seattle [600: leader] (config) # show card

Seattle [600: leader] (config) #

Seattle [600: leader] (config) # write memory

Note:  The write memory command commits the information to the leader database.

Node 4, Sanfrancisco, has joined the cluster, and the Inband cluster formation is complete.