cluster

Required Command-Line Mode = Enable or Configure

Use the cluster command to create and manage clusters. A cluster is a group of GigaVUE HC Series nodes operating as a unified fabric, in which a packet entering a port on one node can be sent to any destination port on another node.

Refer to the “Creating and Managing Clusters” section in the GigaVUE Fabric Management Guide for details on setting up all aspects of a cluster.

Note: If you rename a cluster using the GigaVUE-OS CLI, the rename is not reflected in GigaVUE-FM. You cannot rename a cluster from GigaVUE-FM. Refer to the GigaVUE Fabric Management Guide for details.

The easiest way to configure a cluster is with the config jump-start script described in the Hardware Installation Guide. This script walks you through the configuration of the essential commands required to create a cluster, such as the Cluster ID, Cluster Name, and Cluster Management IP Address (a virtual IP address used to access the leader, regardless of which physical node is currently performing that role).
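
For reference, a minimal manual configuration of those same essentials, using the arguments described in the table below, might look like the following sketch. The ID, name, interface, and VIP values are placeholders; substitute values appropriate for your deployment.

(config) # cluster id 100
(config) # cluster name cluster-100
(config) # cluster interface eth1
(config) # cluster leader address vip 192.168.1.25 /24
(config) # cluster enable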

The cluster command has the following syntax:

cluster
   enable
   id <cluster ID>
   interface <interface> <IPv4 | IPv6>

   leader
      address
         primary ip <cluster leader IP> [port <leader port number>]
         secondary ip <cluster leader IP> [port <leader port number>]
         vip <cluster leader vip> <netmask | mask length>
      auto-discovery
      connect timeout <seconds>
      interface <interface>
      preference <1-100>
      yield
   name <cluster name>
   port <cluster port number>
   reload [box-id <box ID>] | [force] | [node-id <node ID>]
   reload sequential
   remove <node ID>
   shared-secret <shared secret>
   shutdown
   startup-time <cluster startup time (secs)>

The following table describes the arguments for the cluster command:

Argument

Description

enable

Enables cluster support for the node as follows:

If the currently specified cluster ID does not match an existing cluster, creates a new cluster with this node becoming the leader.
If the currently specified cluster ID matches an existing cluster, the node joins the existing cluster.

For example:

(config) # cluster enable

To disable cluster support for the node, meaning that the node will leave the cluster, use the following:

(config) # no cluster enable

id <cluster ID>

Specifies the cluster ID for the node. When joining an existing cluster, configure the cluster ID for the node to match the existing cluster’s ID.

The cluster ID can contain up to 32 alphanumeric characters and can include the hyphen (-) special character.

For example:

(config) # cluster id 100

interface <interface> <IPv4 | IPv6>

Specifies the interface for the cluster. The interface can be eth0 (the Management port), eth1 (the dedicated cluster Management port on GigaVUE‑HC1, GigaVUE‑HC2, and GigaVUE‑HC3), or inband.

For example:

(config) # cluster interface eth1

Note:  All nodes in a GigaVUE HC Series cluster must use the same interface.

Only the eth0 interface is supported for Layer 3 out-of-band manual discovery.

Specify whether the devices in the cluster communicate with each other over IPv4 or IPv6. The default is IPv4; if no protocol is specified, IPv4 is used.

Specifying the protocol establishes a server-client relationship between the leader and the member nodes of the cluster and checks whether all devices in the cluster are reachable and whether there are any firewall or routing issues.

Use this option to change the communication protocol version from IPv4 to IPv6 and vice versa.
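
For example, to have the nodes in the cluster communicate over IPv6, append the protocol keyword to the interface (a sketch assuming the <IPv4 | IPv6> keyword shown in the syntax above):

(config) # cluster interface eth0 ipv6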

leader
   address
      primary ip <cluster leader IP>
         [port <leader port number>]
      secondary ip <cluster leader IP>
         [port <leader port number>]
      vip <cluster leader vip> <netmask | mask length>
   auto-discovery
   connect timeout <seconds>
   interface <interface>
   preference <1-100>
   yield

Sets options relating to the leader in the cluster. The leader role on the GigaVUE HC Series is not statically assigned to a single node. Instead, another node in the cluster can take on the leader role if the situation requires it (for example, if both the leader and the current standby nodes go down). When a new node becomes the leader, it takes ownership of the virtual IP address used for leader access to the cluster.

Use the leader argument to set the following options:

address primary ip—Specifies the IP address used by the leader in the cluster to allow nodes on a different subnet to manually discover the cluster leader. This is the address used to join the cluster.

For example:

(config) # cluster leader address primary ip 192.168.1.52 port 60102

address secondary ip—Specifies the IP address used by the standby node in the cluster to allow nodes on a different subnet to manually discover the standby or the potential leader of the cluster.

For example:

(config) # cluster leader address secondary ip 192.168.1.54 port 60102

address vip—Specifies the virtual IP address and netmask or mask length used by the node in the cluster performing the leader role. This is the address you use to access the cluster. Only an IPv4 address is supported for the VIP; IPv4 is used for communication between the nodes in a cluster.

For example:

(config) # cluster leader address vip 192.168.1.25 /24

auto-discovery—Enables auto-discovery of the cluster leader. By default, auto-discovery is enabled.

For example:

(config) # cluster leader auto-discovery

To allow nodes on a different subnet to manually discover the cluster, set auto-discovery to no.

For example:

(config) # no cluster leader auto-discovery

connect timeout—Specifies the time available for a node residing on a different subnet to discover a new cluster leader. When a leader fails and the standby is promoted to the new leader, nodes on a different subnet must discover the new leader within the specified timeout. The default is 15 seconds. Valid values range from 10 to 120 seconds.

For example:

(config) # cluster leader connect timeout 30

This parameter applies to nodes on a different subnet, allowing them to join the cluster.

interface—Specifies the ethx interface to be used for cluster management traffic for the virtual IP. The valid values are eth0 and eth1.

Note:  Clustering is not supported on eth2 interfaces.

For example:

(config) # cluster leader interface eth1

preference—Specifies how likely a node is to claim the leader role during the leader contention process (for example, across a cluster reload). Higher values are more likely to claim the leader role; lower values are less likely.

The preference can be set to a value between 1 and 100. Set higher preference values for nodes with more processing power.

Use settings from 10 to 100 for leader, standby, and normal roles. Use preference settings from 1 to 9 for normal nodes that are excluded from taking the leader or standby role.

Starting in software version 4.5, the preference cannot be set to 0. A node with a preference of 0 in an earlier software version will be changed to 1 after an upgrade to 4.5 or higher.

For example:

(config) # cluster leader preference 80
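
To keep a node from ever taking the leader or standby role, you could assign it a preference in the 1 to 9 range described above (the value 5 here is only illustrative):

(config) # cluster leader preference 5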

yield—Yields the current leader role to the node performing the standby role. If you are not sure which node is currently performing the standby role, use show cluster global brief to see the list of all the nodes in the cluster, including their current role.

For example:

(config) # cluster leader yield

name <cluster name>

Specifies the cluster name. This is the cluster-level equivalent of a hostname. It must match for all nodes in a cluster.

The cluster name can contain up to 64 alphanumeric characters and can include the hyphen (-) special character.

For example:

(config) # cluster name cluster-100

port <port number>

Specifies the service port number used for the cluster. The port specified must match for all nodes in the cluster.

The range of numeric values for the port is from 1025 to 65535.

For example:

(config) # cluster port 60102

reload
   box-id <box ID>
   force
   node-id <node ID>

Reloads/reboots either the entire cluster or a specified node in the cluster, as follows:

Reboot the entire cluster with cluster reload.
Reload a specified node by specifying either its box ID or its node ID. You can see a list of these values for all nodes in the cluster with the show cluster global brief command.
Use the force argument to force an immediate reboot.

For example:

(config) # cluster reload box-id 14

reload sequential

Reloads all the nodes in a cluster in sequential order.

This command may take longer than the cluster reload command because, when you run cluster reload sequential, the leader waits for each node in the cluster to reload and rejoin the cluster.

Note:  It is recommended that you use this command, instead of the cluster reload command, when you want to reload the nodes in an Inband cluster.

For example:

(config) # cluster reload sequential

remove <node ID>

Removes the specified node from the cluster using the node ID. The remove argument can only be used when logged in to the leader, either directly or through the VIP address.

For example:

(config) # cluster remove 20

shared-secret <shared secret>

Specifies the shared secret used for message authentication between all nodes in the cluster. The secret must match across all nodes.

The shared secret can be from 16 to 64 alphanumeric characters and can include special characters, such as !, @, #, $, %, ^, &, *, (, ), _, and +. The default value is the following string:

1234567890123456

For example:

(config) # cluster shared-secret MyShared1234567890

shutdown

Puts all nodes in the cluster in a down state (similar to reload halt). The shutdown argument can only be used when logged in to the leader, either directly or through the VIP address.

For example:

(config) # cluster shutdown

startup-time <cluster startup time (secs)>

Specifies the maximum number of seconds allowed for cluster startup.

The range of numeric values for the startup time is from 0 to 2147483647 seconds. The default is 180 seconds.

For example:

(config) # cluster startup-time 360

Related Commands

The following table summarizes other commands related to the cluster command:

Task

Command

Displays cluster information for a specified box.

# show cluster box-id 1

Displays global cluster configuration state.

# show cluster configured

Displays global cluster run state.

# show cluster global

Displays global cluster run state in table format.

Use this CLI command on the leader, standby, or normal node to display the maximum (Max) and Used cost units across a cluster.

# show cluster global brief

Displays cluster history log.

# show cluster history

Displays cluster history log for a specified box.

# show cluster history box-id 1

Displays local cluster run state.

# show cluster local

Displays error status of local node.

# show cluster local error-status

Displays run state information about the leader.

# show cluster leader

Displays information about a node.

# show cluster node 1

Displays run state information about the standby node.

# show cluster standby

Leaves the cluster.

(config) # no cluster enable

Resets cluster ID to the default.

(config) # no cluster id

Resets interface to the default for cluster service.

(config) # no cluster interface

Resets the cluster leader primary IP address to the default.

(config) # no cluster leader address primary ip

Resets the cluster leader secondary IP address to the default.

(config) # no cluster leader address secondary ip

Resets the cluster leader virtual IP address (VIP) to the default.

(config) # no cluster leader address vip

Disables cluster leader auto-discovery.

(config) # no cluster leader auto-discovery

Resets cluster leader interface to the default.

(config) # no cluster leader interface

Resets the cluster name to the default.

(config) # no cluster name

Resets the cluster service port to the default.

(config) # no cluster port

Does not authenticate messages.

(config) # no cluster shared-secret

Resets cluster startup time to the default.

(config) # no cluster startup-time