About Cluster Roles

Communication with a GigaVUE‑OS cluster is accomplished using a leader (formerly master) virtual IP address assigned to the cluster as a whole. Physically, the virtual IP address resolves to only a single leader at any one time. However, the leader role is not statically assigned to a single node. Instead, any node in the cluster (except GigaVUE TA Series nodes and nodes residing on a different management subnet) can take on the leader role if the situation requires it (for example, if both the leader and the current standby nodes go down).

When a new node becomes the leader, it takes ownership of the virtual IP address used for leader access to the cluster. Because all of the nodes in the cluster share the same database, this transition takes place seamlessly, ensuring that the cluster survives the failure of one or more nodes.

The virtual IP address is assigned to the primary control card in the configuration jump-start wizard:

Step xx: Cluster mgmt virtual IP address and masklen? [0.0.0.0/0]
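For example, assuming a hypothetical management subnet of 10.115.0.0/24, the response might look like the following (the address and mask length shown are illustrative only):

Step xx: Cluster mgmt virtual IP address and masklen? [0.0.0.0/0] 10.115.0.100/24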

Each node in the cluster is performing one of the following roles at any given time:

■   Leader – This node has possession of the cluster’s virtual IP (VIP) address and takes responsibility for dispatching commands to the entire cluster.
■   Standby – This node takes over the leader role in the event of a failure on the node currently holding the role.
■   Normal – These nodes perform normal GigaVUE operations with minimal cluster responsibilities. However, they, too, have a complete copy of the cluster’s database. When the leader fails and the standby is promoted to be the new leader, an election process takes place automatically among the normal nodes, ensuring that a new standby is found.

Setting a Node’s Priority in the Leader Election Process

Clusters of GigaVUE‑OS nodes perform a leader election in the following situations:

■   Cluster reload
■   Leader or standby node failure

In either of these cases, a new node is selected to perform the necessary role(s). You can set the cluster leader preference for each individual node in the cluster to specify how likely the node is to claim the leader or standby role. Nodes with higher values are more likely to claim the leader/standby role; nodes with lower values are less likely.

Use preference settings from 10 to 100 for leader, standby, and normal roles. Use preference settings from 1 to 9 for normal nodes that are excluded from taking the leader or standby role.

Starting with software version 4.5, the preference cannot be set to zero. A node with a preference of 0 in an earlier software version is changed to 1 after upgrading to 4.5 or higher.
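For example, assuming the preference is set from the node’s CLI in configuration mode using the cluster leader preference command (verify the exact syntax in your GigaVUE‑OS CLI reference; the values shown are illustrative):

On a node that should be eligible for the leader or standby role:

(config) # cluster leader preference 90

On a node that should remain in the normal role only:

(config) # cluster leader preference 5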

GigaVUE‑OS sets defaults for the preference argument based on the type of control card in use. If you choose to change a node’s preference setting, it is generally preferable to set higher priorities for nodes with more processing power. GigaVUE‑HC3 and GigaVUE‑HC2 nodes provide the most processing power, followed by GigaVUE‑HC1 nodes, followed by GigaVUE TA Series nodes.

Note: All GigaVUE TA Series nodes, including white box nodes, are automatically added to a cluster with the preference set to 1 because a Traffic Aggregator can never take the role of, or be eligible to be, the leader.

Note: The Clustering Daemon (Clusterd) restarts if the "no card slot 1/4 down force" command is executed after performing a cluster reload.

In addition, in the event of a cluster reboot, any GigaVUE TA Series node in the cluster may show as standby for a couple of minutes while the cluster is coming up from the reboot cycle. However, once the cluster is up and running, none of the GigaVUE TA Series nodes can be a standby.

About the “Unknown” Cluster Role

In addition to the standard roles described in About Cluster Roles, the system may occasionally report a node operating with an unknown cluster role. A node with an unknown cluster role is no longer being actively managed by the leader.

When a node that was formerly part of a cluster transitions to an unknown cluster role, its database will typically be out of synchronization with the leader’s. You can restore the node to the cluster by using the reset factory keep-all-config command, rebooting the node, and then running configuration jump-start so that it rejoins the cluster with a clean local database.
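As a sketch of that recovery sequence on the affected node (prompts are illustrative, and reload is shown here as one way to perform the reboot; use your normal reboot method if it differs):

(config) # reset factory keep-all-config
(config) # reload

After the node comes back up, run the configuration wizard and supply the cluster parameters so the node rejoins with a clean local database:

(config) # configuration jump-start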