Basic Configurations

While Red Hat Enterprise Linux AS can be configured in a variety of ways, the configurations can be broken into two major categories:

High-Availability Clusters Using Red Hat Cluster Manager

High-availability clusters based on Red Hat Cluster Manager use two Linux servers, or nodes, and a shared storage device to enhance the availability of key services on the network. Each key service in the cluster is assigned its own virtual server IP address (VIP). The VIP address, or floating IP, is an IP address distinct from either node's normal IP address and is associated with the service rather than with any particular machine in the cluster. If a monitored service on one of the nodes fails, that node is removed from the cluster and the remaining server starts the appropriate services, keeping their floating IP addresses and causing minimal disruption to end users. This procedure is called failover.
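To illustrate the mechanism, the following sketch shows roughly what happens when a floating IP is brought up on the surviving node during failover. The interface name and addresses are hypothetical, and in practice these steps are performed automatically by the cluster software; the sketch assumes the iproute2 ip and iputils arping utilities are available.

    # Hypothetical example: bring the service's floating IP (10.0.0.100)
    # up on the surviving node's eth0 interface.
    ip addr add 10.0.0.100/24 dev eth0

    # Send unsolicited (gratuitous) ARP so clients and switches learn
    # that the floating IP now answers at this node's MAC address.
    arping -U -c 3 -I eth0 10.0.0.100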

Each node in a Red Hat Cluster Manager high-availability cluster must have access to a shared storage device for two reasons:

Having access to the same data source helps Red Hat Cluster Manager handle failover more effectively, because the services newly activated on the functional node have access to exactly the same data used by the failed node. However, to protect the integrity of data on shared devices, services within a high-availability cluster are allowed to run on only one node at any given time.

Red Hat Cluster Manager's use of shared storage also gives administrators flexibility in how they use each node in the cluster. For example, administrators can run different services on each server (a configuration known as active-active) or run all services on one node while the other stands idle (a configuration known as hot-standby).
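As an illustration of the difference, a hypothetical service layout for the two configurations might look like the following. The format is illustrative only and is not Red Hat Cluster Manager's actual configuration syntax; the service names, node names, and addresses are assumptions.

    # Active-active: each service prefers a different node, so both
    # machines do useful work during normal operation.
    service httpd  preferred-node=node1  floating-ip=10.0.0.100
    service nfs    preferred-node=node2  floating-ip=10.0.0.101

    # Hot-standby: every service prefers node1; node2 runs them
    # only after node1 fails.
    service httpd  preferred-node=node1  floating-ip=10.0.0.100
    service nfs    preferred-node=node1  floating-ip=10.0.0.101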

The shared storage device in a Red Hat Cluster Manager cluster also enables each node to verify the health of the other by regularly updating status information on mutually accessible quorum disk partitions. [1] If the quorum partition is not updated properly by a member of the cluster, the other node can verify the health of that member by pinging it over a heartbeat channel. Heartbeat channels can be configured over one or more Ethernet interfaces, over a serial connection, or over both types of connection concurrently.
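In outline, the quorum mechanism amounts to each node periodically writing a timestamped status block to its own area of the shared raw device and reading its peer's block. The sketch below conveys the idea with dd; the device name, block offsets, and message format are chosen purely for illustration and do not reflect the actual on-disk layout used by Red Hat Cluster Manager.

    # Illustrative only: node1 writes its status to block 0 of the
    # shared raw device...
    echo "node1 alive $(date +%s)" | dd of=/dev/raw/raw1 bs=512 count=1 seek=0

    # ...and reads node2's status from block 1 to check its health.
    dd if=/dev/raw/raw1 bs=512 count=1 skip=1 2>/dev/null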

For more information about configuring Red Hat Cluster Manager clusters, please see the accompanying manual titled Red Hat Cluster Manager Installation and Administration Guide.

Load-Balancing Clusters Using Linux Virtual Servers

To the outside world, an LVS cluster appears as a single server, but in reality a user accessing the cluster from the World Wide Web is reaching a group of servers behind a pair of redundant LVS routers.

An LVS cluster consists of at least two layers. The first layer is composed of a pair of similarly configured Linux machines, or nodes. One of these nodes acts as an LVS router, directing requests from the Internet to the second layer, a pool of servers called real servers. The real servers provide the critical services to the end user while the LVS router balances the load to these servers.
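To make the two layers concrete, the following commands sketch how an LVS router might be configured with the ipvsadm utility to balance HTTP traffic across two real servers. The addresses, the round-robin scheduler, and the NAT forwarding method are assumptions chosen for illustration.

    # Define a virtual HTTP service on the cluster's public VIP,
    # balanced with round-robin scheduling.
    ipvsadm -A -t 192.168.1.100:80 -s rr

    # Register two real servers behind the router, using NAT forwarding.
    ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -m
    ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -m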

For a detailed overview of LVS clustering, see Chapter 6.

Notes

[1]

Quorum partitions are small raw devices used by each node in Red Hat Cluster Manager to check the health of the other node. See Red Hat Cluster Manager Installation and Administration Guide for more details.