PRTG Manual: Cluster
A cluster consists of two or more PRTG core servers that work together to form a high-availability monitoring system. With all PRTG on premises licenses, you can have a cluster with two PRTG core servers.
This feature is not available in PRTG hosted by Paessler.
A cluster consists of at least two cluster nodes: one master node and up to four failover nodes. Each cluster node is a full PRTG core server installation that can perform all of the monitoring and alerting on its own.
Cluster nodes are connected to each other via two TCP/IP connections that communicate in both directions. A single cluster node only needs to connect to one other cluster node to integrate itself into the cluster.
During normal operation, you configure devices, sensors, and all other monitoring objects on the master node. The master node automatically distributes the configuration among all other cluster nodes in real time.
All devices that you create on the cluster probe are monitored by all cluster nodes, so data from different perspectives is available and monitoring always continues, even if one of the cluster nodes fails. If the master node fails, one of the failover nodes takes over and controls the cluster until the master node is available again. This ensures fail-safe monitoring and continuous data collection.
A cluster works in active-active mode. This means that all cluster nodes permanently monitor the network according to the common configuration received from the master node. Each cluster node stores the results in its own database, so the storage of monitoring results is also distributed across the cluster. You need to install PRTG updates on one cluster node only; this cluster node automatically deploys the new version to all other cluster nodes.
If one or more cluster nodes detect downtime or threshold breaches, only one installation, either the primary master node or the failover master node, sends out notifications (for example, via email, SMS text message, or push message). This way, you are not flooded with notifications from all cluster nodes when failures occur.
While a cluster node is down, it cannot collect monitoring data, so the data of this cluster node shows gaps for the duration of the outage. However, monitoring data for this time span is still available on the other cluster nodes. There is no functionality to fill these gaps with the data of other cluster nodes.
Because the monitoring configuration is managed centrally, you can only change it on the master node. However, you can review the monitoring results if you log in to the PRTG web interface of any of the failover nodes in read-only mode.
If you use remote probes in a cluster, each remote probe connects to each cluster node and sends the data to all cluster nodes. You can define the Cluster Connectivity of each remote probe in its settings, section Probe Administrative Settings.
As a consequence of this concept, monitoring traffic and load on the network are multiplied by the number of cluster nodes in use. Moreover, the devices on the cluster probe are monitored by all cluster nodes, so the monitoring load on these devices increases as well.
This is not a problem for most usage scenarios, but consider the detailed system requirements. As a rule of thumb, divide the number of sensors that you can use by two for each additional cluster node.
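The rule of thumb above can be expressed as simple arithmetic. This is a minimal illustrative sketch, not PRTG code; the function names and the baseline sensor count are assumptions for the example, and the manual's guidance (halving per additional node, traffic scaling with node count) is the only input.

```python
def max_sensors(single_node_limit: int, cluster_nodes: int) -> int:
    """Rule of thumb from the manual: each additional cluster node
    halves the number of sensors that you can use.
    (single_node_limit is a hypothetical baseline for one server.)"""
    additional_nodes = cluster_nodes - 1
    return single_node_limit // (2 ** additional_nodes)

def monitoring_traffic_factor(cluster_nodes: int) -> int:
    """All cluster nodes monitor every device on the cluster probe,
    so monitoring traffic and device load scale with the node count."""
    return cluster_nodes

# A two-node cluster with an assumed single-server baseline of
# 10,000 sensors supports about 5,000 sensors, which matches the
# supported maximum of 5,000 sensors per cluster.
print(max_sensors(10000, 2))          # → 5000
print(monitoring_traffic_factor(2))   # → 2
```

Note that this is only a sizing heuristic; the actual capacity depends on sensor types, scanning intervals, and hardware, as described in the system requirements.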
More than 5,000 sensors per cluster are not officially supported. Contact the Paessler Presales team if you need to exceed this limit. For possible alternatives to a cluster, see the Knowledge Base: Are there alternatives to the cluster when running a large installation?
For more information, see section Failover Cluster Configuration.
Knowledge Base

What is the clustering feature in PRTG?
In which web interface do I log in if the master node fails?
What are the bandwidth requirements for running a cluster?
Are there alternatives to the cluster when running a large installation?
How to connect PRTG through a firewall in 4 steps
How to set up a PRTG cluster