High Availability configurations are possible for each of the following critical RTView components: Historian, Data Server, Display Server, and Servlets. Redundant components can be set up to provide backup failover capability for each component or for any individual component considered at risk.
Backup components may be run in hot or warm standby mode. In hot standby mode, all global variable definitions, as well as cache and alert definitions, are loaded and activated at startup. In warm standby mode, none of these actions is performed, thereby avoiding the overhead of maintaining the Alert and Cache data sources until the backup component becomes the primary.
Note: User interactions with Alerts, such as Alert Acknowledgment, are not maintained when a backup component becomes the primary. This limitation will be resolved in an upcoming release.
This section includes:
§ “High Availability Historian”
§ “High Availability Deployments”
The Historian is an application you can configure to store time-stamped data, derived from either raw real-time data or aggregated/transformed data, in a relational database of choice. To provide high availability, it is possible to designate backup Historians to support a failover event. See “Configuring Failover on the Historian” for details. This high availability solution is intended to be used with a database system that also supports redundancy (through mirroring, clustering, or other techniques) so that any Historian in the redundant group can update the same virtual database.
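As an illustration, if the history database were SQL Server with database mirroring, each Historian in the redundant group could use a connection URL like the following sketch. The host and database names are hypothetical; failoverPartner is the Microsoft JDBC driver's mirroring property, which lets the driver fail over to the mirror if the principal database becomes unavailable:
jdbc:sqlserver://dbhost1:1433;databaseName=RTVHISTORY;failoverPartner=dbhost2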
Details for other high availability configurations depend on the type of deployment selected. The following sections explore five deployment options, contrasting basic configurations with high availability configurations. Where applicable, examples show how to configure high availability deployments involving web servers and application servers. The examples below use Apache and Tomcat, although other web servers and application servers can be used.
§ “Data Server via Direct Socket”
§ “Display Server and Data Server”
This deployment is chosen when a Java application for viewing RTView dashboards is preferred over a browser-based interface. Direct socket connections are used when there are no firewall issues between clients and the Data Server component.
In this high availability configuration, as shown below, if the Data Server connection is lost or unavailable, the Display Viewer Application will switch to a backup server. All backup servers should have access to the same data sources and configuration files.
Primary and backup Data Servers can be configured in the Display Builder, by selecting the Data Server tab in the Application Options dialog, or with the Configuration Utility. See the “High Availability Data Servers” section for details.
This deployment is chosen when a Java application for viewing RTView dashboards is preferred over a browser-based interface. HTTP connections are used when firewall issues exist between clients and the Data Server component.
In this high availability configuration, as shown below, if the primary Data Server connection (http://appserver1:8080/rtvdata) is lost or unavailable, the Display Viewer Application will switch to the backup Data Server connection. Failover to the backup connection will also occur if the primary rtvdata servlet loses its connection to the Data Server (MyServerHost1:3278).
Primary and backup Data Servers can be configured in the Display Builder, by selecting the Data Server tab in the Application Options dialog, or with the Configuration Utility. See the “High Availability Data Servers” section for details.
The nodes labeled AppServer1 and MyServer1 in the diagram could be the same physical node in an actual deployment; likewise for AppServer2 and MyServer2.
This deployment is chosen when a browser deployment is desired, data updates must be displayed faster than once a second, and Java can be installed on the clients.
In this high availability configuration, as shown below, a single URL is configured for the applet's connection to the rtvdata servlet. This is because applet security restricts the applet to making connections only to the host named in the URL from which it was loaded. The URL specifies a web server that is also a load balancer. A load balancer allows a set of redundant servlets to be accessed through a single URL, directing each request to the least busy servlet. If a servlet fails, the load balancer switches any existing client connections to a different servlet and directs no new requests to the failed servlet.
Note in the diagram below that each rtvdata servlet has a primary and a backup data server connection. This is useful when the servlet is available but its primary data server has failed. Note also that both servlets have the same primary data server. This ensures that, during normal operation, a client sees the same data server state regardless of which servlet the load balancer chooses.
Primary and backup Data Servers can be configured in the Display Builder, by selecting the Data Server tab in the Application Options dialog, or with the Configuration Utility. See the “High Availability Data Servers” section for details.
High Availability Configuration with Apache and Tomcat
The diagram and example that follow detail how to configure this high-availability deployment with Tomcat 6.0.18 and Apache 2.2.9:
1. On BackendHost1, in Tomcat's server.xml file, modify the Engine entry by adding a jvmRoute:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
2. On BackendHost2:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
3. Deploy the rtvdata servlet to Tomcat on BackendHost1 and BackendHost2.
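On a default Tomcat installation, deploying the servlet can be as simple as copying its war file into the auto-deploy directory on each host; a minimal sketch, assuming the archive is named rtvdata.war:
# on BackendHost1 and BackendHost2: copy the servlet archive into Tomcat's auto-deploy directory
cp rtvdata.war $CATALINA_HOME/webapps/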
4. On the host running the Apache webserver (load-balancer), make the following changes to the httpd.conf file to enable load balancing:
§ Uncomment the LoadModule lines for all mod_proxy modules: proxy_module, proxy_ajp_module, proxy_balancer_module, proxy_connect_module, proxy_ftp_module, proxy_http_module.
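The uncommented lines typically look like the following sketch in an Apache 2.2 httpd.conf; the module paths assume the default modules/ layout of the Apache distribution:
# load the mod_proxy modules required for AJP load balancing
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so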
§ Define a proxy for the balancer group of rtvdata servlets. The name of this proxy (rtvdata) is the name by which clients will access the servlets.
# proxy for rtvdata load balancer group
ProxyPass /rtvdata balancer://rtvdata_group stickysession=JSESSIONID
§ Define the workers (servlets) for the rtvdata balancer group. The default Tomcat AJP connector on port 8009 is used. Note that the value of "route" for each BalancerMember must match the jvmRoute in the corresponding Tomcat instance's server.xml, as described above.
# list of actual workers for rtvdata_group
# (route must match jvmRoute in Engine entry in server.xml
# for the corresponding tomcat instance!)
<Proxy balancer://rtvdata_group>
BalancerMember ajp://BackendHost1:8009/rtvdata route=tomcat1
BalancerMember ajp://BackendHost2:8009/rtvdata route=tomcat2
</Proxy>
§ For monitoring, enable the Apache balancer-manager webapp. This webapp can be viewed in a browser via http://localhost/balancer-manager.
# enable balancer-manager
<Location /balancer-manager>
SetHandler balancer-manager
</Location>
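Because the balancer-manager allows runtime changes to the balancer, it is wise to restrict access to it in production. A minimal sketch using Apache 2.2 access-control directives; the allowed address is an assumption to adjust for your site:
# restrict balancer-manager access to the local host (adjust as needed)
<Location /balancer-manager>
SetHandler balancer-manager
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>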
This deployment is used when a browser interface is desired and nothing need be installed on the client except a browser (plus the Flash plug-in, if Fx objects are used).
This high availability configuration, as shown below, is well suited to smaller deployments, where the Display Server both serves client display requests and acts as the Data Server. See the “High Availability Display Servers” section for configuration details.
Note: If there are fifty or more potential concurrent users, heavy use of cache or alert definitions, or large amounts of data being aggregated before being pushed across a network for display, it is better to opt for a high availability deployment where the Display Server is used in conjunction with the Data Server. See “Display Server and Data Server” for more information.
Note in the diagram below that the load is distributed to multiple application servers running the rtvdisplay servlet. Regardless of which servlet receives a request, each servlet points to the same primary Display Server. This preserves the state of the data, which may include Cache and Alert states.
High Availability Configuration with Apache and Tomcat
The diagram and example that follow detail how to configure this high-availability deployment with Tomcat 6.0.18 and Apache 2.2.9.
1. On BackendHost1, in Tomcat's server.xml file, modify the Engine entry by adding a jvmRoute attribute:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
2. On BackendHost2:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
3. On both BackendHost1 and BackendHost2, uncomment the following line in Tomcat's server.xml file:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
This enables Tomcat clustering, so client session information is shared between the redundant rtvdisplay servlets. If the rtvdisplay login feature is enabled, clients will not need to log in again when a servlet failover occurs. See http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html for more information. A sketch of the combined server.xml result appears below.
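For orientation, here is a minimal sketch of how steps 1 and 3 combine inside server.xml on BackendHost1; nested elements other than the Cluster line are elided:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
<!-- Realm, Host, and other nested elements omitted -->
</Engine>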
4. In rtvdisplay/web.xml, uncomment the following line so that the rtvdisplay servlet will use clustering (a sketch of its placement follows):
<distributable/>
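In a web.xml of this era, <distributable/> is a direct child of <web-app>, placed near the top of the file. A minimal sketch of the relevant portion; the display-name shown is an assumption, and other entries are elided:
<web-app>
<display-name>rtvdisplay</display-name>
<distributable/>
<!-- servlet, servlet-mapping, and other entries omitted -->
</web-app>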
5. Use make_war.bat to rebuild rtvdisplay.war, then deploy it on BackendHost1 and BackendHost2.
6. On the host running the Apache webserver (load-balancer), make the following changes to the httpd.conf file to enable load balancing:
§ Uncomment the LoadModule lines for all mod_proxy modules: proxy_module, proxy_ajp_module, proxy_balancer_module, proxy_connect_module, proxy_ftp_module, proxy_http_module.
§ Define a proxy for the balancer group of rtvdisplay servlets. The name of this proxy (rtvdisplay) is the name by which clients will access the servlets.
# proxy for rtvdisplay load balancer
ProxyPass /rtvdisplay balancer://rtvdisplay_cluster stickysession=JSESSIONID
Note: Use the same name (rtvdisplay) as the actual workers; otherwise, the JSESSIONID path may not be found by the browser.
§ Define the workers (servlets) for the rtvdisplay balancer group. The default Tomcat AJP connector on port 8009 is used. Note that the value of "route" for each BalancerMember must match the jvmRoute in the corresponding Tomcat instance's server.xml, as described above.
# list of actual workers for rtvdisplay_cluster
# (route must match jvmRoute in Engine entry in server.xml
# for the corresponding tomcat instance!)
<Proxy balancer://rtvdisplay_cluster>
BalancerMember ajp://BackendHost1:8009/rtvdisplay route=tomcat1
BalancerMember ajp://BackendHost2:8009/rtvdisplay route=tomcat2
</Proxy>
7. For monitoring, enable the Apache balancer-manager webapp. This webapp can be viewed in a browser via http://localhost/balancer-manager.
# enable balancer-manager
<Location /balancer-manager>
SetHandler balancer-manager
</Location>
Display Server and Data Server
This deployment is used when a browser interface is desired and nothing need be installed on the client except a browser (plus the Flash plug-in, if Fx objects are used; see “Fx Graphs” for more information).
This high availability configuration, as shown below, is ideal for larger deployments with fifty or more potential concurrent users, heavy use of cache or alert definitions, and/or large amounts of data being aggregated before being pushed across a network for display. The Display Server serves client display requests, while the Data Server(s) provide data access, data aggregation, data caching, and alert rule execution. See the “High Availability Display Servers” and “High Availability Data Servers” sections for configuration details.
Note in the diagram below that the load is distributed to multiple application servers running the rtvdisplay servlet. Display Servers are stateless when used strictly to handle client requests and serve real-time dashboards, so the load can be balanced across multiple Display Servers to handle large numbers of concurrent client users. Notice that each load-balanced Display Server points to the same Data Server; this preserves the state of the data, which may include Cache and Alert states.
High Availability Configuration with Apache and Tomcat
The diagram and example that follow detail how to configure this high-availability deployment with Tomcat 6.0.18 and Apache 2.2.9.
1. On BackendHost1, in Tomcat's server.xml file, modify the Engine entry by adding a jvmRoute attribute:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
2. On BackendHost2:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
3. On both BackendHost1 and BackendHost2, uncomment the following line in Tomcat's server.xml file:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
This enables Tomcat clustering, so client session information is shared between the redundant rtvdisplay servlets. If the rtvdisplay login feature is enabled, clients will not need to log in again if a servlet failover occurs. See http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html for more information.
4. In rtvdisplay/web.xml, uncomment the following line so that the rtvdisplay servlet will use clustering:
<distributable/>
5. Use make_war.bat to rebuild rtvdisplay.war, then deploy it on BackendHost1 and BackendHost2.
6. On the host running the Apache webserver (load-balancer), make the following changes to the httpd.conf file to enable load balancing:
§ Uncomment the LoadModule lines for all mod_proxy modules: proxy_module, proxy_ajp_module, proxy_balancer_module, proxy_connect_module, proxy_ftp_module, proxy_http_module.
§ Define a proxy for the balancer group of rtvdisplay servlets. The name of this proxy (rtvdisplay) is the name by which clients will access the servlets.
# proxy for rtvdisplay load balancer.
ProxyPass /rtvdisplay balancer://rtvdisplay_cluster stickysession=JSESSIONID
Note: Use the same name (rtvdisplay) as the actual workers; otherwise, the JSESSIONID path may not be found by the browser.
§ Define the workers (servlets) for the rtvdisplay balancer group. The default Tomcat AJP connector on port 8009 is used. Note that the value of "route" for each BalancerMember must match the jvmRoute in the corresponding Tomcat instance's server.xml, as described above.
# list of actual workers for rtvdisplay_cluster
# (route must match jvmRoute in Engine entry in server.xml
# for the corresponding tomcat instance!)
<Proxy balancer://rtvdisplay_cluster>
BalancerMember ajp://BackendHost1:8009/rtvdisplay route=tomcat1
BalancerMember ajp://BackendHost2:8009/rtvdisplay route=tomcat2
</Proxy>
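If the two Tomcat hosts are not equally provisioned, the BalancerMember lines above can optionally carry a loadfactor weight so that the stronger host receives more of the requests. A sketch with an assumed 2:1 weighting, purely for illustration:
# optional: weight the members (here BackendHost1 gets roughly twice the requests)
BalancerMember ajp://BackendHost1:8009/rtvdisplay route=tomcat1 loadfactor=2
BalancerMember ajp://BackendHost2:8009/rtvdisplay route=tomcat2 loadfactor=1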
§ For monitoring, enable the Apache balancer-manager webapp. This webapp can be viewed in a browser via http://localhost/balancer-manager.
# enable balancer-manager
<Location /balancer-manager>
SetHandler balancer-manager
</Location>