<html><head><META http-equiv="Content-Type" content="text/html; charset=iso-8859-1"><title>Apache Tomcat 7 (7.0.77) - Clustering/Session Replication HOW-TO</title><meta name="author" content="Filip Hanik"><meta name="author" content="Peter Rossbach"><style type="text/css" media="print"> | |
.noPrint {display: none;} | |
td#mainBody {width: 100%;} | |
</style><style type="text/css"> | |
code {background-color:rgb(224,255,255);padding:0 0.1em;} | |
code.attributeName, code.propertyName {background-color:transparent;} | |
table { | |
border-collapse: collapse; | |
text-align: left; | |
} | |
table *:not(table) { | |
/* Prevent border-collapsing for table child elements like <div> */ | |
border-collapse: separate; | |
} | |
th { | |
text-align: left; | |
} | |
div.codeBox pre code, code.attributeName, code.propertyName, code.noHighlight, .noHighlight code { | |
background-color: transparent; | |
} | |
div.codeBox { | |
overflow: auto; | |
margin: 1em 0; | |
} | |
div.codeBox pre { | |
margin: 0; | |
padding: 4px; | |
border: 1px solid #999; | |
border-radius: 5px; | |
background-color: #eff8ff; | |
display: table; /* To prevent <pre>s from taking the complete available width. */ | |
/* | |
When it is officially supported, use the following CSS instead of display: table | |
to prevent big <pre>s from exceeding the browser window: | |
max-width: available; | |
width: min-content; | |
*/ | |
} | |
div.codeBox pre.wrap { | |
white-space: pre-wrap; | |
} | |
table.defaultTable tr, table.detail-table tr { | |
border: 1px solid #CCC; | |
} | |
table.defaultTable tr:nth-child(even), table.detail-table tr:nth-child(even) { | |
background-color: #FAFBFF; | |
} | |
table.defaultTable tr:nth-child(odd), table.detail-table tr:nth-child(odd) { | |
background-color: #EEEFFF; | |
} | |
table.defaultTable th, table.detail-table th { | |
background-color: #88b; | |
color: #fff; | |
} | |
table.defaultTable th, table.defaultTable td, table.detail-table th, table.detail-table td { | |
padding: 5px 8px; | |
} | |
p.notice { | |
border: 1px solid rgb(255, 0, 0); | |
background-color: rgb(238, 238, 238); | |
color: rgb(0, 51, 102); | |
padding: 0.5em; | |
margin: 1em 2em 1em 1em; | |
} | |
</style></head><body bgcolor="#ffffff" text="#000000" link="#525D76" alink="#525D76" vlink="#525D76"><table border="0" width="100%" cellspacing="0"><!--PAGE HEADER--><tr><td><!--PROJECT LOGO--><a href="http://tomcat.apache.org/"><img src="./images/tomcat.gif" align="right" alt=" | |
The Apache Tomcat Servlet/JSP Container | |
" border="0"></a></td><td><h1><font face="arial,helvetica,sanserif">Apache Tomcat 7</font></h1><font face="arial,helvetica,sanserif">Version 7.0.77, Mar 28 2017</font></td><td><!--APACHE LOGO--><a href="http://www.apache.org/"><img src="./images/asf-logo.svg" align="right" alt="Apache Logo" border="0" style="width: 266px;height: 83px;"></a></td></tr></table><table border="0" width="100%" cellspacing="4"><!--HEADER SEPARATOR--><tr><td colspan="2"><hr noshade size="1"></td></tr><tr><!--LEFT SIDE NAVIGATION--><td width="20%" valign="top" nowrap class="noPrint"><p><strong>Links</strong></p><ul><li><a href="index.html">Docs Home</a></li><li><a href="http://wiki.apache.org/tomcat/FAQ">FAQ</a></li><li><a href="#comments_section">User Comments</a></li></ul><p><strong>User Guide</strong></p><ul><li><a href="introduction.html">1) Introduction</a></li><li><a href="setup.html">2) Setup</a></li><li><a href="appdev/index.html">3) First webapp</a></li><li><a href="deployer-howto.html">4) Deployer</a></li><li><a href="manager-howto.html">5) Manager</a></li><li><a href="realm-howto.html">6) Realms and AAA</a></li><li><a href="security-manager-howto.html">7) Security Manager</a></li><li><a href="jndi-resources-howto.html">8) JNDI Resources</a></li><li><a href="jndi-datasource-examples-howto.html">9) JDBC DataSources</a></li><li><a href="class-loader-howto.html">10) Classloading</a></li><li><a href="jasper-howto.html">11) JSPs</a></li><li><a href="ssl-howto.html">12) SSL/TLS</a></li><li><a href="ssi-howto.html">13) SSI</a></li><li><a href="cgi-howto.html">14) CGI</a></li><li><a href="proxy-howto.html">15) Proxy Support</a></li><li><a href="mbeans-descriptors-howto.html">16) MBeans Descriptors</a></li><li><a href="default-servlet.html">17) Default Servlet</a></li><li><a href="cluster-howto.html">18) Clustering</a></li><li><a href="balancer-howto.html">19) Load Balancer</a></li><li><a href="connectors.html">20) Connectors</a></li><li><a href="monitoring.html">21) Monitoring and Management</a></li><li><a href="logging.html">22) Logging</a></li><li><a href="apr.html">23) APR/Native</a></li><li><a href="virtual-hosting-howto.html">24) Virtual Hosting</a></li><li><a href="aio.html">25) Advanced IO</a></li><li><a href="extras.html">26) Additional Components</a></li><li><a href="maven-jars.html">27) Mavenized</a></li><li><a href="security-howto.html">28) Security Considerations</a></li><li><a href="windows-service-howto.html">29) Windows Service</a></li><li><a href="windows-auth-howto.html">30) Windows Authentication</a></li><li><a href="jdbc-pool.html">31) Tomcat's JDBC Pool</a></li><li><a href="web-socket-howto.html">32) WebSocket</a></li></ul><p><strong>Reference</strong></p><ul><li><a href="RELEASE-NOTES.txt">Release Notes</a></li><li><a href="config/index.html">Configuration</a></li><li><a href="api/index.html">Tomcat Javadocs</a></li><li><a href="servletapi/index.html">Servlet Javadocs</a></li><li><a href="jspapi/index.html">JSP 2.2 Javadocs</a></li><li><a href="elapi/index.html">EL 2.2 Javadocs</a></li><li><a href="websocketapi/index.html">WebSocket 1.1 Javadocs</a></li><li><a href="http://tomcat.apache.org/connectors-doc/">JK 1.2 Documentation</a></li></ul><p><strong>Apache Tomcat Development</strong></p><ul><li><a href="building.html">Building</a></li><li><a href="changelog.html">Changelog</a></li><li><a href="http://wiki.apache.org/tomcat/TomcatVersions">Status</a></li><li><a href="developers.html">Developers</a></li><li><a href="architecture/index.html">Architecture</a></li><li><a 
href="funcspecs/index.html">Functional Specs.</a></li><li><a href="tribes/introduction.html">Tribes</a></li></ul></td><!--RIGHT SIDE MAIN BODY--><td width="80%" valign="top" align="left" id="mainBody"><h1>Clustering/Session Replication HOW-TO</h1><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Important Note"><!--()--></a><a name="Important_Note"><strong>Important Note</strong></a></font></td></tr><tr><td><blockquote> | |
<p><b>You can also check the <a href="config/cluster.html">configuration reference documentation.</a></b> | |
</p> | |
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Table of Contents"><!--()--></a><a name="Table_of_Contents"><strong>Table of Contents</strong></a></font></td></tr><tr><td><blockquote> | |
<ul><li><a href="#For_the_impatient">For the impatient</a></li><li><a href="#Security">Security</a></li><li><a href="#Cluster_Basics">Cluster Basics</a></li><li><a href="#Overview">Overview</a></li><li><a href="#Cluster_Information">Cluster Information</a></li><li><a href="#Bind_session_after_crash_to_failover_node">Bind session after crash to failover node</a></li><li><a href="#Configuration_Example">Configuration Example</a></li><li><a href="#Cluster_Architecture">Cluster Architecture</a></li><li><a href="#How_it_Works">How it Works</a></li><li><a href="#Monitoring_your_Cluster_with_JMX">Monitoring your Cluster with JMX</a></li><li><a href="#FAQ">FAQ</a></li></ul> | |
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="For the impatient"><!--()--></a><a name="For_the_impatient"><strong>For the impatient</strong></a></font></td></tr><tr><td><blockquote> | |
<p> | |
Simply add | |
</p> | |
<div class="codeBox"><pre><code><Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/></code></pre></div> | |
<p> | |
to your <code><Engine></code> or your <code><Host></code> element to enable clustering. | |
</p> | |
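      <p>
        For orientation, a hedged sketch of where that one-liner sits in a stock <code>server.xml</code> (the Engine and Host
        attributes below are the standard defaults, shown only for context):
      </p>
      <div class="codeBox"><pre><code><Engine name="Catalina" defaultHost="localhost">
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
  <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
    <!-- ... -->
  </Host>
</Engine></code></pre></div>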
<p> | |
Using the above configuration will enable all-to-all session replication | |
using the <code>DeltaManager</code> to replicate session deltas. By all-to-all we mean that the session gets replicated to all the other | |
      nodes in the cluster. This works great for smaller clusters, but we don't recommend it for larger clusters (a lot of Tomcat nodes).
      Also, when using the delta manager it will replicate to all nodes, even nodes that don't have the application deployed.<br>
      To get around this problem, you'll want to use the BackupManager. This manager only replicates the session data to one backup
      node, and only to nodes that have the application deployed. The downside of the BackupManager is that it is not quite as battle tested as the delta manager.
</p> | |
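      <p>
        As a hedged sketch (it simply mirrors the fuller Configuration Example later in this document), switching to the
        BackupManager is a matter of nesting a different Manager element inside the Cluster:
      </p>
      <div class="codeBox"><pre><code><Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6">
  <Manager className="org.apache.catalina.ha.session.BackupManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"
           mapSendOptions="6"/>
</Cluster></code></pre></div>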
<p> | |
Here are some of the important default values: | |
</p> | |
<ol> | |
<li>Multicast address is 228.0.0.4</li> | |
        <li>Multicast port is 45564 (the port and the address together determine cluster membership).</li>
        <li>The IP address broadcast is <code>java.net.InetAddress.getLocalHost().getHostAddress()</code> (make sure you don't broadcast 127.0.0.1; this is a common error)</li>
        <li>The TCP port listening for replication messages is the first available server socket in the range <code>4000-4100</code></li>
        <li>The listener configured is <code>ClusterSessionListener</code></li>
        <li>Two interceptors are configured: <code>TcpFailureDetector</code> and <code>MessageDispatch15Interceptor</code></li>
</ol> | |
<p> | |
The following is the default cluster configuration: | |
</p> | |
<div class="codeBox"><pre><code> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" | |
channelSendOptions="8"> | |
<Manager className="org.apache.catalina.ha.session.DeltaManager" | |
expireSessionsOnShutdown="false" | |
notifyListenersOnReplication="true"/> | |
<Channel className="org.apache.catalina.tribes.group.GroupChannel"> | |
<Membership className="org.apache.catalina.tribes.membership.McastService" | |
address="228.0.0.4" | |
port="45564" | |
frequency="500" | |
dropTime="3000"/> | |
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" | |
address="auto" | |
port="4000" | |
autoBind="100" | |
selectorTimeout="5000" | |
maxThreads="6"/> | |
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter"> | |
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/> | |
</Sender> | |
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/> | |
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/> | |
</Channel> | |
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" | |
filter=""/> | |
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/> | |
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer" | |
tempDir="/tmp/war-temp/" | |
deployDir="/tmp/war-deploy/" | |
watchDir="/tmp/war-listen/" | |
watchEnabled="false"/> | |
          <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster></code></pre></div> | |
      <p>We will cover this configuration in more detail later in this document.</p>
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Security"><strong>Security</strong></a></font></td></tr><tr><td><blockquote> | |
<p>The cluster implementation is written on the basis that a secure, trusted | |
network is used for all of the cluster related network traffic. It is not safe | |
  to run a cluster on an insecure, untrusted network.</p>
<p>There are many options for providing a secure, trusted network for use by a | |
Tomcat cluster. These include:</p> | |
<ul> | |
<li>private LAN</li> | |
<li>a Virtual Private Network (VPN)</li> | |
<li>IPSEC</li> | |
</ul> | |
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Cluster Basics"><!--()--></a><a name="Cluster_Basics"><strong>Cluster Basics</strong></a></font></td></tr><tr><td><blockquote> | |
<p>To run session replication in your Tomcat 7.0 container, the following steps | |
should be completed:</p> | |
<ul> | |
<li>All your session attributes must implement <code>java.io.Serializable</code></li> | |
<li>Uncomment the <code>Cluster</code> element in server.xml</li> | |
<li>If you have defined custom cluster valves, make sure you have the <code>ReplicationValve</code> defined as well under the Cluster element in server.xml</li> | |
      <li>If your Tomcat instances are running on the same machine, make sure the <code>Receiver.port</code>
          attribute is unique for each instance. In most cases Tomcat is smart enough to resolve this on its own by autodetecting available ports in the range 4000-4100</li>
      <li>Make sure your <code>web.xml</code> has the
          <code><distributable/></code> element (see the snippet after this list)</li>
      <li>If you are using mod_jk, make sure that the jvmRoute attribute is set on your Engine, <code><Engine name="Catalina" jvmRoute="node01" ></code>,
          and that the jvmRoute attribute value matches your worker name in workers.properties (also shown after this list)</li>
      <li>Make sure that all nodes have the same time, synchronized via an NTP service!</li>
      <li>Make sure that your load balancer is configured for sticky session mode.</li>
</ul> | |
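  <p>
    A hedged sketch of the two configuration fragments mentioned in the list above. The <code><distributable/></code>
    element goes into the application's <code>web.xml</code>, and the jvmRoute goes on the <code><Engine></code> in
    <code>server.xml</code>; the worker name <code>node01</code> is an illustrative value that has to match your
    <code>workers.properties</code>.
  </p>
  <div class="codeBox"><pre><code><!-- web.xml: mark the web application as distributable -->
<distributable/>

<!-- server.xml: the jvmRoute must match the mod_jk worker name -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node01"></code></pre></div>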
<p>Load balancing can be achieved through many techniques, as seen in the | |
<a href="balancer-howto.html">Load Balancing</a> chapter.</p> | |
  <p>Note: Remember that your session state is tracked by a cookie, so your URLs must look the same from the
     outside; otherwise, a new session will be created.</p>
  <p>Note: Clustering support currently requires JDK version 1.5 or later.</p>
<p>The Cluster module uses the Tomcat JULI logging framework, so you can configure logging | |
through the regular logging.properties file. To track messages, you can enable logging on the key: <code>org.apache.catalina.tribes.MESSAGES</code></p> | |
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Overview"><strong>Overview</strong></a></font></td></tr><tr><td><blockquote> | |
<p>To enable session replication in Tomcat, three different paths can be followed to achieve the exact same thing:</p> | |
<ol> | |
<li>Using session persistence, and saving the session to a shared file system (PersistenceManager + FileStore)</li> | |
<li>Using session persistence, and saving the session to a shared database (PersistenceManager + JDBCStore)</li> | |
<li>Using in-memory-replication, using the SimpleTcpCluster that ships with Tomcat (lib/catalina-tribes.jar + lib/catalina-ha.jar)</li> | |
</ol> | |
<p>In this release of session replication, Tomcat can perform an all-to-all replication of session state using the <code>DeltaManager</code> or | |
perform backup replication to only one node using the <code>BackupManager</code>. | |
     The all-to-all replication is an algorithm that is only efficient when the clusters are small. For larger clusters, to use
     primary-secondary session replication, where the session is only stored on one backup server, simply set up the BackupManager. <br>
     Currently you can use the domain worker attribute (mod_jk > 1.2.8) to build cluster partitions
     with the potential of having a more scalable cluster solution with the DeltaManager (you'll need to configure the domain interceptor for this).
In order to keep the network traffic down in an all-to-all environment, you can split your cluster | |
into smaller groups. This can be easily achieved by using different multicast addresses for the different groups. | |
A very simple setup would look like this: | |
</p> | |
<div class="codeBox"><pre><code> DNS Round Robin | |
| | |
Load Balancer | |
/ \ | |
Cluster1 Cluster2 | |
/ \ / \ | |
Tomcat1 Tomcat2 Tomcat3 Tomcat4</code></pre></div> | |
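  <p>
    As a hedged sketch of the setup above, Cluster1 and Cluster2 can be separated simply by giving each group its own
    multicast address in the <code><Membership></code> element (the second address below, 228.0.0.5, is an illustrative
    value, not a default):
  </p>
  <div class="codeBox"><pre><code><!-- Tomcat1 and Tomcat2 (Cluster1) -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.4" port="45564"
            frequency="500" dropTime="3000"/>

<!-- Tomcat3 and Tomcat4 (Cluster2) -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.5" port="45564"
            frequency="500" dropTime="3000"/></code></pre></div>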
  <p>What is important to mention here is that session replication is only the beginning of clustering.
     Another popular concept used to implement clusters is farming, i.e., you deploy your apps to only one
     server, and the cluster will distribute the deployments across the entire cluster.
     These capabilities are provided by the FarmWarDeployer (see the cluster example in <code>server.xml</code>).</p>
  <p>The next sections go deeper into how session replication works and how to configure it.</p>
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Cluster Information"><!--()--></a><a name="Cluster_Information"><strong>Cluster Information</strong></a></font></td></tr><tr><td><blockquote> | |
<p>Membership is established using multicast heartbeats. | |
Hence, if you wish to subdivide your clusters, you can do this by | |
changing the multicast IP address or port in the <code><Membership></code> element. | |
</p> | |
<p> | |
The heartbeat contains the IP address of the Tomcat node and the TCP port that | |
Tomcat listens to for replication traffic. All data communication happens over TCP. | |
</p> | |
<p> | |
The <code>ReplicationValve</code> is used to find out when the request has been completed and initiate the | |
replication, if any. Data is only replicated if the session has changed (by calling setAttribute or removeAttribute | |
on the session). | |
</p> | |
<p> | |
    One of the most important performance considerations is synchronous versus asynchronous replication.
    In synchronous replication mode the request doesn't return until the replicated session has been
    sent over the wire and reinstantiated on all the other cluster nodes.
    Synchronous vs. asynchronous is configured using the <code>channelSendOptions</code>
    flag, which is an integer value. The default value for the <code>SimpleTcpCluster/DeltaManager</code> combo is
    8, which is asynchronous. You can read more on the <a href="tribes/introduction.html">send flag (overview)</a> or the
    <a href="http://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/tribes/Channel.html">send flag (javadoc)</a>.
    During asynchronous replication, the request is returned before the data has been replicated. Asynchronous replication yields shorter
    request times, while synchronous replication guarantees that the session has been replicated before the request returns.
</p> | |
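  <p>
    As a hedged illustration, using the send option constants from the Channel javadoc linked above (8 is asynchronous,
    while 6 combines a synchronized acknowledgement, 4, with the use of acknowledgements, 2), switching between the two
    modes is just a matter of the attribute value:
  </p>
  <div class="codeBox"><pre><code><!-- asynchronous (default for the DeltaManager): the request returns before replication completes -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">

<!-- synchronous with acknowledgement (2 + 4 = 6): the request waits until replication has been acknowledged -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6"></code></pre></div>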
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Bind session after crash to failover node"><!--()--></a><a name="Bind_session_after_crash_to_failover_node"><strong>Bind session after crash to failover node</strong></a></font></td></tr><tr><td><blockquote> | |
<p> | |
    If you are using mod_jk and not using sticky sessions, or for some reason sticky sessions don't
    work, or you are simply failing over, the session id will need to be modified, as it previously contained
    the worker id of the previous Tomcat (as defined by jvmRoute in the Engine element).
    To solve this, we will use the JvmRouteBinderValve.
</p> | |
<p> | |
    The JvmRouteBinderValve rewrites the session id to ensure that the next request will remain sticky
    (and not fall back to random nodes, since the worker is no longer available) after a fail over.
    The valve rewrites the JSESSIONID value in the cookie with the same name.
    Not having this valve in place will make it harder to ensure stickiness in case of a failure for the mod_jk module.
</p> | |
<p> | |
    By default, if no valves are configured, the JvmRouteBinderValve is added automatically.
    The cluster message listener called JvmRouteSessionIDBinderListener is also defined by default and is used to actually rewrite the
    session id on the other nodes in the cluster once a fail over has occurred.
    Remember, if you are adding your own valves or cluster listeners in server.xml, the defaults are no longer applied;
    make sure that you add all the appropriate valves and listeners as defined by the defaults (see the sketch below).
</p> | |
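  <p>
    A hedged sketch of preserving those defaults alongside your own additions. The two valves and two listeners below are
    taken from the default configuration shown earlier in this document; any custom cluster valve would be added next to them:
  </p>
  <div class="codeBox"><pre><code><Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<!-- your own cluster valves go here, alongside the defaults above -->

<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/></code></pre></div>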
<p> | |
<b>Hint:</b><br> | |
    With the <i>sessionIdAttribute</i> attribute you can change the request attribute name that holds the old session id.
    The default attribute name is <i>org.apache.catalina.ha.session.JvmRouteOrignalSessionID</i>.
</p> | |
<p> | |
    <b>Trick:</b><br>
    You can enable this mod_jk turnover mode via JMX before you take a node down!
    Set enabled to true on the JvmRouteBinderValve of all backup nodes, disable the worker in mod_jk,
    then take the node down and restart it! Then re-enable the mod_jk worker and disable the JvmRouteBinderValves again.
    With this use case only the sessions that are actually requested get migrated.
</p> | |
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Configuration Example"><!--()--></a><a name="Configuration_Example"><strong>Configuration Example</strong></a></font></td></tr><tr><td><blockquote> | |
<div class="codeBox"><pre><code> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" | |
channelSendOptions="6"> | |
<Manager className="org.apache.catalina.ha.session.BackupManager" | |
expireSessionsOnShutdown="false" | |
notifyListenersOnReplication="true" | |
mapSendOptions="6"/> | |
<!-- | |
<Manager className="org.apache.catalina.ha.session.DeltaManager" | |
expireSessionsOnShutdown="false" | |
notifyListenersOnReplication="true"/> | |
--> | |
<Channel className="org.apache.catalina.tribes.group.GroupChannel"> | |
<Membership className="org.apache.catalina.tribes.membership.McastService" | |
address="228.0.0.4" | |
port="45564" | |
frequency="500" | |
dropTime="3000"/> | |
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" | |
address="auto" | |
port="5000" | |
selectorTimeout="100" | |
maxThreads="6"/> | |
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter"> | |
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/> | |
</Sender> | |
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/> | |
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/> | |
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/> | |
</Channel> | |
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" | |
filter=".*\.gif|.*\.js|.*\.jpeg|.*\.jpg|.*\.png|.*\.htm|.*\.html|.*\.css|.*\.txt"/> | |
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer" | |
tempDir="/tmp/war-temp/" | |
deployDir="/tmp/war-deploy/" | |
watchDir="/tmp/war-listen/" | |
watchEnabled="false"/> | |
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/> | |
</Cluster></code></pre></div> | |
<p> | |
Break it down!! | |
</p> | |
<div class="codeBox"><pre><code> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" | |
channelSendOptions="6"></code></pre></div> | |
<p> | |
        This is the main element; inside it all cluster details can be configured.
        The <code>channelSendOptions</code> is the flag that is attached to each message sent by the
        SimpleTcpCluster class or any objects that are invoking the SimpleTcpCluster.send method.
        The description of the send flags is available at <a href="http://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/tribes/Channel.html">
        our javadoc site</a>.
        The <code>DeltaManager</code> sends information using the SimpleTcpCluster.send method, while the backup manager
        sends it directly over the channel itself.
        <br>For more info, please visit the <a href="config/cluster.html">reference documentation</a>.
</p> | |
<div class="codeBox"><pre><code> <Manager className="org.apache.catalina.ha.session.BackupManager" | |
expireSessionsOnShutdown="false" | |
notifyListenersOnReplication="true" | |
mapSendOptions="6"/> | |
<!-- | |
<Manager className="org.apache.catalina.ha.session.DeltaManager" | |
expireSessionsOnShutdown="false" | |
notifyListenersOnReplication="true"/> | |
--></code></pre></div> | |
<p> | |
        This is a template for the manager configuration that will be used if no manager is defined in the <Context>
        element. In Tomcat 5.x every webapp marked distributable had to use the same manager; this is no longer the case,
        as you can now define a manager class for each webapp, so you can mix managers in your cluster.
        Obviously the manager used by an application on one node has to correspond with the manager used by the same application on the other nodes.
        If no manager has been specified for the webapp, and the webapp is marked <distributable/>, Tomcat will take this manager configuration
        and create a manager instance by cloning this configuration.
        <br>For more info, please visit the <a href="config/cluster-manager.html">reference documentation</a>.
</p> | |
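      <p>
        As a hedged sketch of mixing managers, a single web application could opt into the BackupManager through its own
        <code><Context></code> element (for example in the application's META-INF/context.xml), while other distributable
        applications keep the cluster-wide template above:
      </p>
      <div class="codeBox"><pre><code><Context>
  <Manager className="org.apache.catalina.ha.session.BackupManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"
           mapSendOptions="6"/>
</Context></code></pre></div>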
<div class="codeBox"><pre><code> <Channel className="org.apache.catalina.tribes.group.GroupChannel"></code></pre></div> | |
<p> | |
The channel element is <a href="tribes/introduction.html">Tribes</a>, the group communication framework | |
used inside Tomcat. This element encapsulates everything that has to do with communication and membership logic. | |
        <br>For more info, please visit the <a href="config/cluster-channel.html">reference documentation</a>.
</p> | |
<div class="codeBox"><pre><code> <Membership className="org.apache.catalina.tribes.membership.McastService" | |
address="228.0.0.4" | |
port="45564" | |
frequency="500" | |
dropTime="3000"/></code></pre></div> | |
<p> | |
Membership is done using multicasting. Please note that Tribes also supports static memberships using the | |
<code>StaticMembershipInterceptor</code> if you want to extend your membership to points beyond multicasting. | |
The address attribute is the multicast address used and the port is the multicast port. These two together | |
create the cluster separation. If you want a QA cluster and a production cluster, the easiest config is to | |
have the QA cluster be on a separate multicast address/port combination than the production cluster.<br> | |
        The membership component broadcasts the TCP address/port of itself to the other nodes so that communication between
        nodes can be done over TCP. Please note that the address being broadcast is the one from the
        <code>Receiver.address</code> attribute.
        <br>For more info, please visit the <a href="config/cluster-membership.html">reference documentation</a>.
</p> | |
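      <p>
        A hedged sketch of static membership, assuming the <code>StaticMembershipInterceptor</code> and
        <code>StaticMember</code> classes from the Tribes/cluster-interceptor reference documentation (the host name,
        domain and uniqueId below are illustrative values):
      </p>
      <div class="codeBox"><pre><code><Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
  <Member className="org.apache.catalina.tribes.membership.StaticMember"
          host="tomcat02.example.com"
          port="4000"
          domain="staging-cluster"
          uniqueId="{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
</Interceptor></code></pre></div>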
<div class="codeBox"><pre><code> <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" | |
address="auto" | |
port="5000" | |
selectorTimeout="100" | |
maxThreads="6"/></code></pre></div> | |
<p> | |
        In Tribes the logic of sending and receiving data has been broken into two functional components. The Receiver, as the name suggests,
        is responsible for receiving messages. Since the Tribes stack is threadless (a popular improvement now adopted by other frameworks as well),
        there is a thread pool in this component that has a maxThreads and minThreads setting.<br>
        The address attribute is the host address that will be broadcast by the membership component to the other nodes.
        <br>For more info, please visit the <a href="config/cluster-receiver.html">reference documentation</a>.
</p> | |
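      <p>
        On a multi-homed machine the <code>auto</code> value may resolve to an interface you do not want advertised to the
        other nodes; a hedged sketch that pins the receiver to a specific interface instead (the address below is an
        illustrative private IP):
      </p>
      <div class="codeBox"><pre><code><Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          address="192.168.1.11"
          port="4000"
          autoBind="100"
          selectorTimeout="5000"
          maxThreads="6"/></code></pre></div>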
<div class="codeBox"><pre><code> <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter"> | |
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/> | |
</Sender></code></pre></div> | |
<p> | |
        The sender component, as the name indicates, is responsible for sending messages to other nodes.
        The sender has a shell component, the <code>ReplicationTransmitter</code>, but the real work is done in the
        sub-component, <code>Transport</code>.
        Tribes supports having a pool of senders, so that messages can be sent in parallel, and if using the NIO sender,
        you can send messages concurrently as well.<br>
        Concurrently means one message to multiple senders at the same time, and parallel means multiple messages to multiple senders
        at the same time.
        <br>For more info, please visit the <a href="config/cluster-sender.html">reference documentation</a>.
</p> | |
<div class="codeBox"><pre><code> <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/> | |
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/> | |
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/> | |
</Channel></code></pre></div> | |
<p> | |
Tribes uses a stack to send messages through. Each element in the stack is called an interceptor, and works much like the valves do | |
in the Tomcat servlet container. | |
        Using interceptors, logic can be broken into more manageable pieces of code. The interceptors configured above are:<br>
        TcpFailureDetector - verifies crashed members through TCP; if multicast packets get dropped, this interceptor protects against false positives,
        i.e. a node being marked as crashed even though it is still alive and running.<br>
        MessageDispatch15Interceptor - dispatches messages to a thread (thread pool) to send messages asynchronously.<br>
        ThroughputInterceptor - prints out simple stats on message traffic.<br>
        Please note that the order of interceptors is important. The way they are defined in server.xml is the way they are represented in the
        channel stack. Think of it as a linked list, with the head being the first interceptor and the tail the last.
        <br>For more info, please visit the <a href="config/cluster-interceptor.html">reference documentation</a>.
</p> | |
<div class="codeBox"><pre><code> <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" | |
filter=".*\.gif|.*\.js|.*\.jpeg|.*\.jpg|.*\.png|.*\.htm|.*\.html|.*\.css|.*\.txt"/></code></pre></div> | |
<p> | |
        The cluster uses valves to track requests to web applications; we've mentioned the ReplicationValve and the JvmRouteBinderValve above.
        The <Cluster> element itself is not part of the pipeline in Tomcat; instead the cluster adds the valve to its parent container.
        If the <Cluster> element is configured in the <Engine> element, the valves get added to the engine, and so on.
        <br>For more info, please visit the <a href="config/cluster-valve.html">reference documentation</a>.
</p> | |
<div class="codeBox"><pre><code> <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer" | |
tempDir="/tmp/war-temp/" | |
deployDir="/tmp/war-deploy/" | |
watchDir="/tmp/war-listen/" | |
watchEnabled="false"/></code></pre></div> | |
<p> | |
        The default Tomcat cluster supports farmed deployment, i.e. the cluster can deploy and undeploy applications on the other nodes.
        The state of this component is currently in flux but will be addressed soon. There was a change in the deployment algorithm
        between Tomcat 5.0 and 5.5, and at that point the logic of this component changed so that the deploy dir has to match the
        webapps directory.
        <br>For more info, please visit the <a href="config/cluster-deployer.html">reference documentation</a>.
</p> | |
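      <p>
        A hedged sketch of the deployer inside a Host-level cluster, assuming the standard <code>webapps</code> appBase so
        that <code>deployDir</code> points at the same directory the Host serves from (the /opt/tomcat paths are illustrative):
      </p>
      <div class="codeBox"><pre><code><Host name="localhost" appBase="webapps">
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
              tempDir="/opt/tomcat/war-temp/"
              deployDir="/opt/tomcat/webapps/"
              watchDir="/opt/tomcat/war-listen/"
              watchEnabled="true"/>
  </Cluster>
</Host></code></pre></div>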
<div class="codeBox"><pre><code> <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/> | |
</Cluster></code></pre></div> | |
<p> | |
        Since the SimpleTcpCluster itself is a sender and receiver of the Channel object, components can register themselves as listeners to
        the SimpleTcpCluster. The listener above, <code>ClusterSessionListener</code>, listens for DeltaManager replication messages
        and applies the deltas to the manager, which in turn applies them to the session.
        <br>For more info, please visit the <a href="config/cluster-listener.html">reference documentation</a>.
</p> | |
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Cluster Architecture"><!--()--></a><a name="Cluster_Architecture"><strong>Cluster Architecture</strong></a></font></td></tr><tr><td><blockquote> | |
<p><b>Component Levels:</b></p> | |
<div class="codeBox"><pre><code> Server | |
| | |
Service | |
| | |
Engine | |
| \ | |
| --- Cluster --* | |
| | |
Host | |
| | |
------ | |
/ \ | |
Cluster Context(1-N) | |
| \ | |
| -- Manager | |
| \ | |
| -- DeltaManager | |
| -- BackupManager | |
| | |
--------------------------- | |
| \ | |
Channel \ | |
----------------------------- \ | |
| \ | |
Interceptor_1 .. \ | |
| \ | |
Interceptor_N \ | |
----------------------------- \ | |
| | | \ | |
Receiver Sender Membership \ | |
-- Valve | |
| \ | |
| -- ReplicationValve | |
| -- JvmRouteBinderValve | |
| | |
-- LifecycleListener | |
| | |
-- ClusterListener | |
| \ | |
| -- ClusterSessionListener | |
| -- JvmRouteSessionIDBinderListener | |
| | |
-- Deployer | |
\ | |
-- FarmWarDeployer | |
</code></pre></div> | |
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="How it Works"><!--()--></a><a name="How_it_Works"><strong>How it Works</strong></a></font></td></tr><tr><td><blockquote> | |
  <p>To make it easy to understand how clustering works, we will take you through a series of scenarios.
     In these scenarios we use only two Tomcat instances, <code>TomcatA</code> and <code>TomcatB</code>.
     We will cover the following sequence of events:</p>
<ol> | |
<li><code>TomcatA</code> starts up</li> | |
    <li><code>TomcatB</code> starts up (waiting until TomcatA's start is complete)</li>
<li><code>TomcatA</code> receives a request, a session <code>S1</code> is created.</li> | |
<li><code>TomcatA</code> crashes</li> | |
<li><code>TomcatB</code> receives a request for session <code>S1</code></li> | |
<li><code>TomcatA</code> starts up</li> | |
<li><code>TomcatA</code> receives a request, invalidate is called on the session (<code>S1</code>)</li> | |
    <li><code>TomcatB</code> receives a request for a new session (<code>S2</code>)</li>
<li><code>TomcatA</code> The session <code>S2</code> expires due to inactivity.</li> | |
</ol> | |
  <p>Ok, now that we have a good sequence, we will take you through exactly what happens in the session replication code.</p>
<ol> | |
<li><b><code>TomcatA</code> starts up</b> | |
<p> | |
        Tomcat starts up using the standard start up sequence. When the Host object is created, a cluster object is associated with it.
        When the contexts are parsed, if the distributable element is in place in web.xml,
        Tomcat asks the Cluster class (in this case <code>SimpleTcpCluster</code>) to create a manager
        for the replicated context. So with clustering enabled and distributable set in web.xml,
        Tomcat will create a <code>DeltaManager</code> for that context instead of a <code>StandardManager</code>.
        The cluster class will start up a membership service (multicast) and a replication service (TCP unicast).
More on the architecture further down in this document. | |
</p> | |
</li> | |
<li><b><code>TomcatB</code> starts up</b> | |
<p> | |
When TomcatB starts up, it follows the same sequence as TomcatA did with one exception. | |
The cluster is started and will establish a membership (TomcatA,TomcatB). | |
TomcatB will now request the session state from a server that already exists in the cluster, | |
in this case TomcatA. TomcatA responds to the request, and before TomcatB starts listening | |
for HTTP requests, the state has been transferred from TomcatA to TomcatB. | |
In case TomcatA doesn't respond, TomcatB will time out after 60 seconds, and issue a log | |
entry. The session state gets transferred for each web application that has distributable in | |
        its web.xml. Note: To use session replication efficiently, all your Tomcat instances should be
        configured the same.
</p> | |
</li> | |
<li><B><code>TomcatA</code> receives a request, a session <code>S1</code> is created.</B> | |
<p> | |
        The request coming in to TomcatA is treated exactly the same way as without session replication.
        The action happens when the request is completed: the <code>ReplicationValve</code> will intercept
        the request before the response is returned to the user.
        At this point it finds that the session has been modified, and it uses TCP to replicate the
        session to TomcatB. Once the serialized data has been handed off to the operating system's TCP logic,
        the request returns to the user, back through the valve pipeline.
        For each request the entire session is replicated; this allows code that modifies attributes
        in the session without calling setAttribute or removeAttribute to be replicated.
        The useDirtyFlag configuration parameter can be used to optimize the number of times
        a session is replicated.
</p> | |
</li> | |
<li><b><code>TomcatA</code> crashes</b> | |
<p> | |
When TomcatA crashes, TomcatB receives a notification that TomcatA has dropped out | |
        of the cluster. TomcatB removes TomcatA from its membership list, and TomcatA will no longer
        be notified of any changes that occur in TomcatB.
The load balancer will redirect the requests from TomcatA to TomcatB and all the sessions | |
are current. | |
</p> | |
</li> | |
<li><b><code>TomcatB</code> receives a request for session <code>S1</code></b> | |
      <p>Nothing exciting; TomcatB will process the request as any other request.
</p> | |
</li> | |
<li><b><code>TomcatA</code> starts up</b> | |
      <p>Upon start up, before TomcatA starts taking new requests and making itself
         available, it will follow the start up sequence described in steps 1) and 2) above.
         It will join the cluster and contact TomcatB for the current state of all the sessions.
         Once it receives the session state, it finishes loading and opens its HTTP/mod_jk ports.
         No requests will make it to TomcatA until it has received the session state from TomcatB.
</p> | |
</li> | |
<li><b><code>TomcatA</code> receives a request, invalidate is called on the session (<code>S1</code>)</b> | |
<p>The invalidate call is intercepted, and the session is queued with invalidated sessions. | |
When the request is complete, instead of sending out the session that has changed, it sends out | |
an "expire" message to TomcatB and TomcatB will invalidate the session as well. | |
</p> | |
</li> | |
    <li><b><code>TomcatB</code> receives a request for a new session (<code>S2</code>)</b>
<p>Same scenario as in step 3) | |
</p> | |
</li> | |
    <li><b><code>TomcatA</code> The session <code>S2</code> expires due to inactivity.</b>
      <p>The invalidate call is intercepted the same way as when a session is invalidated by the user,
and the session is queued with invalidated sessions. | |
         At this point, the invalidated session will not be replicated to the other nodes until
         another request comes through the system and checks the invalid queue.
</p> | |
</li> | |
</ol> | |
<p>Phuuuhh! :)</p> | |
<p><b>Membership</b> | |
Clustering membership is established using very simple multicast pings. | |
     Each Tomcat instance will periodically send out a multicast ping;
     in the ping message the instance broadcasts its IP and the TCP port it listens on
     for replication.
If an instance has not received such a ping within a given timeframe, the | |
member is considered dead. Very simple, and very effective! | |
Of course, you need to enable multicasting on your system. | |
</p> | |
<p><b>TCP Replication</b> | |
     Once a multicast ping has been received, the member is added to the cluster.
     Upon the next replication request, the sending instance will use the host and
     port info to establish a TCP socket. Using this socket it sends over the serialized data.
     The reason TCP sockets were chosen is that TCP has built-in flow control and guaranteed delivery.
     So we know that when we send some data, it will make it there :)
</p> | |
<p><b>Distributed locking and pages using frames</b> | |
     Tomcat does not keep session instances in sync across the cluster.
     The implementation of such logic would be too much overhead and cause all
     kinds of problems. If your client accesses the same session
     simultaneously using multiple requests, then the last request
     will override the other copies of the session in the cluster.
</p> | |
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Monitoring your Cluster with JMX"><!--()--></a><a name="Monitoring_your_Cluster_with_JMX"><strong>Monitoring your Cluster with JMX</strong></a></font></td></tr><tr><td><blockquote> | |
  <p>Monitoring is very important when you run a cluster. Some of the cluster objects are exposed as JMX MBeans.</p>
  <p>Add the following parameters to your startup script (Java 5 or later):</p>
<div class="codeBox"><pre><code>set CATALINA_OPTS=\ | |
-Dcom.sun.management.jmxremote \ | |
-Dcom.sun.management.jmxremote.port=%my.jmx.port% \ | |
-Dcom.sun.management.jmxremote.ssl=false \ | |
-Dcom.sun.management.jmxremote.authenticate=false</code></pre></div> | |
<p> | |
    List of Cluster MBeans:
</p> | |
<table class="defaultTable"> | |
<tr> | |
<th>Name</th> | |
<th>Description</th> | |
<th>MBean ObjectName - Engine</th> | |
<th>MBean ObjectName - Host</th> | |
</tr> | |
<tr> | |
<td>Cluster</td> | |
<td>The complete cluster element</td> | |
<td><code>type=Cluster</code></td> | |
<td><code>type=Cluster,host=${HOST}</code></td> | |
</tr> | |
<tr> | |
<td>DeltaManager</td> | |
      <td>This manager controls the sessions and handles session replication</td>
<td><code>type=Manager,context=${APP.CONTEXT.PATH}, host=${HOST}</code></td> | |
<td><code>type=Manager,context=${APP.CONTEXT.PATH}, host=${HOST}</code></td> | |
</tr> | |
<tr> | |
<td>FarmWarDeployer</td> | |
<td>Manages the process of deploying an application to all nodes in the cluster</td> | |
<td>Not supported</td> | |
<td><code>type=Cluster, host=${HOST}, component=deployer</code></td> | |
</tr> | |
<tr> | |
<td>Member</td> | |
<td>Represents a node in the cluster</td> | |
      <td><code>type=Cluster, component=member, name=${NODE_NAME}</code></td>
<td><code>type=Cluster, host=${HOST}, component=member, name=${NODE_NAME}</code></td> | |
</tr> | |
<tr> | |
<td>ReplicationValve</td> | |
      <td>This valve controls the replication to the backup nodes</td>
<td><code>type=Valve,name=ReplicationValve</code></td> | |
<td><code>type=Valve,name=ReplicationValve,host=${HOST}</code></td> | |
</tr> | |
<tr> | |
<td>JvmRouteBinderValve</td> | |
      <td>This is a cluster fallback valve to change the session ID to the current Tomcat jvmRoute.</td>
<td><code>type=Valve,name=JvmRouteBinderValve, | |
context=${APP.CONTEXT.PATH}</code></td> | |
<td><code>type=Valve,name=JvmRouteBinderValve,host=${HOST}, | |
context=${APP.CONTEXT.PATH}</code></td> | |
</tr> | |
</table> | |
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="FAQ"><strong>FAQ</strong></a></font></td></tr><tr><td><blockquote> | |
<p>Please see <a href="http://wiki.apache.org/tomcat/FAQ/Clustering">the clustering section of the FAQ</a>.</p> | |
</blockquote></td></tr></table></td></tr><tr class="noPrint"><td width="20%" valign="top" nowrap class="noPrint"></td><td width="80%" valign="top" align="left"><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="comments_section" id="comments_section"><strong>Comments</strong></a></font></td></tr><tr><td><blockquote><p class="notice"><strong>Notice: </strong>This comments section collects your suggestions | |
on improving documentation for Apache Tomcat.<br><br> | |
If you have trouble and need help, read | |
<a href="http://tomcat.apache.org/findhelp.html">Find Help</a> page | |
and ask your question on the tomcat-users | |
<a href="http://tomcat.apache.org/lists.html">mailing list</a>. | |
Do not ask such questions here. This is not a Q&A section.<br><br> | |
The Apache Comments System is explained <a href="./comments.html">here</a>. | |
Comments may be removed by our moderators if they are either | |
implemented or considered invalid/off-topic.</p><script type="text/javascript"><!--//--><![CDATA[//><!-- | |
var comments_shortname = 'tomcat'; | |
var comments_identifier = 'http://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html'; | |
(function(w, d) { | |
if (w.location.hostname.toLowerCase() == "tomcat.apache.org") { | |
d.write('<div id="comments_thread"><\/div>'); | |
var s = d.createElement('script'); | |
s.type = 'text/javascript'; | |
s.async = true; | |
s.src = 'https://comments.apache.org/show_comments.lua?site=' + comments_shortname + '&page=' + comments_identifier; | |
(d.getElementsByTagName('head')[0] || d.getElementsByTagName('body')[0]).appendChild(s); | |
} | |
else { | |
d.write('<div id="comments_thread"><strong>Comments are disabled for this page at the moment.<\/strong><\/div>'); | |
} | |
})(window, document); | |
//--><!]]></script></blockquote></td></tr></table></td></tr><!--FOOTER SEPARATOR--><tr><td colspan="2"><hr noshade size="1"></td></tr><!--PAGE FOOTER--><tr><td colspan="2"><div align="center"><font color="#525D76" size="-1"><em> | |
Copyright © 1999-2017, Apache Software Foundation | |
</em></font></div></td></tr></table></body></html> |