<html><head><META http-equiv="Content-Type" content="text/html; charset=iso-8859-1"><title>Apache Tomcat 6.0 - Clustering/Session Replication HOW-TO</title><meta name="author" value="Filip Hanik"><meta name="email" value="fhanik@apache.org"><meta name="author" value="Peter Rossbach"><meta name="email" value="pero@apache.org"></head><body bgcolor="#ffffff" text="#000000" link="#525D76" alink="#525D76" vlink="#525D76"><table border="0" width="100%" cellspacing="0"><!--PAGE HEADER--><tr><td><!--PROJECT LOGO--><a href="http://tomcat.apache.org/"><img src="./../images/tomcat.gif" align="right" alt="
The Apache Tomcat Servlet/JSP Container
" border="0"></a></td><td><font face="arial,helvetica,sanserif"><h1>Apache Tomcat 6.0</h1></font></td><td><!--APACHE LOGO--><a href="http://www.apache.org/"><img src="./../images/asf-logo.gif" align="right" alt="Apache Logo" border="0"></a></td></tr></table><table border="0" width="100%" cellspacing="4"><!--HEADER SEPARATOR--><tr><td colspan="2"><hr noshade="noshade" size="1"></td></tr><tr><!--RIGHT SIDE MAIN BODY--><td width="80%" valign="top" align="left"><table border="0" width="100%" cellspacing="4"><tr><td align="left" valign="top"><h1>Apache Tomcat 6.0</h1><h2>Clustering/Session Replication HOW-TO</h2></td><td align="right" valign="top" nowrap="true"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Important Note"><strong>Important Note</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
<p><b>You can also check the <a href="../config/cluster.html">configuration reference documentation.</a></b>
|
||
|
</p>
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="For the impatient"><strong>For the impatient</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
<p>
|
||
|
Simply add <div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre><Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/></pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
to your <code><Engine></code> or your <code><Host></code> element to enable clustering.
|
||
|
</p>
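<p>
For illustration, here is a trimmed server.xml sketch with the Cluster element placed inside the Engine. The port numbers, host names and the jvmRoute value below are placeholders for your own setup, not defaults:
</p>
<div align="left"><pre>
<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"/>
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      <Host name="localhost" appBase="webapps"/>
    </Engine>
  </Service>
</Server>
</pre></div>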
|
||
|
<p>
|
||
|
Using the above configuration will enable all-to-all session replication
using the <code>DeltaManager</code> to replicate session deltas. By all-to-all we mean that the session gets replicated to all the other
nodes in the cluster. This works great for smaller clusters, but we don't recommend it for larger clusters (many Tomcat nodes).
Also, when using the DeltaManager the session will be replicated to all nodes, even nodes that don't have the application deployed.<br>
To get around this problem, you'll want to use the BackupManager. This manager only replicates the session data to one backup
node, and only to nodes that have the application deployed. Downside of the BackupManager: not quite as battle tested as the DeltaManager.
<br>
Here are some of the important default values:<br>
1. Multicast address is 228.0.0.4<br>
2. Multicast port is 45564 (the port and the address together determine cluster membership).<br>
3. The IP broadcasted is <code>java.net.InetAddress.getLocalHost().getHostAddress()</code> (make sure you don't broadcast 127.0.0.1, this is a common error; see the note below the default configuration)<br>
|
||
|
4. The TCP port listening for replication messages is the first available server socket in range <code>4000-4100</code><br>
|
||
|
5. Two listeners are configured <code>ClusterSessionListener</code> and <code>JvmRouteSessionIDBinderListener</code><br>
|
||
|
6. Two interceptors are configured <code>TcpFailureDetector</code> and <code>MessageDispatch15Interceptor</code><br>
|
||
|
The following is the default cluster configuration:<br>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">

  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>

  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>

    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>

  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>

  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
</p>
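<p>
Regarding default 3 above: if your host name resolves to 127.0.0.1 (a common /etc/hosts setup), a simple workaround sketch is to set the Receiver address explicitly instead of <code>auto</code>. The address below is only an example from a private network and must be replaced with the node's real interface address:
</p>
<div align="left"><pre>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          address="192.168.0.10"
          port="4000"
          autoBind="100"
          selectorTimeout="5000"
          maxThreads="6"/>
</pre></div>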
|
||
|
<p>We will cover this section in more detail later in this document.</p>
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Cluster Basics"><strong>Cluster Basics</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
|
||
|
<p>To run session replication in your Tomcat 6.0 container, the following steps
|
||
|
should be completed:</p>
|
||
|
<ul>
|
||
|
<li>All your session attributes must implement <code>java.io.Serializable</code></li>
|
||
|
<li>Uncomment the <code>Cluster</code> element in server.xml</li>
|
||
|
<li>If you have defined custom cluster valves, make sure you have the <code>ReplicationValve</code> defined as well under the Cluster element in server.xml</li>
|
||
|
<li>If your Tomcat instances are running on the same machine, make sure the <code>tcpListenPort</code>
attribute is unique for each instance; in most cases Tomcat is smart enough to resolve this on its own by autodetecting available ports in the range 4000-4100</li>
<li>Make sure your <code>web.xml</code> has the <code><distributable/></code> element,
or set <code><Context distributable="true" /></code> on your context (see the example after this list)</li>
<li>If you are using mod_jk, make sure that the jvmRoute attribute is set on your Engine <code><Engine name="Catalina" jvmRoute="node01" ></code>
and that the jvmRoute attribute value matches your worker name in workers.properties</li>
<li>Make sure that all nodes have the same time and are synchronized with an NTP service!</li>
|
||
|
<li>Make sure that your loadbalancer is configured for sticky session mode.</li>
|
||
|
</ul>
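<p>
As an example of the web.xml and server.xml bits from the list above (a sketch only; the application content and the worker name <code>node01</code> are placeholders):
</p>
<div align="left"><pre>
<!-- web.xml of the web application -->
<web-app>
  <distributable/>
</web-app>

<!-- server.xml: jvmRoute must match the mod_jk worker name in workers.properties -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
  <Host name="localhost" appBase="webapps"/>
</Engine>
</pre></div>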
|
||
|
<p>Load balancing can be achieved through many techniques, as seen in the
|
||
|
<a href="balancer-howto.html">Load Balancing</a> chapter.</p>
|
||
|
<p>Note: Remember that your session state is tracked by a cookie, so your URL must look the same from the
outside; otherwise, a new session will be created.</p>
|
||
|
<p>Note: Clustering support currently requires the JDK version 1.5 or later.</p>
|
||
|
<p>The Cluster module uses the Tomcat JULI logging framework, so you can configure logging
|
||
|
through the regular logging.properties file. To track messages, you can enable logging on the key: <code>org.apache.catalina.tribes.MESSAGES</code></p>
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Overview"><strong>Overview</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
|
||
|
<p>To enable session replication in Tomcat, three different paths can be followed to achieve the exact same thing:</p>
|
||
|
<ol>
|
||
|
<li>Using session persistence, and saving the session to a shared file system (PersistentManager + FileStore); see the sketch after this list</li>
<li>Using session persistence, and saving the session to a shared database (PersistentManager + JDBCStore)</li>
|
||
|
<li>Using in-memory-replication, using the SimpleTcpCluster that ships with Tomcat 6 (lib/catalina-tribes.jar + lib/catalina-ha.jar)</li>
|
||
|
</ol>
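<p>
For the first path, a minimal per-application sketch (placed in that application's <Context> element) could look like the following. The store directory is an assumption and would have to point at a file system shared by all nodes; see the manager configuration reference for the full set of attributes:
</p>
<div align="left"><pre>
<Context>
  <Manager className="org.apache.catalina.session.PersistentManager">
    <Store className="org.apache.catalina.session.FileStore"
           directory="/mnt/shared/sessions"/>
  </Manager>
</Context>
</pre></div>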
|
||
|
|
||
|
<p>In this release of session replication, Tomcat can perform an all-to-all replication of session state using the <code>DeltaManager</code> or
|
||
|
perform backup replication to only one node using the <code>BackupManager</code>.
|
||
|
The all-to-all replication is an algorithm that is only efficient when the clusters are small. For larger clusters, to use
|
||
|
a primary-secondary session replication where the session will only be stored at one backup server simply setup the BackupManager. <br>
|
||
|
Currently you can use the domain worker attribute (mod_jk > 1.2.8) to build cluster partitions
|
||
|
with the potential of having a more scaleable cluster solution with the DeltaManager(you'll need to configure the domain interceptor for this).
|
||
|
In order to keep the network traffic down in an all-to-all environment, you can split your cluster
|
||
|
into smaller groups. This can be easily achieved by using different multicast addresses for the different groups.
|
||
|
A very simple setup would look like this:
|
||
|
</p>
|
||
|
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
        DNS Round Robin
               |
         Load Balancer
         /           \
    Cluster1       Cluster2
    /      \       /      \
 Tomcat1 Tomcat2 Tomcat3 Tomcat4
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
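<p>
A hedged sketch of how the two groups above could be separated: give each group its own <Membership> address (or port) in its Cluster element. The second multicast address is just an example value:
</p>
<div align="left"><pre>
<!-- Tomcat1 and Tomcat2 (Cluster1) -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.4" port="45564" frequency="500" dropTime="3000"/>

<!-- Tomcat3 and Tomcat4 (Cluster2) -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.5" port="45564" frequency="500" dropTime="3000"/>
</pre></div>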
|
||
|
|
||
|
<p>What is important to mention here is that session replication is only the beginning of clustering.
Another popular concept used to implement clusters is farming, i.e., you deploy your apps to only one
server, and the cluster will distribute the deployments across the entire cluster.
These capabilities are provided by the FarmWarDeployer (see the cluster example in <code>server.xml</code>).</p>
<p>In the next section we will go deeper into how session replication works and how to configure it.</p>
|
||
|
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Cluster Information"><strong>Cluster Information</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
<p>Membership is established using multicast heartbeats.
|
||
|
Hence, if you wish to subdivide your clusters, you can do this by
|
||
|
changing the multicast IP address or port in the <code><Membership></code> element.
|
||
|
</p>
|
||
|
<p>
|
||
|
The heartbeat contains the IP address of the Tomcat node and the TCP port that
|
||
|
Tomcat listens to for replication traffic. All data communication happens over TCP.
|
||
|
</p>
|
||
|
<p>
|
||
|
The <code>ReplicationValve</code> is used to find out when the request has been completed and initiate the
|
||
|
replication, if any. Data is only replicated if the session has changed (by calling setAttribute or removeAttribute
|
||
|
on the session).
|
||
|
</p>
|
||
|
<p>
|
||
|
One of the most important performance considerations is the synchronous versus asynchronous replication.
|
||
|
In a synchronous replication mode the request doesn't return until the replicated session has been
|
||
|
sent over the wire and reinstantiated on all the other cluster nodes.
|
||
|
Synchronous vs asynchronous is configured using the <code>channelSendOptions</code>
|
||
|
flag and is an integer value. The default value for the <code>SimpleTcpCluster/DeltaManager</code> combo is
|
||
|
8, which is asynchronous. You can read more on the <a href="../tribes/introduction.html">send flag (overview)</a> or the
<a href="http://tomcat.apache.org/tomcat-6.0-doc/api/org/apache/catalina/tribes/Channel.html">send flag (javadoc)</a>.
During async replication, the request is returned before the data has been replicated. Async replication yields shorter
request times, and synchronous replication guarantees the session to be replicated before the request returns.
|
||
|
</p>
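<p>
As a sketch, the flag is a sum of the send option constants documented in the Channel javadoc linked above; the two values used in this document break down as follows (the surrounding cluster configuration is omitted here):
</p>
<div align="left"><pre>
<!-- asynchronous replication: SEND_OPTIONS_ASYNCHRONOUS = 8 (default for the DeltaManager) -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8"/>

<!-- synchronous replication with acknowledgement:
     SEND_OPTIONS_USE_ACK (2) + SEND_OPTIONS_SYNCHRONIZED_ACK (4) = 6 -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6"/>
</pre></div>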
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Bind session after crash to failover node"><strong>Bind session after crash to failover node</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
<p>
|
||
|
If you are using mod_jk and not using sticky sessions, or for some reason sticky sessions don't
work, or you are simply failing over, the session id will need to be modified as it previously contained
the worker id of the previous tomcat (as defined by jvmRoute in the Engine element).
To solve this, we will use the JvmRouteBinderValve.
|
||
|
</p>
|
||
|
<p>
|
||
|
The JvmRouteBinderValve rewrites the session id to ensure that the next request will remain sticky
|
||
|
(and not fall back to go to random nodes since the worker is no longer available) after a fail over.
|
||
|
The valve rewrites the JSESSIONID value in the cookie with the same name.
|
||
|
Not having this valve in place will make it harder to ensure stickiness in case of a failure for the mod_jk module.
|
||
|
</p>
|
||
|
<p>
|
||
|
By default, if no valves are configured, the JvmRouteBinderValve is added automatically.
|
||
|
The cluster message listener called JvmRouteSessionIDBinderListener is also defined by default and is used to actually rewrite the
|
||
|
session id on the other nodes in the cluster once a fail over has occurred.
|
||
|
Remember, if you are adding your own valves or cluster listeners in server.xml, then the defaults are no longer added;
make sure that you add in all the appropriate valves and listeners as defined by the default configuration.
|
||
|
</p>
|
||
|
<p>
|
||
|
<b>Hint:</b><br>
|
||
|
With the attribute <i>sessionIdAttribute</i> you can change the request attribute name that contains the old session id.
Default attribute name is <i>org.apache.catalina.cluster.session.JvmRouteOrignalSessionID</i>.
|
||
|
</p>
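<p>
For example, a sketch of the valve with the attribute from the hint above set explicitly; the attribute value here is just an illustrative name, not a default:
</p>
<div align="left"><pre>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"
       sessionIdAttribute="myApp.originalSessionId"/>
</pre></div>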
|
||
|
<p>
|
||
|
<b>Trick:</b><br>
|
||
|
You can enable this mod_jk turnover mode via JMX before you drop a node, to move its sessions to the backup nodes!
Set enabled to true on the JvmRouteBinderValve of all backup nodes, disable the worker at mod_jk,
and then drop the node and restart it! Then enable the mod_jk worker and disable the JvmRouteBinderValves again.
This use case means that only requested sessions are migrated.
|
||
|
</p>
|
||
|
|
||
|
|
||
|
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Configuration Example"><strong>Configuration Example</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6">

  <Manager className="org.apache.catalina.ha.session.BackupManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"
           mapSendOptions="6"/>
  <!--
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  -->
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              port="5000"
              selectorTimeout="100"
              maxThreads="6"/>

    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
  </Channel>

  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>

  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>

  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
Break it down!!
|
||
|
</p>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6">
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
This is the main element; inside this element all cluster details can be configured.
The <code>channelSendOptions</code> is the flag that is attached to each message sent by the
SimpleTcpCluster class or any objects that are invoking the SimpleTcpCluster.send method.
The description of the send flags is available at <a href="http://tomcat.apache.org/tomcat-6.0-doc/api/org/apache/catalina/tribes/Channel.html">
our javadoc site</a>.
The <code>DeltaManager</code> sends information using the SimpleTcpCluster.send method, while the BackupManager
sends it directly through the channel.
<br>For more info, please visit the <a href="../config/cluster.html">reference documentation</a>.
|
||
|
</p>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Manager className="org.apache.catalina.ha.session.BackupManager"
         expireSessionsOnShutdown="false"
         notifyListenersOnReplication="true"
         mapSendOptions="6"/>
<!--
<Manager className="org.apache.catalina.ha.session.DeltaManager"
         expireSessionsOnShutdown="false"
         notifyListenersOnReplication="true"/>
-->
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
This is a template for the manager configuration that will be used if no manager is defined in the <Context>
element. In Tomcat 5.x each webapp marked distributable had to use the same manager; this is no longer the case.
Since Tomcat 6 you can define a manager class for each webapp, so that you can mix managers in your cluster.
Obviously the manager on one node's application has to correspond with the same manager on the same application on the other node.
If no manager has been specified for the webapp, and the webapp is marked <distributable/>, Tomcat will take this manager configuration
and create a manager instance cloning this configuration; see the sketch below.
<br>For more info, please visit the <a href="../config/cluster-manager.html">reference documentation</a>.
|
||
|
</p>
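<p>
A sketch of such a per-webapp override, placed in that application's <Context> (for example its META-INF/context.xml). Here one application keeps the DeltaManager while the cluster default above is the BackupManager; the values shown are assumptions, not requirements:
</p>
<div align="left"><pre>
<Context>
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
</Context>
</pre></div>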
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
The channel element is <a href="../tribes/introduction.html">Tribes</a>, the group communication framework
|
||
|
used inside Tomcat. This element encapsulates everything that has to do with communication and membership logic.
|
||
|
<br>For more info, please visit the <a href="../config/cluster-channel.html">reference documentation</a>.
|
||
|
</p>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.4"
            port="45564"
            frequency="500"
            dropTime="3000"/>
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
Membership is done using multicasting. Please note that Tribes also supports static memberships using the
|
||
|
<code>StaticMembershipInterceptor</code> if you want to extend your membership to points beyond multicasting.
|
||
|
The address attribute is the multicast address used and the port is the multicast port. These two together
|
||
|
create the cluster separation. If you want a QA cluster and a production cluster, the easiest config is to
|
||
|
have the QA cluster be on a separate multicast address/port combination than the production cluster.<br>
The membership component broadcasts the TCP address/port of itself to the other nodes so that communication between
nodes can be done over TCP. Please note that the address being broadcasted is the one of the
<code>Receiver.address</code> attribute.
<br>For more info, please visit the <a href="../config/cluster-membership.html">reference documentation</a>.
|
||
|
</p>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          address="auto"
          port="5000"
          selectorTimeout="100"
          maxThreads="6"/>
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
In tribes the logic of sending and receiving data has been broken into two functional components. The Receiver, as the name suggests
|
||
|
is responsible for receiving messages. Since the Tribes stack is threadless (a popular improvement now adopted by other frameworks as well),
there is a thread pool in this component that has a maxThreads and minThreads setting.<br>
The address attribute is the host address that will be broadcasted by the membership component to the other nodes.
<br>For more info, please visit the <a href="../config/cluster-receiver.html">reference documentation</a>.
|
||
|
</p>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
  <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
The sender component, as the name indicates, is responsible for sending messages to other nodes.
The sender has a shell component, the <code>ReplicationTransmitter</code>, but the real work is done in the
sub component, <code>Transport</code>.
Tribes supports having a pool of senders, so that messages can be sent in parallel, and if using the NIO sender,
you can send messages concurrently as well.<br>
Concurrently means one message to multiple senders at the same time, and parallel means multiple messages to multiple senders
at the same time.
<br>For more info, please visit the <a href="../config/cluster-sender.html">reference documentation</a>.
|
||
|
</p>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel>
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
Tribes uses a stack to send messages through. Each element in the stack is called an interceptor, and works much like the valves do
|
||
|
in the Tomcat servlet container.
|
||
|
Using interceptors, logic can be broken into more manageable pieces of code. The interceptors configured above are:<br>
TcpFailureDetector - verifies crashed members through TCP; if multicast packets get dropped, this interceptor protects against false positives,
ie a node being marked as crashed even though it is still alive and running.<br>
MessageDispatch15Interceptor - dispatches messages to a thread (thread pool) to send messages asynchronously.<br>
ThroughputInterceptor - prints out simple stats on message traffic.<br>
Please note that the order of interceptors is important. The way they are defined in server.xml is the way they are represented in the
channel stack. Think of it as a linked list, with the head being the first interceptor and the tail the last.
<br>For more info, please visit the <a href="../config/cluster-interceptor.html">reference documentation</a>.
|
||
|
</p>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
       filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
The cluster uses valves to track requests to web applications; we've mentioned the ReplicationValve and the JvmRouteBinderValve above.
The <Cluster> element itself is not part of the pipeline in Tomcat; instead the cluster adds the valve to its parent container.
If the <Cluster> element is configured in the <Engine> element, the valves get added to the engine, and so on.
<br>For more info, please visit the <a href="../config/cluster-valve.html">reference documentation</a>.
|
||
|
</p>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
          tempDir="/tmp/war-temp/"
          deployDir="/tmp/war-deploy/"
          watchDir="/tmp/war-listen/"
          watchEnabled="false"/>
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
The default tomcat cluster supports farmed deployment, ie, the cluster can deploy and undeploy applications on the other nodes.
|
||
|
The state of this component is currently in flux but will be addressed soon. There was a change in the deployment algorithm
|
||
|
between Tomcat 5.0 and 5.5 and at that point, the logic of this component changed to where the deploy dir has to match the
|
||
|
webapps directory.
|
||
|
<br>For more info, please visit the <a href="../config/cluster-deployer.html">reference documentation</a>.
|
||
|
</p>
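<p>
A hedged sketch of a deployer whose deployDir matches the Host appBase, with watching enabled on the node where new WAR files are dropped. The paths below are assumptions for illustration only:
</p>
<div align="left"><pre>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
          tempDir="/tmp/war-temp/"
          deployDir="/opt/tomcat/webapps/"
          watchDir="/tmp/war-listen/"
          watchEnabled="true"/>
</pre></div>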
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
<p>
|
||
|
Since the SimpleTcpCluster itself is a sender and receiver of the Channel object, components can register themselves as listeners to
|
||
|
the SimpleTcpCluster. The listener above <code>ClusterSessionListener</code> listens for DeltaManager replication messages
|
||
|
and applies the deltas to the manager, which in turn applies them to the session.
<br>For more info, please visit the <a href="../config/cluster-listener.html">reference documentation</a>.
|
||
|
</p>
|
||
|
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Cluster Architecture"><strong>Cluster Architecture</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
|
||
|
<p><b>Component Levels:</b>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
         Server
           |
         Service
           |
         Engine
           |  \
           |  --- Cluster --*
           |
         Host
           |
         ------
        /      \
     Cluster    Context(1-N)
        |             \
        |             -- Manager
        |                  \
        |                  -- DeltaManager
        |                  -- BackupManager
        |
     ---------------------------
        |                       \
     Channel                    \
  ----------------------------- \
     |                            \
  Interceptor_1 ..                 \
     |                              \
  Interceptor_N                      \
  -----------------------------      \
     |          |         |           \
  Receiver    Sender   Membership      \
                                       -- Valve
                                       |      \
                                       |       -- ReplicationValve
                                       |       -- JvmRouteBinderValve
                                       |
                                       -- LifecycleListener
                                       |
                                       -- ClusterListener
                                       |      \
                                       |       -- ClusterSessionListener
                                       |       -- JvmRouteSessionIDBinderListener
                                       |
                                       -- Deployer
                                             \
                                              -- FarmWarDeployer

</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
</p>
|
||
|
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="How it Works"><strong>How it Works</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
<p>To make it easy to understand how clustering works, we are going to take you through a series of scenarios.
|
||
|
In the scenario we only plan to use two tomcat instances <code>TomcatA</code> and <code>TomcatB</code>.
|
||
|
We will cover the following sequence of events:</p>
|
||
|
|
||
|
<ol>
|
||
|
<li><code>TomcatA</code> starts up</li>
|
||
|
<li><code>TomcatB</code> starts up (wait until TomcatA startup is complete)</li>
|
||
|
<li><code>TomcatA</code> receives a request, a session <code>S1</code> is created.</li>
|
||
|
<li><code>TomcatA</code> crashes</li>
|
||
|
<li><code>TomcatB</code> receives a request for session <code>S1</code></li>
|
||
|
<li><code>TomcatA</code> starts up</li>
|
||
|
<li><code>TomcatA</code> receives a request, invalidate is called on the session (<code>S1</code>)</li>
|
||
|
<li><code>TomcatB</code> receives a request, for a new session (<code>S2</code>)</li>
|
||
|
<li><code>TomcatA</code> The session <code>S2</code> expires due to inactivity.</li>
|
||
|
</ol>
|
||
|
|
||
|
<p>Ok, now that we have a good sequence, we will take you through exactly what happens in the session replication code</p>
|
||
|
|
||
|
<ol>
|
||
|
<li><b><code>TomcatA</code> starts up</b>
|
||
|
<p>
|
||
|
Tomcat starts up using the standard start up sequence. When the Host object is created, a cluster object is associated with it.
|
||
|
When the contexts are parsed, if the distributable element is in place in web.xml
|
||
|
Tomcat asks the Cluster class (in this case <code>SimpleTcpCluster</code>) to create a manager
|
||
|
for the replicated context. So with clustering enabled and distributable set in web.xml,
|
||
|
Tomcat will create a <code>DeltaManager</code> for that context instead of a <code>StandardManager</code>.
|
||
|
The cluster class will start up a membership service (multicast) and a replication service (tcp unicast).
|
||
|
More on the architecture further down in this document.
|
||
|
</p><p></p>
|
||
|
</li>
|
||
|
<li><b><code>TomcatB</code> starts up</b>
|
||
|
<p>
|
||
|
When TomcatB starts up, it follows the same sequence as TomcatA did with one exception.
|
||
|
The cluster is started and will establish a membership (TomcatA,TomcatB).
|
||
|
TomcatB will now request the session state from a server that already exists in the cluster,
|
||
|
in this case TomcatA. TomcatA responds to the request, and before TomcatB starts listening
|
||
|
for HTTP requests, the state has been transferred from TomcatA to TomcatB.
|
||
|
In case TomcatA doesn't respond, TomcatB will time out after 60 seconds, and issue a log
|
||
|
entry. The session state gets transferred for each web application that has distributable in
|
||
|
its web.xml. Note: To use session replication efficiently, all your tomcat instances should be
|
||
|
configured the same.
|
||
|
</p><p></p>
|
||
|
</li>
|
||
|
<li><B><code>TomcatA</code> receives a request, a session <code>S1</code> is created.</B>
|
||
|
<p>
|
||
|
The request coming in to TomcatA is treated exactly the same way as without session replication.
|
||
|
The action happens when the request is completed, the <code>ReplicationValve</code> will intercept
|
||
|
the request before the response is returned to the user.
|
||
|
At this point it finds that the session has been modified, and it uses TCP to replicate the
session to TomcatB. Once the serialized data has been handed off to the operating system's TCP logic,
|
||
|
the request returns to the user, back through the valve pipeline.
|
||
|
For each request the entire session is replicated; this allows code that modifies attributes
in the session without calling setAttribute or removeAttribute to be replicated.
A useDirtyFlag configuration parameter can be used to optimize the number of times
|
||
|
a session is replicated.
|
||
|
</p><p></p>
|
||
|
|
||
|
</li>
|
||
|
<li><b><code>TomcatA</code> crashes</b>
|
||
|
<p>
|
||
|
When TomcatA crashes, TomcatB receives a notification that TomcatA has dropped out
|
||
|
of the cluster. TomcatB removes TomcatA from its membership list, and TomcatA will no longer
|
||
|
be notified of any changes that occur in TomcatB.
|
||
|
The load balancer will redirect the requests from TomcatA to TomcatB and all the sessions
|
||
|
are current.
|
||
|
</p><p></p>
|
||
|
</li>
|
||
|
<li><b><code>TomcatB</code> receives a request for session <code>S1</code></b>
|
||
|
<p>Nothing exciting, TomcatB will process the request as any other request.
|
||
|
</p><p></p>
|
||
|
</li>
|
||
|
<li><b><code>TomcatA</code> starts up</b>
|
||
|
<p>Upon start up, before TomcatA starts taking new requests and making itself
available, it will follow the start up sequence described above in 1) and 2).
It will join the cluster and contact TomcatB for the current state of all the sessions.
Once it receives the session state, it finishes loading and opens its HTTP/mod_jk ports.
So no requests will make it to TomcatA until it has received the session state from TomcatB.
|
||
|
</p><p></p>
|
||
|
</li>
|
||
|
<li><b><code>TomcatA</code> receives a request, invalidate is called on the session (<code>S1</code>)</b>
|
||
|
<p>The invalidate call is intercepted, and the session is queued with the invalidated sessions.
|
||
|
When the request is complete, instead of sending out the session that has changed, it sends out
|
||
|
an "expire" message to TomcatB and TomcatB will invalidate the session as well.
|
||
|
</p><p></p>
|
||
|
|
||
|
</li>
|
||
|
<li><b><code>TomcatB</code> receives a request, for a new session (<code>S2</code>)</b>
|
||
|
<p>Same scenario as in step 3)
|
||
|
</p><p></p>
|
||
|
|
||
|
|
||
|
</li>
|
||
|
<li><code>TomcatA</code> The session <code>S2</code> expires due to inactivity.
|
||
|
<p>The invalidate call is intercepted the same way as when a session is invalidated by the user,
and the session is queued with the invalidated sessions.
At this point, the invalidated session will not be replicated across until
|
||
|
another request comes through the system and checks the invalid queue.
|
||
|
</p><p></p>
|
||
|
</li>
|
||
|
</ol>
|
||
|
|
||
|
<p>Phuuuhh! :)</p>
|
||
|
|
||
|
<p><b>Membership</b>
|
||
|
Clustering membership is established using very simple multicast pings.
|
||
|
Each Tomcat instance will periodically send out a multicast ping,
|
||
|
in the ping message the instance will broadcast its IP and TCP listen port
|
||
|
for replication.
|
||
|
If an instance has not received such a ping within a given timeframe, the
|
||
|
member is considered dead. Very simple, and very effective!
|
||
|
Of course, you need to enable multicasting on your system.
|
||
|
</p>
|
||
|
|
||
|
<p><b>TCP Replication</b>
|
||
|
Once a multicast ping has been received, the member is added to the cluster.
|
||
|
Upon the next replication request, the sending instance will use the host and
|
||
|
port info and establish a TCP socket. Using this socket it sends over the serialized data.
|
||
|
The reason I chose TCP sockets is because TCP has built-in flow control and guaranteed delivery.
|
||
|
So I know, when I send some data, it will make it there :)
|
||
|
</p>
|
||
|
|
||
|
<p><b>Distributed locking and pages using frames</b>
|
||
|
Tomcat does not keep session instances in sync across the cluster.
|
||
|
The implementation of such logic would be too much overhead and cause all
|
||
|
kinds of problems. If your client accesses the same session
|
||
|
simultaneously using multiple requests, then the last request
|
||
|
will override the other sessions in the cluster.
|
||
|
</p>
|
||
|
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="Monitoring your Cluster with JMX"><strong>Monitoring your Cluster with JMX</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
<p>Monitoring is a very important issue when you use a cluster. Some of the cluster objects are JMX MBeans.</p>
|
||
|
<p>Add the following parameters to your startup script when running with Java 5:
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
set CATALINA_OPTS=\
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=%my.jmx.port% \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
</p>
|
||
|
<p>Activate JMX with JDK 1.4:
|
||
|
<ol>
|
||
|
<li>Install the compat package</li>
|
||
|
<li>Install the mx4j-tools.jar at common/lib (use the same mx4j version as your tomcat release)</li>
|
||
|
<li>Configure an MX4J JMX HTTP adaptor on your AJP Connector<p></p>
|
||
|
<div align="left"><table cellspacing="4" cellpadding="0" border="0"><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#ffffff" height="1"><pre>
<Connector port="${AJP.PORT}"
           handler.list="mx"
           mx.enabled="true"
           mx.httpHost="${JMX.HOST}"
           mx.httpPort="${JMX.PORT}"
           protocol="AJP/1.3" />
</pre></td><td bgcolor="#023264" width="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr><tr><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td><td bgcolor="#023264" width="1" height="1"><img src="./../images/void.gif" width="1" height="1" vspace="0" hspace="0" border="0"></td></tr></table></div>
|
||
|
</li>
|
||
|
<li>Start your tomcat and point your browser to http://${JMX.HOST}:${JMX.PORT}</li>
|
||
|
<li>With the connector parameters <code>mx.authMode="basic" mx.authUser="tomcat" mx.authPassword="strange"</code> you can control the access (see the sketch after this list)!</li>
|
||
|
</ol>
|
||
|
</p>
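<p>
For example, the connector from step 3 with the access control parameters from the last step added; the user and password values are the placeholders used above and should of course be changed:
</p>
<div align="left"><pre>
<Connector port="${AJP.PORT}"
           handler.list="mx"
           mx.enabled="true"
           mx.httpHost="${JMX.HOST}"
           mx.httpPort="${JMX.PORT}"
           mx.authMode="basic"
           mx.authUser="tomcat"
           mx.authPassword="strange"
           protocol="AJP/1.3" />
</pre></div>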
|
||
|
<p>
|
||
|
List of Cluster MBeans<br>
|
||
|
<table border="1" cellpadding="5">
|
||
|
|
||
|
<tr>
|
||
|
<th align="center" bgcolor="aqua">Name</th>
|
||
|
<th align="center" bgcolor="aqua">Description</th>
|
||
|
<th align="center" bgcolor="aqua">MBean ObjectName - Engine</th>
|
||
|
<th align="center" bgcolor="aqua">MBean ObjectName - Host</th>
|
||
|
</tr>
|
||
|
|
||
|
<tr>
|
||
|
<td>Cluster</td>
|
||
|
<td>The complete cluster element</td>
|
||
|
<td><code>type=Cluster</code></td>
|
||
|
<td><code>type=Cluster,host=${HOST}</code></td>
|
||
|
</tr>
|
||
|
|
||
|
<tr>
|
||
|
<td>DeltaManager</td>
|
||
|
<td>This manager control the sessions and handle session replication </td>
|
||
|
<td><code>type=Manager,path=${APP.CONTEXT.PATH}, host=${HOST}</code></td>
|
||
|
<td><code>type=Manager,path=${APP.CONTEXT.PATH}, host=${HOST}</code></td>
|
||
|
</tr>
|
||
|
|
||
|
<tr>
|
||
|
<td>ReplicationValve</td>
|
||
|
<td>This valve control the replication to the backup nodes</td>
|
||
|
<td><code>type=Valve,name=ReplicationValve</code></td>
|
||
|
<td><code>type=Valve,name=ReplicationValve,host=${HOST}</code></td>
|
||
|
</tr>
|
||
|
|
||
|
<tr>
|
||
|
<td>JvmRouteBinderValve</td>
|
||
|
<td>This is a cluster fallback valve to change the Session ID to the current tomcat jvmroute.</td>
|
||
|
<td><code>type=Valve,name=JvmRouteBinderValve,
|
||
|
path=${APP.CONTEXT.PATH}</code></td>
|
||
|
<td><code>type=Valve,name=JvmRouteBinderValve,host=${HOST},
|
||
|
path=${APP.CONTEXT.PATH}</code></td>
|
||
|
</tr>
|
||
|
|
||
|
</table>
|
||
|
</p>
|
||
|
</blockquote></td></tr></table><table border="0" cellspacing="0" cellpadding="2"><tr><td bgcolor="#525D76"><font color="#ffffff" face="arial,helvetica.sanserif"><a name="FAQ"><strong>FAQ</strong></a></font></td></tr><tr><td><blockquote>
|
||
|
<p>Please see <a href="http://tomcat.apache.org/faq/cluster.html">the clustering section of the FAQ</a>.</p>
|
||
|
</blockquote></td></tr></table></td></tr><!--FOOTER SEPARATOR--><tr><td colspan="2"><hr noshade="noshade" size="1"></td></tr><!--PAGE FOOTER--><tr><td colspan="2"><div align="center"><font color="#525D76" size="-1"><em>
|
||
|
Copyright © 1999-2006, Apache Software Foundation
|
||
|
</em></font></div></td></tr></table></body></html>
|