Resin Documentation | App Server

Deploying Web-Applications to a Cluster
When you deploy an application, Resin ensures that each server in the cluster gets a copy of the new application, using a transactional store to ensure consistency.

Cluster Deployment

To deploy an application to your cluster, use the same command-line deploy as you would for a single server. The deployment process is the same because Resin treats a standalone server as a single-server cluster.

```
unix> resinctl deploy test.war
```

The command-line deploy sends test.war to the cluster's triad-server repository and then copies the repository to all three servers in the triad hub. (If you have only two servers in the cluster, Resin copies the application to both.) Once all three triad hub servers have the deployed .war, Resin updates all the spoke servers in the cluster.

The cluster command-line deployment uses the <web-app-deploy> tag in resin.xml to configure and control where the deployed application is expanded. Typically, the deployment uses the webapps/ directory.

```
<resin xmlns="http://caucho.com/ns/resin">
  <cluster id="">
    ...
    <host id="">
      <web-app-deploy path="webapps"
                      expand-preserve-fileset="WEB-INF/work/**"/>
    </host>
  </cluster>
</resin>
```

When you're using virtual hosts, add a -host option to specify the virtual host to deploy to. By default, Resin deploys to the default host, using the war's name as the prefix. Both can be changed with deploy options.

```
unix> resinctl deploy -host www.foo.com test.war
```

Controlling Restarts

By default, a Resin server detects an updated application automatically and restarts the web-app immediately. You can delay the restart by putting it under manual control. In manual mode, Resin only looks for a new version when you issue a command-line webapp-restart.
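The hub-then-spoke update order can be sketched as a minimal simulation (the names below are hypothetical, and Python is used only for illustration): an update is pushed to the spoke servers only after every triad server has stored it.

```python
# Sketch of Resin's two-phase cluster deployment order (illustrative only):
# the archive is first replicated to every triad (hub) server, and only
# then pushed out to the spoke servers.

def deploy_to_cluster(archive, triad, spokes):
    """Copy 'archive' to all hub servers, then to all spokes."""
    # Phase 1: replicate to every triad server.
    for server in triad:
        server.setdefault("repository", set()).add(archive)

    # Phase 2: only after the whole hub has the archive, update the spokes.
    if all(archive in s["repository"] for s in triad):
        for server in spokes:
            server.setdefault("repository", set()).add(archive)
        return True
    return False

triad = [{"name": f"triad-{i}"} for i in range(3)]
spokes = [{"name": f"spoke-{i}"} for i in range(2)]
deploy_to_cluster("test.war", triad, spokes)
```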
```
unix> resinctl webapp-restart test
```

Manual control is configured by setting restart-mode to manual in the <web-app-deploy>:

```
<resin xmlns="http://caucho.com/ns/resin">
  <cluster id="">
    <host id="">
      <web-app-deploy path="webapps"
                      restart-mode="manual"
                      expand-preserve-fileset="WEB-INF/work/**"/>
    </host>
  </cluster>
</resin>
```

Zero-Downtime Deployment (Versioning)

You can configure Resin's cluster deployment in a versioning mode where users gracefully upgrade to your new application version. Because new user sessions use the new version while existing user sessions continue on the old version, users need not be aware of the upgrade.

By default, Resin restarts the web-app on a new deployment, destroying the current user sessions before starting them on the new deployment. You can change that behavior by setting multiversion-routing to true and deploying with a -version command-line option.

```
<resin xmlns="http://caucho.com/ns/resin">
  <cluster id="">
    <host id="">
      <web-app-deploy path="webapps"
                      multiversion-routing="true"
                      expand-preserve-fileset="WEB-INF/work/**"/>
    </host>
  </cluster>
</resin>
```

For versioning to work, you deploy a named version of your application. Resin sends new sessions to the most recent version and leaves old sessions on the previous version.

```
unix> resinctl deploy -version 2.1.3 test.war
```

Internally, the application repository keeps both versions active:

```
production/webapp/default/test-2.1.2
production/webapp/default/test-2.1.3
```

Resin's deployment system is designed around several reliability requirements. Although the user-visible system is simple, the underlying architecture is sophisticated -- we're not just copying .war files.
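The multiversion session routing described above can be sketched as a small routing function (hypothetical names, not Resin's API): new sessions go to the newest deployed version, while existing sessions stay pinned to the version they started on.

```python
# Sketch of multiversion session routing (illustrative, not Resin's API):
# new sessions are routed to the newest version; existing sessions stay
# on the version where they began until they end.

versions = ["test-2.1.2", "test-2.1.3"]   # both versions active in the repository
sessions = {}                              # session-id -> pinned version

def route(session_id):
    """Return the web-app version that should serve this session."""
    if session_id not in sessions:
        # New session: pin it to the most recently deployed version.
        sessions[session_id] = versions[-1]
    return sessions[session_id]

# An old session created before the 2.1.3 deploy keeps using 2.1.2.
sessions["old-user"] = "test-2.1.2"
```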
The following is a description of the underlying architecture of Resin's deployment system. It's not necessary to understand, or even read, this section to use Resin's deployment, but for those who are curious, the details may be interesting.

.git Control System Architecture

The main repository is based on the distributed version control system .git, which is used for large programming projects like the Linux kernel. The .git format gives Resin the transactional repository that makes cloud deployment reliable.

Each file in the repository is stored under its secure content hash (SHA-1). The secure hash lets Resin verify that a file has been copied completely, without corruption. If the hash verification fails, Resin recopies the file from the triad or from the deploy command. Since a file is not saved until it is validated, Resin can guarantee that the file contents are correct.

Files are never overwritten in Resin's repository; it is essentially write-once. Two versions of the same file are saved as two separate files: a test.jsp (version 23) that replaces a test.jsp (version 22). So there is never a case where an older version of a file can be partially overwritten.

Since the repository itself is organized as a .git self-validating file, its own updates are validated before any changes occur. Essentially, Resin verifies every file in a repository update, then verifies every directory, and then verifies the repository itself before making any changes visible.
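The write-once, hash-addressed storage can be sketched with Python's hashlib (a simplification of the Git object model, not Resin's actual implementation): identical content always maps to the same key, and a new version of a file gets a fresh key instead of overwriting the old one.

```python
import hashlib

# Sketch of a write-once, content-addressed store in the style of the
# Git object model (simplified; not Resin's actual implementation).

store = {}  # SHA-1 hex digest -> file contents

def save(data: bytes) -> str:
    """Store data under its SHA-1 hash; existing entries are never overwritten."""
    key = hashlib.sha1(data).hexdigest()
    # Identical content hashes to the same key, so a re-save is a no-op;
    # different content always gets a fresh key, so nothing is clobbered.
    store.setdefault(key, data)
    return key

def verify(key: str) -> bool:
    """Re-hash the stored bytes to detect a corrupted or partial copy."""
    return hashlib.sha1(store[key]).hexdigest() == key

v22 = save(b"<h1>test.jsp version 22</h1>")
v23 = save(b"<h1>test.jsp version 23</h1>")   # replaces v22 logically, not physically
```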
If at any point a server stops, the network fails, or a new file is corrupted in a partial transfer, Resin continues to use the old files. On recovery, Resin verifies and deletes any partially copied files, then continues the repository update. Only the repository system itself knows that an update is in process; the rest of Resin continues to use the old repository files.

Repository Tag System

Internally, the repository is organized by tags, where each tag names an archive like a .war.

Cloud Deployment

Deploying to a cloud extends the transactional repository to all the servers in a cluster. In Resin's replicated hub-and-spoke model, a deployment copies the archive first to all three servers in the triad. (If you have only two servers, it copies to the second server.) Because all three servers have a copy of the entire repository, your system keeps its reliability even if one server is down for maintenance and a second server restarts unexpectedly.

After all three servers in the hub have received and verified the deployment update, the triad hub sends the changes to all of the spoke servers. If a spoke server restarts, or a new spoke server is added to the cloud dynamically, it contacts the hub for the most recent repository version. So even a new virtual-machine image can receive the most recent deployments without intervention.
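The recovery and catch-up behavior can be sketched as a hash-checked sync (hypothetical names, illustrative only): a spoke compares its files against the hub, deletes anything that fails hash verification as a partial copy, and recopies only what is missing.

```python
import hashlib

# Sketch of a spoke server syncing from the triad hub (illustrative only):
# files whose stored key does not match the hash of their contents are
# treated as partial copies, deleted, and recopied from the hub.

def sha1(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

def sync_from_hub(hub: dict, spoke: dict) -> None:
    """Bring 'spoke' up to date with 'hub'; both map SHA-1 key -> contents."""
    # Drop partially copied or corrupted files (hash mismatch).
    for key in [k for k, data in spoke.items() if sha1(data) != k]:
        del spoke[key]
    # Copy anything missing; verified files are left untouched.
    for key, data in hub.items():
        if key not in spoke:
            spoke[key] = data

hub = {sha1(b"app-v2"): b"app-v2"}
spoke = {sha1(b"app-v2"): b"app-v"}   # simulated partial transfer
sync_from_hub(hub, spoke)
```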