Trilogix Cloud

Using Docker to deploy E-Commerce/CMS

CMS Web-Site Content Management and Containerisation – Example Concept

 

Problem statement:  Deploy many websites, from a master copy, to various nodes, using an on-demand or near on-demand model.  Example: one master site, www.big-retail-firm.com, needs to replicate its site to www.big-retail-firm-in-many-countries.com / .de, .fr, .es, .ru etc., but in the same master-copy language (no translation or integration in this particular use case).

 

Configuration files would automate the domain/URL/DNS infrastructure, along with deploying the master node copy (and associated libraries, using a parent-child pattern), files, properties, content and language to the new domain.  These scripts would need to be built, maintained and managed – Python, JavaScript, Ruby or any number of tools can be used.
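As a minimal sketch of what such a script might look like, the following Python renders a per-country deployment config from a single master template. The template fields, domain list and locale handling are illustrative assumptions, not part of any particular CMS product:

```python
# Sketch: render per-country site configs from one master template.
# Field names, domains and locales here are illustrative assumptions.
from string import Template

MASTER_TEMPLATE = Template(
    "server_name ${domain};\n"
    "content_source ${master};\n"
    "locale ${locale};\n"
)

def render_site_config(master, tld, locale):
    """Build one deployment config for a country-specific clone of the master site."""
    domain = master.replace(".com", "." + tld)
    return MASTER_TEMPLATE.substitute(domain=domain, master=master, locale=locale)

# One entry per target country; same master-copy language throughout.
targets = [("de", "en"), ("fr", "en"), ("es", "en")]
configs = {tld: render_site_config("big-retail-firm.com", tld, locale)
           for tld, locale in targets}
print(configs["de"])
```

In practice the rendered output would feed DNS records, virtual-host configuration and the CMS deployment descriptor for each new domain.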

 

Use Case:  CI/CD and new version updates would need to be integrated into this approach, which focuses on deploying a master library set to new nodes in order to support an on-demand website creation model.  This contrasts with the usual set of steps involving manual intervention, under which a new site deployment can take months.

 

Ideal platform setup:  Virtual Private Cloud in AWS or similar

 

CMS platform:  Magnolia, OpenCMS or similar Java based CMS

 

Docker

Containers allow the various tiers to be encapsulated into appropriate microservices or components.  These run within Docker containers that can be orchestrated together to deliver a full platform.

 

Docker Image

The Docker images contain everything needed to run the application – there is no need to download the software and then work through installation steps.  Runtime configuration is provided by Docker Compose, which allows a single run command: Docker downloads the CMS images for you, launches the containers and wires the containers together according to the Compose file.
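For example, a minimal docker-compose.yml might wire a CMS container to its database. The image names, ports and environment variables below are placeholders, not official images:

```yaml
# Illustrative docker-compose.yml – image names, ports and variables are assumptions.
version: "3.8"
services:
  cms:
    image: cms/api-server        # CMS application tier
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db               # wiring via the Compose service name
    depends_on:
      - db
  db:
    image: postgres:15           # stands in for any supported database
    environment:
      - POSTGRES_PASSWORD=example
```

A single `docker compose up -d` would then pull the images, create the containers and join them to a shared network where they resolve each other by service name.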

 

Components

Use Docker Machine, Docker Swarm and Docker Engine for provisioning and running production hosted environments.  Take advantage of Auto Scaling groups for production scale-out.
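In Swarm mode, scale-out can be expressed declaratively in the stack file. This fragment is a sketch; the image name and replica count are assumptions:

```yaml
# Illustrative Swarm stack fragment – image name and replica count are assumptions.
services:
  cms:
    image: cms/api-server
    deploy:
      replicas: 3                # spread service instances across Swarm nodes
      restart_policy:
        condition: on-failure    # reschedule containers that die
```

Deployed with `docker stack deploy -c stack.yml cms`, Swarm schedules the replicas across available nodes; in AWS, an Auto Scaling group can add or remove the underlying Swarm worker instances as demand changes.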

 

Environments

The design of a given environment will vary depending on need. Production environments, for example, should be clustered to provide some assurance of fault tolerance and failover, whereas non-production environments may be smaller and more volatile in nature.

 

The same images can be used across all environments but may be configured differently when instantiated into containers. Containers are “instantiated images” (with a configuration), where an image is a template that Docker stamps out to construct the container and bring it online.

 

Images

The following images are shipped, for example, as part of a Magnolia release:

  • API Server (cms/api-server)
  • UI Server (cms/ui-server)
  • Virtual Server (cms/virtual-server)
  • Web Server (cms/webshot-server)

 

Kits

Kits use Docker Compose to describe the environments and provide sample files.

Many kits use third-party Docker images which a typical CMS vendor does not produce or ship. These could include the official images for databases and search, as well as HAProxy, ZooKeeper and Redis.

  • HAProxy is used as a load balancer
  • ZooKeeper is used to demonstrate automatic cluster configuration in non-AWS environments (such as Rackspace or purely on-premise)
  • Redis is used as a backend provider for the Node cluster’s distributed cache

 

 

Environments and Architectures

Simple

This is a non-production environment that you can launch on a laptop or a small server for development, staging, QA or other test purposes. In addition, you may choose to launch a database and Elasticsearch, for example, as containers, or have those servers hosted externally. This environment allows developers to use the API and log into the user interface to work with content and schemas within the graphical front-end.

 

Basic (API Only)

This non-production environment is similar to the previous one but leaves off the user interface. This is common for scenarios where you solely wish to run operations against the CMS API (such as in an embedded case, or perhaps as part of an application test runner). In addition, you may choose to launch a database and Elasticsearch as containers, or you may have those servers hosted externally.

 

Standard (All Images)

A cms-appserver image could provide a runtime with Node.js and other OS dependencies pre-installed so that custom Node applications can be launched straight away. These custom Node applications can utilize the functionality of the cms-server node module to produce web sites with static caching, CDN compatibility, web content management page generation and more.
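A runtime image of this kind might be sketched as a Dockerfile. The base image, package names and entry point below are illustrative assumptions, not the actual cms-appserver build:

```dockerfile
# Illustrative Dockerfile for a hypothetical cms-appserver runtime image.
FROM node:20-slim

# OS-level dependencies a custom Node application might need
RUN apt-get update && apt-get install -y --no-install-recommends \
        git ca-certificates \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
RUN npm install            # would include the cms-server node module
COPY . .
CMD ["node", "server.js"]
```

Because the OS dependencies and Node runtime are baked into the image, a custom application only adds its own source and package.json on top.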

 

Clustered

This is a production environment where the API servers are clustered. This means that there are multiple API servers running and they’re aware of one another. A distributed cache, lock and messaging system operates across the cluster allowing any of the API cluster members to serve requests equally.

 

A load balancer runs ahead of the API servers and distributes requests across them. In an AWS environment this is typically an AWS Elastic Load Balancer, or, if using a kit, something like HAProxy may be used. In addition, the UI tier and web tier are clustered. In the former case, the underlying cms-server Node module provides Redis (or another cache) support to enable the cluster-wide distributed cache.  In all cases, a load balancer runs ahead of the cluster.
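A kit's HAProxy configuration for the API tier might look roughly like the fragment below. The server addresses and health-check endpoint are placeholders:

```
# Illustrative haproxy.cfg fragment – addresses and health check are placeholders.
frontend api_front
    bind *:80
    default_backend api_servers

backend api_servers
    balance roundrobin           # distribute requests across cluster members
    option httpchk GET /health   # hypothetical health-check endpoint
    server api1 10.0.1.10:8080 check
    server api2 10.0.1.11:8080 check
```

Because the API servers are equal cluster members, round-robin distribution is sufficient; the `check` option removes failed members from rotation.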

 

The API server is architected to be stateless and fast.  API servers are designed to fail fast – they can be spun up or spun down and the cluster automatically detects the new or departing members and redistributes state accordingly.
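The way state can be redistributed on membership change is often illustrated with consistent hashing. The sketch below is a simplification (real cluster implementations add virtual nodes and replication), and the member names are illustrative:

```python
# Sketch: consistent hashing, showing how cluster state is redistributed
# when API servers join or leave. Simplified: no virtual nodes or replicas.
import bisect
import hashlib

def _hash(key):
    """Map a string onto a large integer ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, members):
        self._ring = sorted((_hash(m), m) for m in members)

    def owner(self, key):
        """Return the cluster member responsible for this key."""
        hashes = [h for h, _ in self._ring]
        idx = bisect.bisect(hashes, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["api1", "api2", "api3"])
keys = ["cache:a", "cache:b", "cache:c", "cache:d"]
before = {k: ring.owner(k) for k in keys}

# api2 departs: only the keys it owned move; every other key keeps its owner.
ring2 = HashRing(["api1", "api3"])
after = {k: ring2.owner(k) for k in keys}
```

This is why members can spin up or down cheaply: a departing node only invalidates its own share of the keyspace rather than forcing a full reshuffle.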

 

Clustered (with DB of choice and Elasticsearch scale-out)

This is a production environment where the CMS database layer is scaled out for redundancy and throughput. The DB tier is configured to use replica sets and/or sharding. If sharding, separate router and configuration processes coordinate connectivity to the correct shard and/or replica.
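The router's job can be sketched as a hash-based shard lookup. This is illustrative only (real database routers use range or hashed shard keys backed by config metadata), and the shard names and key choice are assumptions:

```python
# Sketch: routing a document to a shard by hashing its shard key.
# Shard names and the choice of shard key are illustrative assumptions.
import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2"]

def route(shard_key):
    """Pick the shard that owns this key (simple hash-mod routing)."""
    digest = int(hashlib.sha1(shard_key.encode()).hexdigest(), 16)
    return SHARDS[digest % len(SHARDS)]

doc = {"_id": "page-42", "site": "big-retail-firm.de"}
target = route(doc["_id"])
```

The essential point is that routing is deterministic: every router process sends the same key to the same shard without coordinating per request.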

 

The Elasticsearch tier is also configured for clustering. The clustering mechanics for Elasticsearch are very similar to those of the CMS application: Elasticsearch nodes may join or leave a cluster at any time, and cluster state is automatically redistributed on the fly.

 

Clustered (with dedicated API workers)

Another common production environment is to take the total set of API workers and partition them into the following groups:

  • Workers
  • Web Request Handlers

 

In this configuration, the load balancer distributes live requests to the Web Request Handlers. The Web Request Handlers may put asynchronous jobs into the distributed work queue to handle incoming web requests. Workers consist of API servers that are not connected to the load balancer. Instead, they wait for jobs to become available and then work on them.
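The split between the two groups can be sketched with a shared job queue. In this toy version threads stand in for separate API server processes, and the job payloads are made up:

```python
# Sketch: Web Request Handlers enqueue jobs; Workers drain the queue.
# Threads stand in for separate API server processes here.
import queue
import threading

jobs = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    """A Worker: not attached to the load balancer, just drains jobs."""
    while True:
        job = jobs.get()
        if job is None:          # shutdown sentinel
            break
        with lock:
            results.append(f"done:{job}")
        jobs.task_done()

def handle_request(payload):
    """A Web Request Handler: respond fast, defer heavy work to the queue."""
    jobs.put(payload)
    return "accepted"

workers = [threading.Thread(target=worker) for _ in range(2)]
for t in workers:
    t.start()

for i in range(5):
    handle_request(f"thumbnail-{i}")

jobs.join()                      # wait until workers finish everything
for _ in workers:
    jobs.put(None)               # one sentinel per worker
for t in workers:
    t.join()
```

In production the queue would be a distributed one (backed by Redis, for instance) so that Workers and Web Request Handlers can live on different hosts.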

 

The distinction between Workers and Web Request Handlers allows fine-tuning of production to match Web Request Handler capacity with API traffic demand. Instances serving live web requests will generally require more CPU, whereas the Workers will generally favour more memory-intensive boxes.

 
