This post was written by Rich Bosomworth.

This is the fourth post in an ongoing series exploring Rancher Server deployment, configuration and extended use. In the last post I detailed how to add an extra level of HA by configuring an AWS EFS volume for shared storage and mounting it using the Rancher-NFS service. In this post we focus on GoCD, an open source continuous delivery server developed by ThoughtWorks. We adapt the previous configuration, using AWS EBS volumes to enable single-node HA for the GoCD server. We will also deploy auto-scaled, self-registering GoCD agents.
Experience with AWS is assumed, along with an understanding of all Rancher related systems from previous posts. Content is best viewed from a desktop or laptop.
Series Links
Container Clustering with Rancher Server (Part 2) – Single Node Resilience in AWS
Container Clustering with Rancher Server (Part 3) – AWS EFS mounts using Rancher-NFS
Build Architecture
Fig:1 represents the underlying infrastructure. It is based on the architecture used in Part 3, but with the following modifications:
- Single Availability Zone for Rancher Hosts
- Rancher AWS EBS Plugin
Fig:1
Fig:2 represents the GoCD deployment layer. This layer sits above the underlying infrastructure. As the post continues we shall examine and explain the configuration and components used.
Fig:2
GoCD and EBS
GoCD server installs its files in the following locations on your file system (i.e. within the Docker container):
```
/var/lib/go-server      # contains the binaries and database
/etc/go                 # contains the pipeline configuration files
/var/log/go-server      # contains the server logs
/usr/share/go-server    # contains the start script
/etc/default/go-server  # contains all the environment variables with default values
```
In the absence of an external DB connector (i.e. for AWS RDS), and to facilitate HA, a method of replicating the contents of these locations is required, specifically for /var/lib/go-server and /etc/go.
Tests with the Rancher NFS plugin proved ineffective when multiple volumes with different mount points were attached to the same instance. For this reason the Rancher EBS plugin was chosen, as it provides options for multi-volume mounts.
As EBS volumes are zone-specific, the GoCD server host must be single-AZ, so that EBS volumes can be re-attached in the event of a container or host failure.
*NOTE* – The method for configuring the Rancher EBS plugin and mounting volumes is different to the procedure for Rancher EFS.
EBS Plugin Configuration
With the EBS plugin installed, within Rancher (under Infrastructure/Storage) declare two storage volumes (Fig:3). We will name our volumes ebs1 and ebs2. It is important to specify both the volume type and the volume size.
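In rancher-compose terms, the two declared volumes would look roughly like the sketch below. The driver and option names are assumptions based on the Rancher EBS plugin, and the sizes and volume type are purely illustrative:

```yaml
# Hypothetical rancher-compose volume declaration (names/opts assumed)
volumes:
  ebs1:
    driver: rancher-ebs      # Rancher EBS storage driver
    driver_opts:
      size: "20"             # volume size in GiB (illustrative)
      volumeType: gp2        # EBS volume type (illustrative)
  ebs2:
    driver: rancher-ebs
    driver_opts:
      size: "10"
      volumeType: gp2
```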
Fig:3
GoCD Server Service
The GoCD server and agent Rancher catalog items are developed and maintained by Raúl Sánchez. We have forked the catalog and modified the GoCD service to accommodate the latest Docker builds. Our forked repository can be added to your own Rancher build as a Custom Catalog.
*NOTE* – At time of writing GoCD v17+ Docker agents do not auto-register with the GoCD server when deployed within Rancher. The methods in this post are based on v16.12.0.
As shown in Fig:1 and Fig:2, we use separate Rancher hosts for the GoCD server and the GoCD agents. This is to facilitate single-AZ HA for the GoCD server via EBS volumes. Both sets of hosts are labelled accordingly when deployed (i.e. gosrvhst and goagthst). This enables fixed deployment of each service to the relevant host (Fig:4).
Fig:4
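The scheduling rule can also be expressed directly in the service's docker-compose labels. A minimal sketch, assuming the hosts carry a label with a hypothetical key of `host` (the image tag reflects the v16.12.0 constraint noted above):

```yaml
# Hypothetical docker-compose fragment pinning the server to labelled hosts
gocd-server:
  image: gocd/gocd-server:16.12.0          # image name/tag assumed
  labels:
    # Rancher 1.x host-label affinity; label key "host" is an assumption
    io.rancher.scheduler.affinity:host_label: host=gosrvhst
```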
Adding the EBS volumes and mount points for the GoCD server is done via Volumes/Add Volume (Fig:5).
Fig:5
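As a sketch, the equivalent docker-compose volume mappings for the GoCD server service might look like the following, with each EBS-backed volume mounted over one of the directories identified earlier (volume names match those declared under Storage; the image tag is assumed):

```yaml
# Hypothetical docker-compose fragment mounting the EBS-backed volumes
gocd-server:
  image: gocd/gocd-server:16.12.0          # image name/tag assumed
  volumes:
    - ebs1:/var/lib/go-server              # binaries and database
    - ebs2:/etc/go                         # pipeline configuration files
```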
With the GoCD service upgraded and running on gosrvhst, successful EBS volume mounts will be shown as active under Storage Drivers (Fig:6). You can also view the EBS volumes as created in-situ from the EC2 console (Fig:7).
Fig:6
Fig:7
GoCD Agents
GoCD agents are launched via the same route as the GoCD server service, selecting the host label via scheduling (i.e. goagthst) and ‘upgrading’ the service. As the GoCD agents need to link to the GoCD server service, it is important to make sure the stack service is referenced in the configuration options (Fig:8).
Fig:8
When launched, the GoCD agents will auto-register and show as enabled with the GoCD server (Fig:9).
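Putting the agent configuration together, a hedged docker-compose sketch might look like this. The image name, link alias, and the `GO_SERVER` environment variable are assumptions based on the official GoCD agent images of that era, not the exact catalog item:

```yaml
# Hypothetical docker-compose fragment for self-registering agents
gocd-agent:
  image: gocd/gocd-agent:16.12.0           # image name/tag assumed
  labels:
    # pin agents to the agent hosts; label key "host" is an assumption
    io.rancher.scheduler.affinity:host_label: host=goagthst
  links:
    - gocd-server:go-server                # reference the server stack service
  environment:
    GO_SERVER: go-server                   # agent resolves the server via the link alias
```

Scaling the agent service in Rancher then launches additional containers that register themselves against the same server.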
Route53 DNS Plugin
To enable consistency of access in the event of a host or container failure, we use the Rancher Route53 plugin. The plugin is installed via the Rancher catalog and creates dynamic Route53 DNS entries for public-facing services. Our method is to create a fixed CNAME record pointing at the dynamic GoCD server service A record (Fig:10).
Fig:10
In the next post we will look at automating deployment of the AWS infrastructure and both the Rancher server and Rancher hosts using Terraform.
If you would like further information on any of the methods and systems detailed in this post, please feel free to get in touch via the comments section, or through our contact page.
The reference material contained within this post expands on the presentation delivered by Matthew Skelton at Continuous Delivery Amsterdam.