Container Clustering with Rancher Server (Part 6) – Creating and deploying custom catalog items for GoCD

This post was written by Rich Bosomworth.



This is the sixth post in an ongoing series exploring Rancher Server deployment, configuration and extended use. In the last post I detailed how to automate the deployment of AWS infrastructure and Rancher with Terraform. In this post we look at creating custom catalog items for GoCD using docker-compose and rancher-compose. We will also progress our Terraform plan to deploy Rancher behind an SSL-enabled AWS Elastic Load Balancer.

Experience with AWS and an awareness of Terraform are assumed, along with an understanding of the Rancher-related systems covered in previous posts. Content is best viewed from a desktop or laptop.

Series Links

Container Clustering with Rancher Server (Part 1) – Local Server Installation on Linux using Vagrant Host Nodes

Container Clustering with Rancher Server (Part 2) – Single Node Resilience in AWS

Container Clustering with Rancher Server (Part 3) – AWS EFS mounts using Rancher-NFS

Container Clustering with Rancher Server (Part 4) – Deploying and Maintaining Containerised GoCD Continuous Delivery

Container Clustering with Rancher Server (Part 5) – Automating the deployment of AWS infrastructure and Rancher with Terraform

Container Clustering with Rancher Server (Part 6) – Creating and deploying custom catalog items for GoCD

Container Clustering with Rancher Server (Part 7) – Stack and service build out to create a custom catalog item for Splunk


Video overview

This video provides a top level overview of the main deployment points. For a more detailed process review please refer to the deep dive section that follows.

Deep dive

In Part 4 we deployed GoCD with HA using EBS volumes. As of v17.3.0, the Thoughtworks GoCD development team have updated the Docker image format, and Rancher have also updated their load balancer recommendations. We will cover the revisions needed to accommodate these updates, along with enhancements for creating custom catalog items using the official Thoughtworks Docker images for GoCD.

Let’s take a look at the changes.

GoCD updates

  • The Docker image for GoCD server is now based on Alpine Linux.
  • The Docker images for GoCD agents are now OS specific (e.g. Ubuntu 16.04).
  • Agent registration using a fixed registration key is no longer automatic by default (see the compose sketch after this list).
  • There is now a single env var for JVM options (memory, heap etc).
    docker run -e GO_SERVER_SYSTEM_PROPERTIES="-Xmx4096m -Dfoo=bar" gocd/gocd-server
  • The directory structure within the GoCD server container is now a combined location for GoCD files.
/godata/addons
/godata/artifacts
/godata/config
/godata/db
/godata/logs
/godata/plugins
/home/go
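
Since agent registration with a fixed key is no longer automatic, the official agent images expose registration settings as environment variables. A minimal docker-compose sketch, assuming the GO_SERVER_URL and AGENT_AUTO_REGISTER_KEY variables documented for the official agent images; the image tag, server hostname and key value are examples only:

# Illustrative service definition for a GoCD agent (docker-compose v1 style).
# GO_SERVER_URL and AGENT_AUTO_REGISTER_KEY are environment variables exposed
# by the official gocd-agent images; the tag and values below are examples.
gocd-agent:
  image: gocd/gocd-agent-ubuntu-16.04:v17.3.0
  environment:
    GO_SERVER_URL: "https://gocd-server:8154/go"
    AGENT_AUTO_REGISTER_KEY: "example-registration-key"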

Rancher updates

With these changes in mind, let’s take a look at our revised Terraform plan. Or, for those who wish to dive straight in, the Git repo is here and contains an installation README.

Rancher for GoCD

As we have already explored the ALB Terraform plan deployment process in previous posts, I will detail only changes and enhancements.

  • Removed alb.tf
  • Added elb.tf – Referencing configuration for proxy protocol mode as advised here.
  • Added iam.tf – To facilitate IAM role(s) for instances (useful for the Rancher Route53 plugin).
  • Removed hosts.tf
  • Added gocdsrv_hst.tf and gocdagt_hst.tf, with corresponding userdata template files – Updated to provide dedicated ‘warm standby’ hosts for enhanced HA.

The Rancher documentation is very good with regard to ELB configuration requirements and includes Terraform resource examples. We enhanced our Terraform plan by adding a variable for the AWS Certificate Manager SSL certificate ARN on the listener.

resource "aws_elb" "rancher" {
 name = "${var.env_name}-rancher"
 subnets = ["${aws_subnet.pub_a.id}", "${aws_subnet.pub_b.id}"]
 security_groups = ["${aws_security_group.rancher_elb.id}"]

 listener {
  instance_port = 8080
  instance_protocol = "tcp"
  lb_port = 443
  lb_protocol = "ssl"
  ssl_certificate_id = "${var.ssl_arn}"
  }
}

resource "aws_proxy_protocol_policy" "websockets" {
 load_balancer = "${aws_elb.rancher.name}"
 instance_ports = ["8080"]

In the next section we will examine the process of creating and deploying our custom catalog items for GoCD server and GoCD agents.

Custom catalog

The Rancher catalog “…provides a catalog of application templates that make it easy to deploy complex stacks”. The catalog ships with two default components: the Library catalog, which contains Rancher-certified templates, and the Community catalog. You can also add a custom or private catalog. Forking the community catalog provides an excellent reference for developing your own custom catalog items.
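
A custom catalog is simply a Git repository laid out in the structure Rancher expects: each item lives under templates/, with a config.yml describing the item and numbered sub-directories holding the docker-compose and rancher-compose files for each version. A rough sketch, with illustrative item name and field values:

# Layout convention for a Rancher 1.x catalog repository:
#   templates/<item>/config.yml
#   templates/<item>/0/docker-compose.yml
#   templates/<item>/0/rancher-compose.yml
#
# Illustrative config.yml for a GoCD server item (values are examples only):
name: "GoCD Server"
description: "GoCD server with an EBS-backed /godata volume"
version: "v17.3.0"
category: "Continuous Integration"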

Our custom catalog is here. To install it, add the catalog as shown in Fig:1 from Admin > Settings > Catalog.

Fig:1

GoCD service


EBS data volume 

To retain HA for our GoCD deployment we continue to use an EBS volume; however, to streamline the deployment process we now specify the volume mount within our docker-compose and rancher-compose files. This method requires the Rancher-EBS plugin to be pre-installed and an EBS volume to be declared for creation at stack launch.

The volume section of our GoCD server docker-compose is as follows:

volumes:
  - ${data_volume}:/godata
volume_driver: ${volume_driver}

Variables are referenced from the rancher-compose file:

- variable: "data_volume"
  description: "Volume to save goserver data"
  label: "Data volume:"
  required: true
  default: "ebs"
  type: "string"
- variable: "volume_driver"
  description: "The volume driver type"
  label: "Volume driver:"
  default: "rancher-ebs"
  required: true
  type: "string"

Declaring the EBS volume prior to deploying GoCD is done from within Rancher via Infrastructure > Storage > Add Volume (Fig:2).

Fig:2 – Adding the EBS volume

When the GoCD server service stack is launched, the EBS mount can be verified as active under Infrastructure > Storage (Fig:3).

Fig:3 – EBS volume active

Dedicated hosts

To facilitate HA on service relaunch, we create two dedicated-host scheduling rules, one for the GoCD server and one for the GoCD agents. Rules are created by upgrading the service; the creation method for the GoCD server rule is shown in Fig:4.

Fig:4 – GoCD server scheduling rule
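
The same affinity can also be baked into the compose file using Rancher's scheduling labels, which is handy when you want the rule shipped as part of the catalog item rather than applied through a service upgrade. A minimal sketch, assuming the dedicated hosts carry a host label such as gocd=server (the label key and value are examples):

# Illustrative labels section for the GoCD server service; assumes the
# dedicated hosts have been tagged with the host label gocd=server.
labels:
  io.rancher.scheduler.affinity:host_label: gocd=server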

If an underlying EC2 host running the GoCD server becomes unavailable, Rancher will launch a replacement on the warm-standby dedicated host. The existing EBS volume will remount, containing all /godata files, and GoCD server and agent state will be restored, including registered-agent information.

GoCD agents

Auto-registration of agents is a two-stage process. The GoCD server component must be installed first in order to view the Config XML and obtain the agent auto-registration key (Fig:5).

Fig:5 – Obtaining the agent auto-registration key from Config XML

The registration key is then input during launch of the GoCD agent service item (Fig:6).

Fig:6 – Entering the registration key when launching the agent service
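
Within a catalog item, the key can be captured as just another rancher-compose question and substituted into the agent's environment. A trimmed sketch (the variable name and wiring are illustrative, not necessarily those used in the catalog repo):

# rancher-compose question for the registration key (illustrative name):
- variable: "agent_key"
  description: "GoCD agent auto-registration key"
  label: "Agent registration key:"
  required: true
  type: "string"

# Corresponding docker-compose environment wiring for the agent service:
environment:
  AGENT_AUTO_REGISTER_KEY: "${agent_key}"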

With scheduling rules in place and the service upgraded, GoCD agents will launch on dedicated hosts (Fig:7).

Fig:7 – GoCD agents launched on dedicated hosts

The GoCD agents will show as registered in both the GoCD server UI (Fig:8), and within the server Config XML (Fig:9).

Fig:8 – Agents registered in the GoCD server UI

Fig:9 – Agents registered in the server Config XML

As previously demonstrated in Part 4 of this series, the Rancher Route53 plugin adds an extra level of HA for GoCD DNS. Simply install the plugin and configure a Route53 CNAME record for the corresponding service A record (Fig:10).

Fig:10

Should you require further information or assistance with any aspects of this post, AWS, Terraform, Rancher, or any other services from the website, please feel free to comment, or get in touch directly via the methods detailed on our contact page.
