This post was written by Rich Bosomworth.

This is the seventh post in an ongoing series exploring Rancher Server deployment, configuration and extended use. In the previous post I detailed how to create and deploy custom catalog items for GoCD. This post shows how we built out stack services from a docker-compose file to create a custom catalog item for Splunk Enterprise Monitor.
Splunk Enterprise is a leading platform for aggregating and analysing machine data to support business intelligence. Creating a custom catalog item for Rancher provides an easy deployment solution.
An understanding of all Rancher related systems from previous posts is assumed. Content is best viewed from a desktop or laptop.
Series Links
Container Clustering with Rancher Server (Part 2) – Single Node Resilience in AWS
Container Clustering with Rancher Server (Part 3) – AWS EFS mounts using Rancher-NFS
Background
We were recently doing some work for a client that uses Splunk for logging and metrics. In the course of our work – building out deployment pipelines – we needed to be able to test the integration of the deployment pipeline system with Splunk, so we decided to use Rancher to simplify the deployment of Splunk components for testing purposes.
Build environment
Before starting the build we revisited the benefits of a fully localised Rancher development environment. In a previous post we detailed how to create a LAN server installation on Linux using Vagrant host nodes. To provide a fully mobile development environment for Rancher we created a simpler, more self-contained Vagrant solution that runs on a single laptop. It was used to develop and test the Splunk deployment, and is available to download from our git repository.
Build overview
Rancher catalog items comprise both docker-compose and rancher-compose YAML files. The docker-compose file defines the multi-container services, while the rancher-compose file dictates options for multi-host deployment and provides the input scope for environment variables when the stack is deployed as a catalog item.
As the official Docker Hub repository for Splunk provides a comprehensive docker-compose file, it was a relatively straightforward process to port it into a Rancher stack service.
There are two Splunk Docker images available: standard Enterprise, and Enterprise Monitor. The Enterprise Monitor version comes with various data inputs pre-activated (e.g. a file monitor of Docker host JSON logs, HTTP Event Collector, syslog). It also includes the Docker app, which provides dashboards for analysing collected logs and Docker information. We decided to use this more comprehensive version as the basis for our catalog item.

Method
Test locally
It is recommended to first verify any docker-compose deployment stand-alone (i.e. using only docker/docker-compose). Information on how to work with Docker Compose is available from the Docker documentation.
The docker-compose file provided by Splunk on Docker Hub deployed smoothly with no modifications required.
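For reference, below is a minimal sketch of the kind of file involved. It is not the verbatim file from Docker Hub: the image tag, port list and volume paths are assumptions based on the services described in this post.

```yaml
# docker-compose.yml - minimal sketch of the Splunk stack (illustrative values)
splunk:
  image: splunk/splunk:6.5.3-monitor   # Enterprise Monitor image tag (assumed)
  hostname: splunk
  environment:
    SPLUNK_START_ARGS: --accept-license
  volumes_from:
    - vsplunk                          # persistent config/data via the sidekick-to-be
  ports:
    - "8000:8000"                      # Splunk web UI
    - "8088:8088"                      # HTTP Event Collector
    - "1514:1514"                      # syslog input
vsplunk:
  image: busybox                       # data-only container holding Splunk volumes
  volumes:
    - /opt/splunk/etc
    - /opt/splunk/var
```

Running docker-compose up -d and browsing to http://localhost:8000 verifies the stack before Rancher is involved.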
Stack creation
With docker-compose functionality verified, the next stage was to create a Rancher Stack (Fig:1).

Fig:1
Add services
The stack is where we port in the named services from the docker-compose file (Fig:2). In this case there are two services: splunk and vsplunk.

Fig:2
A crucial step at this stage is the creation (if required) of a ‘sidekick’ container or containers. ‘Sidekick’ is a Rancher term for a container that provides supporting services to a primary container, such as networking or shared volumes (volumes_from). Sidekicks scale with the primary service and are always deployed on the same host.
For example, the Splunk docker-compose stack uses volumes_from the vsplunk busybox container, so a sidekick relationship is required to expose the ‘volumes from’ option within the service configuration (Fig:3).

Fig:3
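In compose terms, Rancher records the sidekick relationship as a label on the primary service. A minimal sketch, using Rancher's io.rancher.sidekicks label convention (image tag assumed, as above):

```yaml
# Fragment of the Rancher-generated docker-compose.yml (sketch)
splunk:
  image: splunk/splunk:6.5.3-monitor   # tag assumed
  labels:
    io.rancher.sidekicks: vsplunk      # declares vsplunk as a sidekick of splunk
  volumes_from:
    - vsplunk
vsplunk:
  image: busybox
  volumes:
    - /opt/splunk/etc
    - /opt/splunk/var
```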
All other entries from the docker-compose file, such as fixed environment variables, are completed within this same service configuration screen (Fig:4).

Fig:4
Create and deploy the service
With all options populated and matched to the entries in the docker-compose file, the stack can be created. On creation the stack will activate. If activation is successful (further verified by accessing Splunk on http://<host_ip>:8000), the next stage is to take the Rancher-generated docker-compose and rancher-compose files (Fig:5 & Fig:6) and port them into a custom catalog configuration. Note that the rancher-compose file is still very simple at this stage of the build-out process.

Fig:5 (docker-compose)

Fig:6 (rancher-compose)
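At this point the rancher-compose file contains little more than the scale of each service, along these lines:

```yaml
# rancher-compose.yml as generated at this stage (sketch)
splunk:
  scale: 1
vsplunk:
  scale: 1
```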
Creating the catalog item
Catalog items follow a fixed folder structure. For our Splunk item it is as follows (Fig:7):

Fig:7
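For reference, the layout follows the standard Rancher custom catalog convention; a sketch of our item (the icon filename is illustrative):

```
templates/
  splunk/
    catalogIcon-splunk.png   # icon shown in the catalog UI
    config.yml               # catalog item metadata
    README.md
    0/                       # version directory, incremented for new releases
      docker-compose.yml
      rancher-compose.yml
```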
We sourced a suitable PNG image and populated both the README and the config.yml (Fig:8):

Fig:8
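A sketch of a config.yml for an item like this, using the standard Rancher metadata fields with illustrative values:

```yaml
# config.yml - catalog item metadata (illustrative values)
name: Splunk Enterprise Monitor
description: Splunk Enterprise Monitor with pre-activated data inputs
version: v0.1.0
category: Monitoring
maintainer: Skelton Thatcher Consulting
```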
The catalog ‘magic’ happens through additions to the rancher-compose file, which enable deployment as a catalog item. In our modified rancher-compose file (Fig:9) you can see that, as well as the catalog descriptor, we have added basic health check entries for the splunk service.

Fig:9
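A sketch of what such a file looks like: the .catalog block carries the descriptor and user-facing questions, while health_check uses Rancher's standard health check keys. The particular question, thresholds and timings below are illustrative, not our exact values:

```yaml
# rancher-compose.yml - catalog descriptor plus basic health check (sketch)
.catalog:
  name: Splunk Enterprise Monitor
  version: v0.1.0
  description: Splunk Enterprise Monitor stack
  questions:
    - variable: SPLUNK_START_ARGS
      label: Splunk start arguments
      type: string
      default: "--accept-license"
splunk:
  scale: 1
  health_check:
    port: 8000                      # check against the Splunk web UI
    request_line: GET / HTTP/1.0
    interval: 2000                  # ms between checks
    response_timeout: 2000          # ms
    healthy_threshold: 2
    unhealthy_threshold: 3
vsplunk:
  scale: 1
```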
With all the files created, you can push the template into your custom catalog repo and test deployment (Post #6 in this series details how to add a custom catalog to your Rancher server).
Although still at an experimental stage, the Splunk Enterprise Monitor catalog item created in this post is available in the SkeltonThatcher rancher-buildeng-catalog. We welcome contributions for improvement and will be progressing development ourselves with a view to enhancing resilience, as we have done with our HA stack for GoCD.
Should you require further information or assistance with any aspect of this post, or with AWS, Terraform, Rancher, or any other services from the website, please feel free to comment, or get in touch directly via the methods detailed on our contact page.