On a recent project I’ve been working to standardise the deployment infrastructure across an assortment of Ruby, PHP and Java apps: some on their own boxes, some on boxes shared across projects.
It was decided to standardise the deployment process using Docker Images. I’ll blog about the deployment side in the future but this post is about automating building those Docker Images.
After some research it was decided to move all the repositories to GitLab. I’d used GitLab for some of my own projects previously and can wholeheartedly recommend it.
As with most CI systems, the actual build gets triggered by the presence of a YAML file, called .gitlab-ci.yml, describing what should be run and how.
GitLab’s CI system runs all the commands from within a Docker Image. You can specify the actual image - this can be something standardised like the minimal Ubuntu or Debian images or one of your own images held within a registry you control.
I’ve seen some blogs where developers run the tests, then build a Docker Image only if all tests pass. To me this seems self-defeating: the Docker Image is there to provide a consistent environment, so tests should be run from within that same environment.
This means we have a three-step process:
- Build the git repo into a Docker Image
- Run unit tests from within the Docker Image, if the project has a test suite
- If you want to deploy the image, then upload it to a Registry with an appropriate tag.
Building the Image
So Step 1 meant getting an environment that can build Docker Images, and building from the Dockerfile in the Git repo.
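A minimal .gitlab-ci.yml along these lines does the job (a sketch: the image tag, dind port and job name are the stock defaults, and the build command assumes GitLab’s Registry variables):

```yaml
image: docker:latest

services:
  - docker:dind

variables:
  # tell the Docker client where the dind daemon is listening
  DOCKER_HOST: tcp://docker:2375
  # overlay2 is much faster than the default vfs driver under dind
  DOCKER_DRIVER: overlay2

build:
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
```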
The YAML above tells GitLab to use the ‘Docker’ Docker image, which contains just the Docker client program. It also says to run the ‘Docker In Docker’ service, giving us access to a functional Docker daemon. The two environment variables tell the Docker client how to connect to the daemon, and tell Docker to use the overlay2 storage driver to speed things up.
GitLab’s CI can run an arbitrary number of jobs, which will be run in parallel. We only need one job though, since the image can’t be tested until it’s built, and it can’t be uploaded until it’s been tested.
I also found that multiple jobs may get run on different Runners (servers), but we need tests to be run on the same Runner which built the Image.
Helpfully GitLab provides a Registry and sets up access tokens. It then sets environment variables with access details of the registry and a normalised tag name. CI_COMMIT_REF_SLUG is the branch or tag name in Git, but with any invalid characters substituted for compatibility with Docker.
Testing the Image
Next up I needed to get tests running from within the Docker Image. This particular project was a Ruby on Rails app which meant things like database connections could be controlled from environment variables.
The hardest things to figure out were that you need to run your own MySQL instead of GitLab’s default service, and that the tests have to run within the same GitLab Job as the build process.
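The relevant script lines look roughly like this (a sketch: the container name, database name and environment variables are illustrative, and the app is assumed to read its database host from DATABASE_HOST):

```yaml
build:
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    # start our own MySQL as a sibling container; it daemonises itself
    - docker run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e MYSQL_DATABASE=app_test mysql:5.7
    # run the test suite inside the freshly built image, pointed at that MySQL
    - docker run --rm --link mysql:mysql -e DATABASE_HOST=mysql "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" sh -c "rake db:test:prepare && rake test"
```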
First we start the MySQL instance; this will pull down the MySQL Image automatically, then daemonise. In theory this is racy, but in practical terms it’s always going to take longer to build the Image for our project than MySQL will take to boot and start accepting connections.
The MySQL Container will create the database automatically on start, so I just need to load the schema and any fixtures with rake db:test:prepare, then run the tests.
One potential gotcha is making sure you install your testing Gems when building your Docker Image. I’d created these Dockerfiles myself with this use case in mind, so they were already included.
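For a Bundler-based app that just means not excluding the test group when installing gems. A sketch of the relevant Dockerfile lines (paths are illustrative):

```dockerfile
COPY Gemfile Gemfile.lock ./
# install every group, including the test gems --
# i.e. no "--without development test" flag here
RUN bundle install --jobs 4
```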
Publishing the Image
If the build and tests are successful, then the final step is to publish the Image - I’m using GitLab’s Registry since they helpfully provide token access from the CI environment as Env Vars.
I only wanted to publish Images which are approved for release. That’s simply a matter of tagging within Git: any tag beginning with release- is assumed to be intended for release. Pushing that tag up to GitLab will result in the CI running as normal, but with the image also being published to the Registry.
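Putting it all together, the whole file looks roughly like this (a sketch of my setup: container names, the MySQL env vars and the test command are illustrative):

```yaml
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

build:
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e MYSQL_DATABASE=app_test mysql:5.7
    - docker run --rm --link mysql:mysql -e DATABASE_HOST=mysql "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" sh -c "rake db:test:prepare && rake test"
    # only push refs that look like releases; /bin/true stops a
    # non-match from being treated as a job failure
    - (echo "$CI_COMMIT_REF_SLUG" | egrep '^release-' && docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG") || /bin/true
```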
docker login was straightforward, but publishing only release tags was more convoluted, because the Docker client image only has ash as a shell instead of the full bash. In short, if egrep spits anything out then docker push gets run, and if not, /bin/true is run to avoid the line being treated as a test failure by CI.
With the above 20 lines of YAML any developer with suitable access can publish a new fully tested release to the Registry ready for the SysAdmins to deploy.
Actually releasing just needs tagging a SHA in Git and pushing the tag upstream.
The .gitlab-ci.yml file was largely identical for most projects. Some required additional services like Redis, but those were launched using standard Docker Images, the same as MySQL.