Integrating Docker with Jenkins for continuous deployment of a Ruby on Rails application
For the past few weeks, I have been working on integrating Docker and Jenkins in order to improve the continuous integration workflow of a project I work on.
The application consists of the following packages and services:
- Ruby on Rails application (called ruby_app)
- MySQL database
- RabbitMQ messaging system
- Apache Solr search platform
Here is a short description of the workflow that I wanted to have as an end result:
- Jenkins builds all the Docker images from the provided Dockerfiles.
- If the Docker images were built successfully, Jenkins runs containers from them, links them (ruby_app to MySQL, RabbitMQ, and Solr) and executes the RSpec suite for ruby_app.
- If the tests pass successfully, Jenkins tags and pushes the new ruby_app image to a private Docker repository, to be pulled by the staging and production servers.
- The staging server pulls the latest image for ruby_app from the private Docker repository, runs it, and updates the Hipache proxy with the new container.
Jenkins
The first step is to install Jenkins on the CI server. You could either install it manually or run it from a Docker image. For our purposes, we decided to install it manually.
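As a side note, running Jenkins from a Docker image instead is essentially a one-liner (a sketch, assuming a jenkins image is available locally or on a registry, and using the same single-dash flag syntax as the rest of this post):

# Run Jenkins in a container and expose its web UI on port 8080.
docker run -d -p 8080:8080 -name jenkins jenkins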
We then created a Jenkins "Job" for ruby_app. Here is a gist of it:
#!/bin/bash
rm -rf docker-jenkins-ci-scripts
git clone https://github.com/nonsense/docker-jenkins-ci-scripts.git
cd docker-jenkins-ci-scripts
chmod +x *.sh
./build_ruby_app.sh
rc=$?
if [[ $rc != 0 ]] ; then
  echo -e "Docker images build failed."
  exit $rc
fi
echo -e "Docker images build passed successfully."
./run_tests_ruby_app.sh
rc=$?
if [[ $rc != 0 ]] ; then
  echo -e "Tests failed."
  exit $rc
fi
echo -e "Tests passed successfully. Pushing ruby_app image to local private repository."
docker tag ruby_app localhost:5000/ruby_app
docker push localhost:5000/ruby_app
echo -e "Tested image pushed successfully to local repository."
Basically, we check out a git repository, build all the images, run the RSpec tests on ruby_app, and push the image to the private repository if everything passes.
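Note that the push target localhost:5000 assumes a private Docker registry is already listening on the CI server. If it is not, one can be started from the stock registry image (a sketch, not part of the original setup; add a volume if the pushed images should survive the registry container):

# Start a private registry on the CI host, reachable as localhost:5000.
docker run -d -p 5000:5000 -name registry registry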
The build_ruby_app.sh script goes along the lines of:
#!/bin/bash

# Tests whether an image has already been built. Images like "mysql", "rabbitmq", "solr", etc. don't have to be rebuilt often.
function check_if_exists_and_build {
  echo -e "Testing whether the $1 image has already been built... \n"
  if docker images | grep -w $1
  then
    echo -e "$1 already exists. Not building it. \n"
  else
    echo -e "$1 does not exist. Building now... \n"
    build $1
  fi
}

# Builds a given image. The id of the built image is recorded in a file,
# and the file's presence tells us whether the build succeeded.
function build {
  rm -f docker-built-id
  docker build -t $1 ./$1 \
    | perl -pe '/Successfully built (\S+)/ && `echo -n $1 > docker-built-id`'
  if [ ! -f docker-built-id ]; then
    echo -e "No docker-built-id file found, so the build failed."
    exit 1
  else
    echo -e "docker-built-id file found, so the build was successful."
  fi
  rm -f docker-built-id
}
check_if_exists_and_build solr
check_if_exists_and_build mysql
check_if_exists_and_build rabbitmq
build ruby_app
Basically, we check whether an image has already been built, and build it only if it is missing. Images for MySQL, RabbitMQ, etc. do not have to be rebuilt often, so we build them only once and make sure they are working. However, since we are actively developing ruby_app, we build it every time Jenkins runs.
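As a side note, the existence check does not strictly need grep; depending on the Docker version, docker images accepts a repository name and -q prints only the matching image ids, so a variant of the check could look like this (a sketch, not the original script):

function image_exists {
  # Empty output from `docker images -q <repo>` means no such image exists.
  [ -n "$(docker images -q $1)" ]
}

if ! image_exists mysql; then
  build mysql
fi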
The run_tests_ruby_app.sh script goes along the following lines:
#!/bin/bash

MYSQL=$(docker run -p 3306:3306 -name mysql -d mysql)
RABBITMQ=$(docker run -p 5672:5672 -p 15672:15672 -name rabbitmq -d rabbitmq)
SOLR=$(docker run -p 8983:8983 -name solr -d solr)

echo -e "Running tests for ruby_app... \n"

docker run -privileged -p 80 -p 443 -name ruby_app \
  -link rabbitmq:rabbitmq -link mysql:mysql -link solr:solr \
  -entrypoint="/opt/ci.sh" -t ruby_app \
  | perl -pe '/Tests failed inside docker./ && `echo -n "Tests failed" > docker-tests-failed`'

# Clean up the containers before evaluating the result, so that a failed run
# does not leave named containers behind and break the next build.
docker kill mysql rabbitmq solr ruby_app
docker rm mysql rabbitmq solr ruby_app

if [ ! -f docker-tests-failed ]; then
  echo -e "No docker-tests-failed file. Apparently tests passed."
else
  echo -e "docker-tests-failed file found, so the test run failed."
  rm docker-tests-failed
  exit 1
fi
We run the ruby_app container, linking it to MySQL, RabbitMQ and Solr, and overwriting the default ENTRYPOINT with the ci.sh script. By default, ruby_app executes rake db:migrate, rake assets:precompile and rails s. However, for the CI run, we execute rake db:migrate and rspec.
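The default entrypoint of the ruby_app image is not shown in this post; based on the description above, it would look roughly like this sketch (the exact script is an assumption):

#!/bin/bash
# Hypothetical default entrypoint for the ruby_app image: prepare the
# database and assets, then start the Rails server in the foreground.
set -e
rake db:migrate
rake assets:precompile
exec rails s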
For some reason, Docker does not return proper error codes if RSpec fails. Therefore I echo "Tests failed inside docker." inside ci.sh, which I then detect from Jenkins.
#!/bin/bash

rake assets:precompile
rspec
return_code=$?

if [[ $return_code != 0 ]] ; then
  echo -e "Tests failed inside docker."
  exit $return_code
fi
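An alternative to grepping the output for a magic string (a sketch, not what this setup uses) is to run the container detached and ask Docker for its exit status with docker wait, which blocks until the container stops and prints its exit code:

# Run the CI entrypoint detached and retrieve the container's exit code.
CID=$(docker run -d -privileged -name ruby_app \
  -link rabbitmq:rabbitmq -link mysql:mysql -link solr:solr \
  -entrypoint="/opt/ci.sh" ruby_app)
rc=$(docker wait $CID)
if [[ $rc != 0 ]] ; then
  echo "Tests failed inside docker (exit code $rc)."
  exit $rc
fi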
The last step of the setup is to have the staging server detect when a new image is pushed to the local repository.
Staging server
For our purposes, I wrote a short bash script, which runs every 30 minutes via cron and checks whether a new image has been pushed to the private Docker repository (a sample crontab entry is shown after the script):
#!/bin/bash

REPO=188.226.XXX.XX:5000

CURRENT_IMAGE_ID=`docker images | grep -w ruby_app | awk '{ print $3 }'`
docker pull $REPO/ruby_app
NEW_IMAGE_ID=`docker images | grep -w ruby_app | awk '{ print $3 }'`

if [ "$CURRENT_IMAGE_ID" == "$NEW_IMAGE_ID" ]
then
  echo -n "Image ids are equal. Therefore we have no new image."
else
  echo -n "Image ids are not equal. Therefore we should stop old image and start new one."
  docker kill ruby_app
  docker rm ruby_app
  docker run -privileged -p 80 -p 443 -name ruby_app \
    -link rabbitmq:rabbitmq -link mysql:mysql -link solr:solr \
    -volumes-from ruby_app_data -t ruby_app
fi
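The crontab entry itself is a single line (the script path and log file below are just examples):

# Check for a new ruby_app image every 30 minutes.
*/30 * * * * /opt/deploy/check_for_new_ruby_app_image.sh >> /var/log/ruby_app_deploy.log 2>&1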
If a new image is detected, the old container is stopped and removed, and the new container is started. ruby_app_data is a data-only container, allowing us to persist data when updating the ruby_app container.
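The post does not show how ruby_app_data is created; a minimal sketch of the data-only container pattern would be the following, where the volume path /data is an assumption and should match whatever paths ruby_app needs to persist:

# Create a container whose only job is to own the /data volume; it never has
# to run, it just has to exist so that -volumes-from can reference it.
docker run -v /data -name ruby_app_data busybox true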
However, this results in a short downtime while the app server inside ruby_app is starting up. Ideally we would have Hipache installed and running, and switch ruby_app containers as soon as the new one is ready to process requests. This could easily be done via an application like Shipyard. However, configuring an "Application" inside Shipyard (which uses Hipache in the background) resulted in errors for me:
(worker #96) staging: backend #0 reported an error ({"bytesParsed":0, "code":"HPE_INVALID_CONSTANT"}) while handling request for /
UPDATE: Because of this, I installed Hipache with a Redis server manually on the staging server. I have the following bash script, which is run periodically by cron and checks the health of the new container. As soon as the new container is up and responds to HTTP requests, it is loaded into Hipache:
#!/bin/bash

# The new_container marker file, as well as NEW_CONTAINER_GATEWAY and
# NEW_CONTAINER_PORT_443, are presumably written by the deployment script
# when it starts a new ruby_app container (not shown in this post).
if [ -f new_container ]; then
  echo -e "New application container has been started, but not loaded in Hipache yet. \n"

  RESPONSE_CODE=`wget --no-check-certificate -S "https://$NEW_CONTAINER_GATEWAY:$NEW_CONTAINER_PORT_443/" 2>&1 | grep "HTTP/" | awk '{print $2}'`

  if [ "$RESPONSE_CODE" == "200" ]
  then
    redis-cli rpop frontend:example.com
    redis-cli rpush frontend:example.com https://$NEW_CONTAINER_GATEWAY:$NEW_CONTAINER_PORT_443
    echo -e "Successfully pushed new container IP to Hipache's Redis \n"
    # Clear the marker so the same container is not loaded again on the next run.
    rm new_container
  else
    echo -e "Response code is different from 200. \n"
  fi
else
  echo -e "No new container detected. Nothing to do. \n"
fi
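For completeness, the frontend has to exist in Redis before this script can rotate backends into it. In Hipache's Redis schema, frontend:<domain> is a list whose first element is an identifier and whose remaining elements are backend URLs, so the one-time setup is roughly as follows (example.com and the backend address are placeholders):

# Register the frontend once, then add the first backend.
redis-cli rpush frontend:example.com example.com
redis-cli rpush frontend:example.com https://$NEW_CONTAINER_GATEWAY:$NEW_CONTAINER_PORT_443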
Feel free to post any questions or improvements you might have. The source code shown above can be found in this repository on GitHub.
References
Continuous Delivery with Docker and Jenkins - part II
Using Docker To Run Ruby Rspec CI In Jenkins
Persistent volumes with Docker – Data-only container pattern