I recently had the opportunity to set up a Phoenix CI/CD pipeline on GitLab. I was super impressed with how easy it was to get everything up and running. In this post we’ll look at how to go about compiling our code and running our tests on GitLab. In subsequent posts, we’ll deploy our application via the GitLab Kubernetes integration. Exciting stuff! Let’s get at it!
We’ll start off with an existing Phoenix application, so the first step is to clone the repo:
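The exact repository isn’t shown here, so the URL below is a placeholder — substitute your own project:

```shell
# placeholder URL — substitute the actual repository
git clone https://gitlab.com/<your-user>/<your-project>.git
cd <your-project>
```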
Now let’s create a branch for today’s work.
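The branch name here is my own choice — use whatever fits:

```shell
git checkout -b gitlab-ci
```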
And let’s run our existing application to see what we are working with; first we’ll need to install dependencies.
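Dependencies come in via mix:

```shell
mix deps.get
```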
… then assets…
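Assuming the standard Phoenix asset setup under `assets/`:

```shell
cd assets
npm install
cd ..
```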
And now we need to create the database… Note: before running
`mix ecto.setup` you may need to update the username / password database settings in
`config/dev.exs` to match your local Postgres settings, i.e.
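Something along these lines — the app and module names are assumptions, yours will differ:

```elixir
# config/dev.exs — adjust username / password to your local Postgres
config :my_app, MyApp.Repo,
  username: "postgres",
  password: "postgres",
  database: "my_app_dev",
  hostname: "localhost",
  pool_size: 10
```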
With that all out of the way, let’s fire things up.
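```shell
mix ecto.setup
mix phx.server
```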
If we navigate to http://localhost:4000/ we’ll see our application.
Nothing fancy, just the standard project you get when running `mix phx.new`. I’ve also added some simple scaffolding so we have some database-related tests and functionality to run against GitLab.
Let’s move on to setting up continuous integration on GitLab.
Adding CI to GitLab
Setting up a CI pipeline on GitLab is dead simple. GitLab will look for a `.gitlab-ci.yml` file in the root of the project, and then create / run a pipeline based on the contents of this file.
So as a first step, let’s see if we can get GitLab to build our code.
We’ll create both the `.gitlab-ci.yml` file mentioned above and also a `ci` directory. We’ll keep the `.gitlab-ci.yml` file pretty sparse, calling into files we’ll place in the `ci` directory. I find this keeps things a little more organized versus having one huge `.gitlab-ci.yml` file.
Let’s create our files and folders.
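```shell
touch .gitlab-ci.yml
mkdir ci
touch ci/build.yml
```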
We’ll start with `.gitlab-ci.yml`.
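Something along these lines (the `include: local:` syntax requires a reasonably recent GitLab):

```yaml
stages:
  - build

include:
  - local: ci/build.yml
```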
Super simple: we indicate the stages (currently just `build`) of our pipeline via the `stages` section, and then include a reference to our `build.yml` file. Let’s fill in `build.yml`.
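A sketch of `ci/build.yml` consistent with the description that follows — the Elixir image tag is an assumption:

```yaml
compile:
  stage: build
  image: elixir:1.9
  cache:
    paths:
      - deps/
      - _build/
  script:
    - mix local.hex --force
    - mix local.rebar --force
    - mix deps.get
    - mix compile --warnings-as-errors
```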
The `build.yml` file is also pretty simple. We indicate this is part of the build stage via the `stage: build` line. We then indicate the image for our build.
We’ve added a `cache` section to speed up subsequent runs of the pipeline. When possible, GitLab will use the cached dependencies and build output instead of building everything from scratch.
The `script` section is where we specify what commands we want GitLab to execute. First off we need to ensure `hex` and `rebar` are available (thanks to Dan Ivovich’s excellent post for flagging this up), as neither is included in the cache. Next we grab our dependencies via `mix deps.get`, and finally run `mix compile`, passing in the `--warnings-as-errors` flag as we don’t want our pipeline to pass if we have compiler warnings.
Let’s test things out by pushing to GitLab:
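Assuming the branch from earlier (the name is my own placeholder):

```shell
git add .
git commit -m "Add GitLab CI configuration"
git push -u origin gitlab-ci
```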
Since we now have a `.gitlab-ci.yml` file in our project, GitLab picks this up, and when we navigate to CI/CD, Jobs in GitLab, we see our build stage running the `compile` job we specified in `build.yml`.
Refreshing the page after a few minutes will show the job as passing. Note, you can also view the details of a job by clicking the job number.
If we re-run the job, we can see our caching seems to be doing the trick:
Our build stage looks to be all good; let’s move on to the test stage.
We’ll perform both testing and linting in this stage. Let’s start by setting up testing.
Setting up tests
The first step is to add a new stage and local reference in `.gitlab-ci.yml`.
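The updated file might look like:

```yaml
stages:
  - build
  - test

include:
  - local: ci/build.yml
  - local: ci/test.yml
```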
Simple. Now let’s create `ci/test.yml`.
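A sketch matching the description that follows — the image tag and variable names are assumptions:

```yaml
test:
  stage: test
  image: elixir:1.9
  services:
    - postgres:latest
  variables:
    POSTGRES_HOST: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    MIX_ENV: test
  cache:
    paths:
      - deps/
      - _build/
  script:
    - mix local.hex --force
    - mix local.rebar --force
    - mix deps.get
    - mix ecto.setup
    - mix coveralls
```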
OK, a little bit more is going on here. We set the stage to `test` and specify that GitLab needs to use the `postgres` service. This provides a database to run our tests against.
We’re also setting some variables in the `variables` section for the database configuration. We’ll need to update `config/test.exs` to make use of these.
Similar to the build stage, we use the cache in order to avoid re-building everything.
Finally, in the `script` section we set up our database (`mix ecto.setup`) and then run our tests via excoveralls.
Before testing this out on GitLab, we’re going to need to set up coveralls in our project. Before dealing with coveralls, let’s first get the test configuration changes out of the way.
Update the test config
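A sketch of the change, with app and module names assumed, and variable names matching what the CI job would provide:

```elixir
# config/test.exs — fall back to the original hard-coded values locally
config :my_app, MyApp.Repo,
  username: System.get_env("POSTGRES_USER") || "postgres",
  password: System.get_env("POSTGRES_PASSWORD") || "postgres",
  hostname: System.get_env("POSTGRES_HOST") || "localhost",
  database: "my_app_test",
  pool: Ecto.Adapters.SQL.Sandbox
```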
All we’ve done is replace the hard-coded database configuration values with environment variables that fall back to the original hard-coded values. GitLab will provide the appropriate environment variables during the test stage.
Let’s move on to getting coveralls installed. We need to update both the `deps` and `project` sections of our `mix.exs`.
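Roughly like so — the excoveralls version is an assumption:

```elixir
# mix.exs (relevant parts)
def project do
  [
    # ...existing settings...
    test_coverage: [tool: ExCoveralls],
    preferred_cli_env: [coveralls: :test]
  ]
end

defp deps do
  [
    # ...existing deps...
    {:excoveralls, "~> 0.12", only: :test}
  ]
end
```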
Now we need to get the new dependency.
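```shell
mix deps.get
```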
We should now be able to run our test coverage.
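```shell
mix coveralls
```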
Looking good, but let’s add some coverage configuration via a `coveralls.json` file.
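For example — the skipped paths below are illustrative, yours will depend on your project:

```json
{
  "skip_files": [
    "test/",
    "lib/my_app_web/channels/",
    "lib/my_app_web/views/error_helpers.ex"
  ],
  "coverage_options": {
    "minimum_coverage": 90
  }
}
```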
The `skip_files` section ignores any files we don’t expect to write tests against, and which, as a result, we don’t want counting against our coverage percentage. In the `coverage_options` we specify a minimum coverage, so the `coveralls` task will fail if we don’t have at least 90% test coverage.
With our newly ignored files, if we run `mix coveralls` again, we’ll see we are well within our coverage boundary.
Great! So let’s do a push to GitLab and see what happens.
Back in GitLab, if you refresh the page after a few minutes, you’ll see our `test` stage / job.
Viewing the pipeline in GitLab we’ll also see the test stage has been added to the pipeline.
So this is pretty fantastic: we’re already in a good spot in terms of our continuous integration set-up.
As a final step for today, let’s add some linting to the test stage.
Setting up linting
We’re going to use both `mix format` and credo for linting.
Let’s see if we currently have any formatting issues:
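```shell
mix format --check-formatted
```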
Looks like we do, so we’ll run `mix format` to resolve those.
Now let’s install credo.
Add the credo dependency to `mix.exs`.
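The version below is an assumption:

```elixir
# mix.exs deps
{:credo, "~> 1.1", only: [:dev, :test], runtime: false}
```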
And grab the dependency.
We’ll add a config file for credo while we are at it.
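A minimal `.credo.exs` might look like this — the exact contents here are my own sketch, not necessarily what the original post used:

```elixir
# .credo.exs
%{
  configs: [
    %{
      name: "default",
      files: %{
        included: ["lib/", "test/"],
        excluded: []
      },
      strict: true
    }
  ]
}
```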
Pretty self-explanatory. Now let’s run credo and see if we need to make any updates to our code.
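```shell
mix credo
```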
Looks like we have a missing `moduledoc` tag. Let’s leave this for now, so we see an example of our pipeline failing on a push to GitLab.
With credo configured, all that remains is to add a lint job to `ci/test.yml`.
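Something like the following, alongside the existing test job (image tag assumed, as before):

```yaml
lint:
  stage: test
  image: elixir:1.9
  cache:
    paths:
      - deps/
      - _build/
  script:
    - mix local.hex --force
    - mix local.rebar --force
    - mix deps.get
    - mix format --check-formatted
    - mix credo
```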
Pretty simple: the job just runs `mix format` (with the `--check-formatted` flag, so it fails rather than rewrites) and then `mix credo`.
Let’s see what happens when we push to GitLab.
Once our jobs complete we see:
We also receive an email which contains details of the failed job.
And of course, we can also see the detailed output of a job by clicking the job number in GitLab.
In order for our pipeline to get back to a passing state, we need to add a `moduledoc` tag to `product.ex`. I’ll leave that to you as an exercise if you wish.
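The fix would look something like this — the module name is an assumption based on the scaffolding:

```elixir
defmodule MyApp.Catalog.Product do
  @moduledoc """
  The Product schema.
  """

  # ...existing schema and changeset code...
end
```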
With a pretty minimal amount of effort we’ve managed to set up the “CI” portion of our pipeline. Pretty sweet!
Thanks for reading, hope you enjoyed the post!