If you’ve been following along, you can continue with the code from part 2, or you can clone the repo.
Clone the Repo:
Note: if cloning, you’ll need to run npm install after grabbing the code.
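In shell terms, the setup looks something like the following (the repository URL and directory are placeholders, since they depend on where you’re cloning from):

```shell
# Clone the part 2 code (replace <repo-url> and <project-dir> with your values)
git clone <repo-url>
cd <project-dir>

# Install the JavaScript dependencies
npm install
```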
Create a branch
Now let’s create a branch for today’s work.
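For example (the branch name here is just my suggestion):

```shell
# Create and switch to a branch for this part of the series
git checkout -b part-3-deploy
```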
Setting up our infrastructure
Much of today’s post is going to happen outside of our code base and involves setting up the infrastructure pieces required to get the deployments going. We’ll be using Digital Ocean, so you’ll need an account there, and be aware you will incur some (albeit minor) Digital Ocean fees by following along with today’s post. Make sure to destroy your resources afterwards to avoid continued charges!
As a final note before we get going… we should probably be using Terraform or another “infrastructure as code” solution to provision our resources, but the Digital Ocean set-up is pretty simple, so instead of adding yet another tool to this series of posts I decided to stick with a manual set-up.
Creating the Kubernetes cluster
The first piece of infrastructure is our Kubernetes cluster.
From within Digital Ocean, select Create –> Clusters.
On the Create a cluster page, select appropriate settings; I’ve selected the region closest to me and a single node to keep costs to a minimum. I’ve also specified a cluster name, but leaving the default is fine as well.
After filling in the form, click Create Cluster. A Getting Started dialog will appear; ignore it for now and just wait for the cluster to be created. Now it is time to create the database.
Creating the Database cluster
From within Digital Ocean, select Create –> Databases.
On the Create a database cluster page, select PostgreSQL as the engine and an appropriate region. Optionally, you can rename the cluster (I kept the default).
After the cluster has been created, navigate to the Users & Databases tab. We need to add a user and database for our application.
Now move over to the Settings tab and lock down access to the database by adding our Kubernetes cluster as a trusted source.
Accessing our Kubernetes cluster
Finally, we need to be able to access our Kubernetes cluster from our local machine; for this we’ll use Digital Ocean’s doctl command-line tool. Once installed, we can use doctl to download and save the configuration file for the cluster. See here for more information if you’re curious about what else you can do with doctl.
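Assuming the cluster name from earlier, the doctl command is along these lines (it merges the cluster’s credentials into your local kubeconfig):

```shell
# Download and merge the cluster's kubeconfig into ~/.kube/config
doctl kubernetes cluster kubeconfig save phx-gl-ci-cd-cluster
```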
And with that, we are ready to integrate with GitLab.
Integrating the cluster with GitLab
Assuming kubectl is installed, we’re ready to go.
Log in to GitLab and navigate to Operations –> Kubernetes. Click Add Kubernetes cluster and then select the Add existing cluster tab.
Let’s fill in the form fields one by one:
Kubernetes cluster name: I’ve used phx-gl-ci-cd-cluster to match the cluster name in Digital Ocean, but this isn’t necessary; you can use any name you want, and the Digital Ocean and GitLab names do not need to match.
API URL: this is the URL of our Digital Ocean Kubernetes cluster. We can retrieve it via:
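One way to get it is via kubectl, which prints the API endpoint as part of the cluster info:

```shell
# The API URL is the "Kubernetes master" address in the output
kubectl cluster-info
```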
CA Certificate: we need to retrieve the certificate from the Digital Ocean cluster. First, retrieve the default-token value.
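A sketch of the lookup, assuming the default namespace:

```shell
# List secrets; note the name of the default-token-<xxxxx> secret
kubectl get secrets
```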
Now run the below, replacing the default-token value with the value from above.
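Based on the equivalent step in GitLab’s cluster-integration docs, the command looks something like this (replace default-token-xxxxx with the secret name from the previous step):

```shell
# Extract and decode the cluster's CA certificate from the secret
kubectl get secret default-token-xxxxx -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
```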
Enter the full certificate including the BEGIN / END lines.
Create the following file outside of the project directory (or delete it after, as we don’t need it in source control):
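For reference, GitLab’s documentation suggests a service account and cluster role binding along these lines (saved as, say, gitlab-admin-service-account.yaml; the gitlab-admin name is a convention, not a requirement):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab-admin
    namespace: kube-system
```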
Now apply the service account specified in the file to the cluster:
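Assuming the file name used above:

```shell
# Create the service account and role binding on the cluster
kubectl apply -f gitlab-admin-service-account.yaml
```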
Finally retrieve the service token value.
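Following GitLab’s docs, one way to pull the token out (assuming the gitlab-admin service account name from above):

```shell
# Describe the gitlab-admin secret; copy the token: value into the GitLab form
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')
```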
The other fields we can leave as is.
The form should now look something like:
Once the form is complete, click Add Kubernetes cluster.
You will now be presented with the cluster page:
Install Helm Tiller, and once that completes, install Ingress. Once Ingress has installed, set the Base domain based on the Ingress Endpoint, and click Save changes.
That’s it, we are done integrating, time to add a deployment to our pipeline!
Creating a Staging deployment
Back over in our code, the first step is to update the .gitlab-ci.yml file.
We’ve added a new stage and a local reference, in addition to a reference to the GitLab auto-deploy image. This image will be used in our deployment stages. Since we anticipate we’ll have more than just a staging deployment (i.e. we’ll also have a production deployment), we define the reference in our main file instead of duplicating it in each deployment file.
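A sketch of what the relevant additions to .gitlab-ci.yml might look like; the stage name and include path are illustrative, and in practice you’d pin the auto-deploy image to a specific version tag:

```yaml
# Shared reference to GitLab's auto-deploy image, used by the deployment jobs
.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:latest"

stages:
  - test
  - build
  - deploy-staging

include:
  - local: /deploy-staging.yml
```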
Time to create the deploy-staging.yml file.
The above is largely sourced from the GitLab Auto-DevOps yaml file. The main thing to be aware of is the variables section. As per the comments, variables prefixed with K8S_SECRET will be passed to our container; this is how we specify the values required by our Docker image. In theory we could hard-code these values (and I’ve done so above with the K8S_SECRET_PORT variable), but we don’t want to have to change our scripts to alter these values (for instance, if for some reason our database host changes). Also, some of these values are sensitive, so we wouldn’t want to check them into source control. As a result we’re using variables to set the variables (lol)… and GitLab has us covered as to how to handle this.
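As a rough sketch of the shape of deploy-staging.yml (based on the Auto-DevOps template; the script steps, environment URL, and container variable names are assumptions, with the K8S_SECRET_ variables mapping GitLab CI variables through to the container):

```yaml
deploy-staging:
  stage: deploy-staging
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:latest"
  variables:
    # Variables prefixed with K8S_SECRET_ are passed into the deployed container
    K8S_SECRET_PORT: 5000                       # hard-coded, as noted above
    K8S_SECRET_DB_HOST: "$STAGING_DB_HOST"
    K8S_SECRET_DB_PORT: "$STAGING_DB_PORT"
    K8S_SECRET_DB_INSTANCE: "$STAGING_DB_INSTANCE"
    K8S_SECRET_DB_USER: "$STAGING_DB_USER"
    K8S_SECRET_DB_PASSWORD: "$STAGING_DB_PASSWORD"
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy create_secret
    - auto-deploy deploy
  environment:
    name: staging
    url: "http://$CI_PROJECT_PATH_SLUG-staging.$KUBE_INGRESS_BASE_DOMAIN"
```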
Adding environment specific variables
Back in GitLab, navigate to Settings –> CI/CD.
We need to add variables for the 6 dynamic variables in the deploy-staging.yml file. Once done, we should have something that looks like:
The values for our variables are:
We can generate this via:
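Given this is a Phoenix application, the value being generated here is presumably the secret key base, which Phoenix can produce with:

```shell
# Generate a random secret key base for the app
mix phx.gen.secret
```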
Everything else we can get from the Digital Ocean database cluster.
We can grab the STAGING_DB_HOST and STAGING_DB_PORT values from the overview section.
Then we can grab the STAGING_DB_INSTANCE, STAGING_DB_USER and STAGING_DB_PASSWORD values from the users and databases section.
With our new yaml files and the variables in place, we should be good to go… let’s give it a try.
We should see a new stage and job in our pipeline:
And after a few minutes our jobs should all succeed:
Click on the job number of the staging job to see the details of the job. From the details we can get the URL of our staging deployment.
Let’s check it out:
Creating a Production deployment
We can now follow similar steps to create a production deployment.
Note: with a real application you might want to create a separate Kubernetes cluster for your different environments. This can be done in GitLab (Multiple Kubernetes clusters), but requires a premium account so we’re sticking with a single cluster.
Let’s add a new stage to the main .gitlab-ci.yml file.
And then create the deploy-production.yml file.
This is almost exactly the same as deploy-staging.yml, with a few differences:
- We only run this stage on pushes to master (via only: master).
- We’ve set a manual trigger for this stage (via when: manual).
- We’re using different variables; instead of $STAGING_, we’re prefixing our variables with $PRODUCTION_.
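Putting those differences together, a sketch of deploy-production.yml (same caveats as the staging sketch; the key differences are commented):

```yaml
deploy-production:
  stage: deploy-production
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:latest"
  only:
    - master          # run only on pushes to master
  when: manual        # require a manual trigger
  variables:
    K8S_SECRET_PORT: 5000
    K8S_SECRET_DB_HOST: "$PRODUCTION_DB_HOST"
    # ...the remaining K8S_SECRET_ variables follow the $PRODUCTION_ prefix pattern
  environment:
    name: production
```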
We of course are going to need to create the production variables in GitLab as we did for the staging variables. And we also want to set up a production database instance, user and password on our Digital Ocean database cluster.
I’m not going to provide a walk-through of the above, as we’ve already done so when setting up staging.
However, once the new database and variables are in place, we can give it a go.
If we have a look at our pipeline, the production stage is not showing up. 😖… what is going on?
This is actually the expected behaviour; remember, we specified that the production stage should only run on pushes to master.
So let’s merge our code into our master branch.
When prompted for a merge comment you can leave it as is.
And now we can push to master.
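In git terms, something like the following (the working branch name is illustrative):

```shell
# Merge the working branch into master and push it up to GitLab
git checkout master
git merge part-3-deploy
git push origin master
```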
We now see our new stage in the pipeline.
After a few minutes all our jobs will complete, but notice the production job doesn’t run automatically.
Since we set this job to manual, we need to… you guessed it, manually run it. After doing so, we have a new “production” environment deployed.
So that’s it for this series of posts on GitLab. We’ve got a pretty good system set-up that allows us to test, build and deploy our code with relative ease.
Thanks for reading, hope you enjoyed the post!