In part 1 of this series, we created a release of our application and packaged it into a docker image. Part 2 saw us run through some manual one-time steps. We’re onto our final installment, and today we’ll finally get around to deploying our application!
As mentioned previously, we’ll make use of Terraform for provisioning our infrastructure, so you’ll need to have it installed in order to follow along. A few advantages of using Terraform:
- Our infrastructure is defined in code, so we can easily see what has been provisioned, something that is more difficult when defining things manually through the AWS console.
- Our infrastructure can be easily replicated and expanded to multiple environments.
- Changes to our infrastructure can be tracked in source control.
- Terraform supports multiple providers, so if we wanted to move to GCP for example, we would be dealing with the same configuration language.
We won’t be doing a comprehensive Terraform tutorial; prior knowledge of Terraform will be useful but is not necessary. The Terraform documentation is fairly extensive, so hopefully any points of confusion can be cleared up by referencing the docs.
So… let’s get going. The first thing we want to concentrate on is getting our basic scripts set up so we can easily handle different environments. To facilitate this we’ll start out by structuring our Terraform scripts around a
makefile and environment specific directories. Each environment (e.g. QA, Staging, etc.) will have its own directory.
Setting up the initial Terraform scripts
We’ll start by creating a new directory for our scripts.
Next we’ll create some files and folders within this directory.
We’ll assume we’re going to be deploying a
QA environment for our application, thus we create the
qa directory under environments. Essentially anything specific to a particular environment will be placed under the appropriate
environment directory. Items that don’t vary per environment will be placed in the main directory.
Let’s create a minimal starting point for our deployment.
Your directory should now look like:
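A sketch of one possible layout (the file names match the files we’ll be creating throughout this post):

```
terraform/
├── Makefile
├── main.tf
├── variables.tf
├── backend.tf
└── environments/
    └── qa/
        └── terraform.tfvars
```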
We’ll start by implementing the Makefile, which is how we’ll interact with Terraform. Essentially all it does is act as a wrapper around the Terraform commands. Prior to executing the desired Terraform command, it copies over the appropriate environment specific files. Simple!
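A minimal sketch of such a Makefile might look like the following (the target names and the ENV variable are illustrative, not prescriptive):

```make
ENV ?= qa

.PHONY: init plan apply destroy

# Copy the environment specific files in before running any Terraform command;
# Terraform automatically picks up a terraform.tfvars in the working directory.
init:
	cp environments/$(ENV)/terraform.tfvars .
	terraform init

plan: init
	terraform plan

apply: init
	terraform apply

destroy: init
	terraform destroy
```

With this in place, running something like make apply ENV=qa copies the QA variables in and then runs the corresponding Terraform command.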
Next let’s move onto our common Terraform files.
Terraform needs to know the
provider we are using. In this case we are deploying to
AWS so we specify
aws as the provider. For the
aws provider we also need to specify the region. Instead of hard-coding this value, we specify it with a variable; let’s define our variables next.
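A minimal main.tf along these lines (assuming a variable named region, defined next):

```hcl
# main.tf
provider "aws" {
  region = var.region
}
```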
We define items in the
variables.tf file that we don’t want to hard-code within the scripts. These are values that we feel might change between projects or environments; the idea being that we are trying to make our scripts as generic as possible. The convention we are following is that the
variables.tf file is static and the values in it shouldn’t be changed. Any changes to the values of the variables are done via the environment specific variables file (
terraform.tfvars). At the top of
variables.tf we are defining some environment specific variables which by design don’t have defaults. These are values that will change between environments and as a result we want to require they be set in the environment specific files; this is accomplished by not providing a default for them in variables.tf.
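A sketch of what variables.tf might contain (the variable names here are assumptions for illustration):

```hcl
# variables.tf

# Environment specific values - intentionally no defaults, so Terraform will
# refuse to run unless they are set in the environment's terraform.tfvars.
variable "environment" {}
variable "ecr_image_uri" {}
variable "secret_key_base_arn" {}
variable "db_username_arn" {}
variable "db_password_arn" {}

# Values with sensible defaults that can still be overridden per environment.
variable "region" {
  default = "us-east-1"
}

variable "app_name" {
  default = "my_app"
}
```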
With that in mind, let’s move onto our environment specific variables.
We’ve filled in our 5 required environment specific variables. The
ecr_image_uri can be retrieved from the ECR repository we set up in part 2.
The other values are retrieved from the AWS Secrets, also set up in part 2.
You’ll need to swap things out with your own values.
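For example, environments/qa/terraform.tfvars might look like the following (every value here is a placeholder; substitute your own ECR URI and Secrets Manager ARNs from part 2):

```hcl
# environments/qa/terraform.tfvars
environment         = "qa"
ecr_image_uri       = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my_app"
secret_key_base_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:my_app_secret_key_base-AbCdEf"
db_username_arn     = "arn:aws:secretsmanager:us-east-1:123456789012:secret:my_app_db_username-AbCdEf"
db_password_arn     = "arn:aws:secretsmanager:us-east-1:123456789012:secret:my_app_db_password-AbCdEf"
```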
Moving onto the
backend.tf file, we won’t bother with a remote backend, but this is something that you would typically want to do. Without a remote backend the state of your infrastructure is stored on your local computer. This is not ideal as it makes it difficult to share scripts / state with other people in your organization. Typically you would create a bucket on
S3 to store your state; we’ll fill in an example of what the backend configuration might look like, but leave it commented out.
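Something along these lines (the bucket name is a placeholder):

```hcl
# backend.tf

# Example remote backend, left commented out. Uncomment and adjust the
# bucket name once you have an S3 bucket set aside for Terraform state.
# terraform {
#   backend "s3" {
#     bucket = "my-app-terraform-state"
#     key    = "qa/terraform.tfstate"
#     region = "us-east-1"
#   }
# }
```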
We now have a minimal Terraform configuration, so let’s give it a go!
Terraform needs to authenticate against AWS in order to interact with it. One way of doing so is to export the
Access Key and
Secret values we downloaded as a
.csv in part 2. We can provide them as environment variables on the command line and they will be picked up by Terraform.
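For example (the key values below are the standard AWS documentation placeholders, not real credentials):

```shell
# Substitute the values from the credentials .csv downloaded in part 2.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```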
Now let’s throw a command at our Makefile. The apply command is used to apply our changes.
Success! Since we have not yet defined any infrastructure, nothing is getting created, but we can see our scripts are working.
Defining our infrastructure
We’re now ready to start defining our infrastructure! We’ll build things up iteratively, and run Terraform periodically to make sure things are working as we go along.
An overview of the infrastructure we’ll be provisioning
We’ll discuss things in more detail as we tackle each item, but the core pieces of infrastructure we’ll be creating are:
- A VPC: provides a virtual network for our application.
- Security Groups: configure access to various parts of our infrastructure.
- Application Load Balancer: the load balancer for our application.
- RDS: a cloud based PostgreSQL instance.
- ECS / EC2: ECS is a container orchestration service which will launch our docker image onto EC2… which in turn provides the virtual server infrastructure.
The first piece of infrastructure we will create is a Virtual Private Cloud (VPC). We won’t go into the details of what a VPC is, but basically it provides a virtual network for our application. You can read more about VPCs here.
For the most part I find the default Terraform AWS modules work well, but creating VPCs is a little tricky / tedious, so I use the community VPC module. Let’s see what it looks like.
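A sketch of what a vpc.tf using the community module might look like (the variable names are illustrative):

```hcl
# vpc.tf
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "${var.app_name}_${var.environment}"
  cidr = var.vpc_cidr

  azs            = var.availability_zones
  public_subnets = var.public_subnets

  tags = {
    Application = var.app_name
    Environment = var.environment
  }
}
```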
Pretty straightforward: we specify the
source to be the community module. We’re then creating a simple VPC that basically just sets up our availability zones and subnets. We tag / name things based on our application and environment variables so if we view things in the AWS console it will be obvious what application and environment it applies to.
We’re using some new variables in
vpc.tf, so let’s append the following to variables.tf.
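The additions might look like this (the names, CIDR ranges, and availability zones are illustrative defaults):

```hcl
# Appended to variables.tf
variable "vpc_cidr" {
  default = "10.0.0.0/16"
}

variable "availability_zones" {
  default = ["us-east-1a", "us-east-1b"]
}

variable "public_subnets" {
  default = ["10.0.1.0/24", "10.0.2.0/24"]
}
```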
We should now be able to create the VPC.
After entering yes at the prompt, the VPC gets created.
Next we will create some security groups for our application. This will allow us to configure access to the server we’ll be provisioning as well as allow our application to interact with the database instance we’ll be creating.
We’re creating four security groups: one to allow access to our load balancer from the internet; one to allow the load balancer to access our application; one to allow us to SSH into our EC2 instances; and finally one which allows our application to access the RDS instance.
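A sketch of a security_groups.tf covering those four groups (the resource names and ports are assumptions; in particular the host port 80 for the application and the PostgreSQL port 5432):

```hcl
# security_groups.tf

# Allow HTTP traffic from the internet to the load balancer.
resource "aws_security_group" "alb_sg" {
  name   = "${var.app_name}_${var.environment}_alb_sg"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Allow the load balancer to reach the application on the EC2 instances.
resource "aws_security_group" "web_sg" {
  name   = "${var.app_name}_${var.environment}_web_sg"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port       = 0
    to_port         = 65535
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Allow SSH access to the EC2 instances (tighten the CIDR for real use).
resource "aws_security_group" "ssh_sg" {
  name   = "${var.app_name}_${var.environment}_ssh_sg"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Allow the application instances to reach the PostgreSQL RDS instance.
resource "aws_security_group" "rds_sg" {
  name   = "${var.app_name}_${var.environment}_rds_sg"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.web_sg.id]
  }
}
```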
Depending on your use case, you might want to adjust these settings. For instance, maybe you want to restrict which IPs have SSH access. Or perhaps when initially building out your infrastructure you want the EC2 servers directly accessible via the internet, in order to test things without going through the load balancer. Adjusting the
ingress settings would accomplish this. See the Terraform aws_security_group documentation for more details.
In any case, let’s apply the above settings.
Fantastic! Next let’s move onto the load balancer.
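A sketch of an alb.tf (the resource names are assumptions, and the health check path in particular will depend on your application):

```hcl
# alb.tf
resource "aws_alb" "alb" {
  name            = "${var.app_name}-${var.environment}-alb"
  subnets         = module.vpc.public_subnets
  security_groups = [aws_security_group.alb_sg.id]
}

resource "aws_alb_target_group" "target_group" {
  name     = "${var.app_name}-${var.environment}-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.vpc_id

  health_check {
    path = "/"
  }
}

resource "aws_alb_listener" "listener" {
  load_balancer_arn = aws_alb.alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    target_group_arn = aws_alb_target_group.target_group.arn
    type             = "forward"
  }
}
```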
We don’t even need any new variables; let’s provision the load balancer (note: this will take a few minutes).
Next we want to set up our database instance. Prior to doing so, however, we need to add a Terraform script for retrieving the values we stored in AWS Secrets Manager. We will set the database user and password based on the AWS Secrets values.
Retrieving the values we set up in part 2 is pretty simple: we just need to use the Terraform
aws_secretsmanager_secret_version data source to provide a reference to the 3 values we want to pull out of Secrets Manager.
We’ll create a
secrets.tf file for this.
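Something like the following, assuming three separate secrets whose ARNs were supplied via terraform.tfvars (when a secret is created as key/value pairs in the console, Secrets Manager stores it as a JSON string, which is why jsondecode comes into play later):

```hcl
# secrets.tf
data "aws_secretsmanager_secret_version" "secret_key_base" {
  secret_id = var.secret_key_base_arn
}

data "aws_secretsmanager_secret_version" "db_username" {
  secret_id = var.db_username_arn
}

data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = var.db_password_arn
}
```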
With the above we’re referencing the variables we set up in variables.tf.
With this in place, we can now move on to provisioning the database.
Let’s create a new Terraform file for defining our RDS infrastructure via the aws_db_instance resource.
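A sketch of such an rds.tf (the key names inside the jsondecode calls are assumptions about how the secrets were stored in part 2):

```hcl
# rds.tf
resource "aws_db_subnet_group" "db_subnet_group" {
  name       = "${var.app_name}_${var.environment}_db_subnet_group"
  subnet_ids = module.vpc.public_subnets
}

resource "aws_db_instance" "db" {
  identifier        = "${var.app_name}-${var.environment}"
  engine            = "postgres"
  instance_class    = var.db_instance_class
  allocated_storage = var.db_storage

  # Key/value secrets are stored as JSON strings, so decode them to get at
  # the individual values.
  username = jsondecode(data.aws_secretsmanager_secret_version.db_username.secret_string)["db_username"]
  password = jsondecode(data.aws_secretsmanager_secret_version.db_password.secret_string)["db_password"]

  db_subnet_group_name   = aws_db_subnet_group.db_subnet_group.name
  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  publicly_accessible    = false
  skip_final_snapshot    = true
}
```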
Nothing too complicated here. A few points to make note of:
- We’re using jsondecode in combination with the items we set up in secrets.tf to grab our database user and password values.
- We don’t want the database to be publicly accessible, so have set publicly_accessible to false. We use our rds_sg security group to provide our application access to the database.
We need to add a few more variables prior to applying the script.
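The additions might be (the defaults are illustrative):

```hcl
# Appended to variables.tf
variable "db_instance_class" {
  default = "db.t2.micro"
}

variable "db_storage" {
  # Allocated storage in GB.
  default = 20
}
```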
Note: it could be argued the above values would be better off not having defaults; forcing them to be defined in the environment specific
terraform.tfvars file. For instance, maybe
db.t2.micro instances are used on
QA and other testing environments, whereas a
db.t2.medium instance is used on production. Deciding what should and shouldn’t have a default is a judgement call, and depends on the particular situation.
Let’s provision our database (again this will take some time).
Fantastic, we have a database! We are getting close to having a deployed application!
Prior to provisioning the EC2 instance and the ECS containerization service, we need to set up some AWS IAM permissions. These permissions will provide ECS the ability to launch and manage EC2 instances.
We’ll create an
iam.tf file for the permissions.
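A sketch of an iam.tf (the role names are assumptions; the two managed policies are the AWS supplied ones for ECS container instances and the ECS service):

```hcl
# iam.tf

# Role assumed by the EC2 instances that join the ECS cluster.
resource "aws_iam_role" "ecs_instance_role" {
  name = "${var.app_name}_${var.environment}_ecs_instance_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_instance_role_policy" {
  role       = aws_iam_role.ecs_instance_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

# Instance profile that applies the role to our EC2 instances.
resource "aws_iam_instance_profile" "ecs_instance_profile" {
  name = "${var.app_name}_${var.environment}_ecs_instance_profile"
  role = aws_iam_role.ecs_instance_role.name
}

# Role that allows the ECS service to register instances with the load balancer.
resource "aws_iam_role" "ecs_service_role" {
  name = "${var.app_name}_${var.environment}_ecs_service_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ecs.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_service_role_policy" {
  role       = aws_iam_role.ecs_service_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}
```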
In the above, we’ve set up the required policies and roles, let’s apply them.
With these permissions in place, we are now ready for the final step, creating the ECS cluster and associated components.
ECS is what will handle deploying our docker image to EC2. In order to do this, we need to create an ECS Cluster, and then create an ECS service, which will be in charge of deploying the image to EC2. We also need to create a task definition, which describes our docker image, and also an auto-scaling group. Let’s create the full
ecs.tf file, and then walk through the various components.
We also need some more variables.
ecs_ami value below is specific to a region. So if you are deploying your infrastructure to a region other than
us-east-1, you’ll need to specify the appropriate
AMI (a listing of
AMIs can be found here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html).
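The additional variables might look like this (the AMI ID is a placeholder; use the listing linked above to find the real one for your region, and substitute the name of the key pair you created in part 2):

```hcl
# Appended to variables.tf
variable "ecs_ami" {
  # Placeholder - substitute the current ECS optimized AMI for your region.
  default = "ami-0123456789abcdef0"
}

variable "instance_type" {
  default = "t2.micro"
}

variable "key_name" {
  # Name of the key pair created in part 2 (placeholder value).
  default = "my_app_key_pair"
}
```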
So this is kind of a lot, right? Let’s have a more detailed look. First off we are creating a cluster:
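The cluster resource is essentially a one-liner (the name format is illustrative):

```hcl
resource "aws_ecs_cluster" "cluster" {
  name = "${var.app_name}_${var.environment}_cluster"
}
```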
Nothing complicated here, we just specify the name for the cluster. Next comes a launch configuration and auto-scaling group:
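A sketch of those two resources (the security group and variable names are assumptions carried through from the earlier steps; the user_data script is what registers the instance with the cluster):

```hcl
resource "aws_launch_configuration" "launch_config" {
  name_prefix          = "${var.app_name}_${var.environment}_"
  image_id             = var.ecs_ami
  instance_type        = var.instance_type
  key_name             = var.key_name
  iam_instance_profile = aws_iam_instance_profile.ecs_instance_profile.name
  security_groups      = [aws_security_group.web_sg.id, aws_security_group.ssh_sg.id]

  associate_public_ip_address = true

  # Register the instance with our ECS cluster on boot.
  user_data = <<-EOF
    #!/bin/bash
    echo ECS_CLUSTER=${aws_ecs_cluster.cluster.name} >> /etc/ecs/ecs.config
  EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "asg" {
  name                 = "${var.app_name}_${var.environment}_asg"
  launch_configuration = aws_launch_configuration.launch_config.name
  vpc_zone_identifier  = module.vpc.public_subnets
  min_size             = 1
  max_size             = 1
  desired_capacity     = 1
}
```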
The launch configuration is an instance configuration template which in turn is used by the auto scaling group to launch EC2 instances into our Cluster. Note we are applying our key pair from part 2 to the EC2 instance so that we’ll be able to SSH into the instance.
Finally we have the task definition and ECS service.
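A sketch of those pieces (the container port of 4000 assumes a default Phoenix endpoint configuration, and the environment variable names and secret key names are assumptions):

```hcl
resource "aws_ecs_task_definition" "task" {
  family = "${var.app_name}_${var.environment}"

  container_definitions = jsonencode([{
    name         = var.app_name
    image        = var.ecr_image_uri
    cpu          = 1024
    memory       = 768
    essential    = true
    portMappings = [{ containerPort = 4000, hostPort = 80 }]
    environment = [
      # The database host comes from the output of the RDS resource...
      { name = "DATABASE_HOST", value = aws_db_instance.db.address },
      # ...while the secret key base is pulled from Secrets Manager.
      { name  = "SECRET_KEY_BASE",
        value = jsondecode(data.aws_secretsmanager_secret_version.secret_key_base.secret_string)["secret_key_base"] },
    ]
  }])
}

# Look the task definition back up so the service always references the
# latest active revision; see the depends_on discussion below.
data "aws_ecs_task_definition" "task" {
  task_definition = aws_ecs_task_definition.task.family
  depends_on      = [aws_ecs_task_definition.task]
}

resource "aws_ecs_service" "service" {
  name          = "${var.app_name}_${var.environment}_service"
  cluster       = aws_ecs_cluster.cluster.id
  desired_count = 1
  iam_role      = aws_iam_role.ecs_service_role.arn

  task_definition = "${aws_ecs_task_definition.task.family}:${max(aws_ecs_task_definition.task.revision, data.aws_ecs_task_definition.task.revision)}"

  load_balancer {
    target_group_arn = aws_alb_target_group.target_group.arn
    container_name   = var.app_name
    container_port   = 4000
  }
}
```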
The task definition is used to specify which docker image to run, along with some other items such as the CPU and memory that should be assigned to the image. This is also where we pass in the environment specific variables for the image. Some of these we pull from AWS Secrets Manager; others we pull from the output of other Terraform scripts, an example being the database host value.
Note: In the
data "aws_ecs_task_definition" "task" section we have a
depends_on attribute. For the most part Terraform is smart enough to know in what order to apply components. Occasionally though we need to provide a hint via
depends_on. In this case, without
depends_on the script will fail the first time we attempt to run it, as the task definition will not have been created when Terraform runs:
data "aws_ecs_task_definition" "task". Adding the
depends_on ensures the task is created first.
Finally, the ECS service pulls things together, specifying which task to use, what cluster to use and what load balancer to use.
Let’s give it a go!
Our application is now deployed to AWS!
We now have a running instance of our application. Navigate to the EC2 dashboard in the AWS Console and select Target Groups. You should see a registered target showing up as healthy (the target might take a minute or two to show up after applying the scripts, and the target will briefly show as unhealthy).
Once a healthy target shows up, you can grab the URL for our application by selecting Load Balancers in the EC2 dashboard and grabbing the URL for the load balancer.
Throw the URL in your browser, and you should see our application running on AWS!
If you want to SSH into the running EC2 instance, you can do so by selecting
Instances in the EC2 dashboard and selecting your instance.
You’ll see the key pair we created in part 2 has been applied to the EC2 instance.
Tearing it down
You’ll want to tear down the environment to avoid unnecessary AWS charges. This is simple to do: just run the destroy command.
Done and dusted!
You will also want to delete the secrets you created in AWS Secrets Manager, as there is a small monthly fee associated with each secret you store.
Hopefully this serves as a decent jumping off point for deploying a Phoenix application to AWS. For a real deployment you would of course want to create a domain,
DNS entries, use
SSL etc. but hopefully the basics have been covered.
A final note on Terraform… I find working iteratively is an effective way of building things out. However, once everything is running, it is always a good idea to tear everything down and then re-create it. This ensures that you haven’t missed any dependencies / synchronization issues that will trip up the scripts when they are run in one go. Any dependency issues can usually be resolved with an explicit dependency via the
depends_on directive… as we did with the task definition.
Thanks for reading and I hope you enjoyed the post!