In part 3, we complete the final step in our continuous delivery pipeline: deploying the stack to a Production AWS environment using Terraform.
At the end of it, we’ll have a Terraform setup that can create our application stack in an HA configuration, complete with auto-scaling groups.
Important: I assume you are comfortable with basic AWS concepts and practices, such as using the console and API to do things like launch instances into subnets/VPCs, and have a set of credentials ready and configured in your local environment.
Before we’re done, we’re going to terraform an environment in AWS that looks like this:
Whilst Terraform is capable of creating the entire setup, for the purposes of this tutorial we expect one component to already be in place: a VPC with at least two subnets available in different Availability Zones (AZs). We will build out the other components with Terraform, including:
- An Elastic Load Balancer (ELB) to balance load across two Availability Zones (AZs), with its health check configured
- An auto-scaling group (ASG) with its corresponding launch configuration
- Security groups for our ELB and instances (via the ASG launch configuration)
So what is Terraform? Apart from having possibly the coolest name of any cloud tool, Terraform is another HashiCorp project for managing infrastructure across cloud providers such as Amazon, DigitalOcean, OpenStack and so on. Whilst it is not (yet) as feature-rich as Amazon’s CloudFormation, its configuration files are much nicer to work with, it is cross-provider, super-fast and of course both free and open source, so you are free to contribute back to the product and the community.
In hindsight, this series could also have been aptly named “Machine Factories, brought to you by HashiCorp”.
Terraform uses a structured, JSON-esque DSL to describe the environments it creates, and is quite readable. It has sensible templating so that you can pass user input (as variables) into many of the resources, and it lets you refer to other resources dynamically, both inside and outside the current configuration, allowing quite complex setups to be reasoned about and configured sensibly.
We shall use Terraform to create our immutable infrastructure. What this means is that we don’t want long-lived environments that we need to patch, maintain and nurse, risking configuration drift and all the other management overhead that comes with that mindset. Instead, we always create anew and tear down the old for each and every deployment (see “pets vs cattle” for more background).
This is great, as it simplifies many things. Do we care what state a production server is in? Not really: on the next deployment we know they will all come back into line. Do we need a complex, custom application deployment process? No, we deploy our infrastructure in a repeatable way. It also forces us to automate and script everything; immutable infrastructure is impossible to achieve without practicing infrastructure-as-code. That means storing all configuration in source control, unit testing where appropriate, and following all the engineering disciplines you would follow when building the app itself.
So with that out of the way, here is our Terraform configuration:
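The sketch below shows what such a configuration might look like. It is illustrative only: the resource names, instance type, port numbers and the `vpc_id`/`subnet_ids` variables are assumptions, not the exact values from this tutorial’s repository.

```hcl
# Hypothetical sketch of the four components described above.

variable "ami_id" {}        # AMI baked by Packer, passed in via -var
variable "vpc_id" {}
variable "subnet_ids" {}    # comma-separated IDs of the two subnets

# Security group for the ELB: allow HTTP in from anywhere
resource "aws_security_group" "elb" {
  name   = "mf-elb-sg"
  vpc_id = "${var.vpc_id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Security group for the instances: only accept traffic from the ELB
resource "aws_security_group" "app" {
  name   = "mf-app-sg"
  vpc_id = "${var.vpc_id}"

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = ["${aws_security_group.elb.id}"]
  }
}

# ELB spanning both subnets, with a health check against the web root
resource "aws_elb" "web" {
  name            = "machine-factory-tutorial"
  subnets         = ["${split(",", var.subnet_ids)}"]
  security_groups = ["${aws_security_group.elb.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    target              = "HTTP:80/"
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 5
    interval            = 30
  }
}

# Launch configuration referencing the pre-baked application AMI
resource "aws_launch_configuration" "app" {
  image_id        = "${var.ami_id}"
  instance_type   = "t2.medium"
  security_groups = ["${aws_security_group.app.id}"]
}

# ASG across both subnets/AZs, registered with the ELB
resource "aws_autoscaling_group" "app" {
  name                 = "machine-factory-asg"
  launch_configuration = "${aws_launch_configuration.app.name}"
  vpc_zone_identifier  = ["${split(",", var.subnet_ids)}"]
  min_size             = 2
  max_size             = 2
  load_balancers       = ["${aws_elb.web.name}"]
  health_check_type    = "ELB"
}
```

Note how the interpolation syntax (`${aws_security_group.elb.id}` and friends) wires the resources together; Terraform derives the creation order from these references.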
Hopefully you can see a 1-to-1 mapping between the four components we need to build and the resources inside the Terraform configuration. Notice that in the launch configuration we use a variable to accept the AMI ID of our Application Image; this is the simplest strategy for making templates re-usable, and the value can be passed to the Terraform CLI using the -var option.
You can ask Terraform to tell you what it is going to do before it goes off and does it, using the terraform plan command. Another neat feature is that it can output a graph of the resources it is going to produce, which highlights the relationships amongst them (note: you will need Graphviz/dotty installed):
You can see how this would be useful as you start to build more complicated ecosystems.
Now that we know what is going to happen, let’s run it!
It’s as simple as that. Terraform will now run a number of AWS commands to create all of the resources required to run our stack, and in about 4-5 minutes (Windows boxes take about 4 minutes to spin up in AWS) we’ll have a running environment – how awesome is that!
You can now look up the ELB’s DNS name from the web console or CLI and visit the site. It should look something like this: machine-factory-tutorial-12345678.us-east-1.elb.amazonaws.com.
In practice, you would create a human-readable CNAME record that points to this DNS name; something like myawesomeurlshortener.com should do the trick.
Each time a build finishes and produces a new AMI, you can run terraform apply, passing in the new AMI ID, and you will get an updated environment running the latest code base. Terraform automatically detects what has changed and applies the necessary updates (note: this will cause downtime; see below for strategies to work around this).
When you are finished with the environment, terraform destroy will tear down every resource Terraform created:
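Something along these lines (again, the `ami_id` variable is an assumption carried over from the earlier examples):

```
# Optionally preview what will be removed first
terraform plan -destroy -var 'ami_id=ami-1234abcd'

terraform destroy -var 'ami_id=ami-1234abcd'
```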
The astute reader will note several flaws in our deployment process, the most notable being that a Terraform deployment will cause a Production outage whilst it replaces the running instances with the new AMI. Furthermore, we need to tie all of these steps together; a job for a CD server like Bamboo, Go and so on.
The general deployment process involves what we call a blue/green deployment: an entirely new stack is spun up in parallel to the running one, possibly warmed up with traffic, and when considered healthy, traffic is migrated from the blue (live) to the green (new) environment. If things go pear-shaped, you migrate back. You can see how this is possible with Terraform and Route 53 weighted record sets: create another stack side-by-side with the existing environment, then slowly shift the weightings of the CNAME from the ELB of the blue stack to the ELB of the green stack. When things are done, terraform destroy the blue stack and you’re done.
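A weighted record pair might be sketched like this. This is a hypothetical fragment: the `zone_id` variable, the domain name and the `aws_elb.blue`/`aws_elb.green` resources are assumptions standing in for your two stacks.

```hcl
# Route 53 weighted records for a blue/green cutover (illustrative only).
# Shift traffic by adjusting the two weights and re-applying.
resource "aws_route53_record" "blue" {
  zone_id        = "${var.zone_id}"
  name           = "myawesomeurlshortener.com"
  type           = "CNAME"
  ttl            = "60"
  weight         = 90                               # most traffic stays on blue
  set_identifier = "blue"
  records        = ["${aws_elb.blue.dns_name}"]
}

resource "aws_route53_record" "green" {
  zone_id        = "${var.zone_id}"
  name           = "myawesomeurlshortener.com"
  type           = "CNAME"
  ttl            = "60"
  weight         = 10                               # trickle traffic onto green
  set_identifier = "green"
  records        = ["${aws_elb.green.dns_name}"]
}
```

Keeping the TTL low means DNS resolvers pick up weighting changes quickly, so each re-apply shifts real traffic within a minute or so.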
These are left as exercises for the reader, who hopefully now has their interest piqued and a better understanding of some of the fundamental concepts in building out a continuous delivery pipeline on Windows.
The end of our continuous delivery journey
So that concludes the series on Machine Factories with Windows. Over the three episodes we covered a number of important tools and concepts for applying continuous delivery practices to Windows on AWS, along with practical advice on how to:
- Create Vagrant boxes using Packer to simulate a Production ecosystem in our local development environments
- Provision Amazon AMIs using Packer, and strategies to speed up the machine pipeline
- Deploy our infrastructure – including our pre-baked AMIs – to Production using Terraform
You now have the building blocks for creating a CD pipeline around what was once a daunting task (automating Windows), so what are you waiting for?!
I hope this has been helpful. Feel free to ping me on Twitter or leave a comment below; I would love to have a conversation with you.