In part 1 of our series on Machine Factories with Windows, we looked at how we can automate our development environments with Vagrant, Packer and DSC.
In part 2, we explore the next step in our continuous delivery pipeline: building the AWS images required for our build servers and runtime environments, such as Production. In part 3 we’ll take these images and launch them as a stack using Terraform.
At the end of it, we’ll have the following:
- A build script and installable package format for our URL shortener application
- A Build Agent AMI that has all of the dependencies required to build and test the application (and which will likely also host the CI server)
- An Application AMI ready to be started as part of a Production stack
Important: I assume you are comfortable with basic AWS concepts and practices, such as using the console and API to do things like launch instances into subnets/VPCs, and have a set of credentials ready and configured in your local environment.
Often, particularly when building microservices, you’ll notice commonality amongst server requirements – CM tools, log forwarders, security settings and so on. Rather than repeat this provisioning step for every image, one strategy you can employ to speed things up is to create an intermediate or ‘Base’ image with these common pieces baked in. In our case, we’re going to keep things simple and just ensure that IIS, DSC and MongoDB are installed. These take 10–15 minutes to install, so it’s well worth doing now (of course, in real life we wouldn’t install MongoDB on every web server, but for our purposes this is fine).
Our Base Packer configuration and provisioning script look like this:
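A minimal sketch of what such a Base configuration could look like; the file names, region and variables here are illustrative stand-ins, not the original values:

```json
{
  "variables": { "source_ami": "", "region": "ap-southeast-2" },
  "builders": [{
    "type": "amazon-ebs",
    "region": "{{user `region`}}",
    "source_ami": "{{user `source_ami`}}",
    "instance_type": "m3.medium",
    "ami_name": "base-{{timestamp}}",
    "communicator": "winrm",
    "winrm_username": "Administrator",
    "user_data_file": "./scripts/winrm-bootstrap.ps1"
  }],
  "provisioners": [
    { "type": "powershell", "script": "./scripts/provision-base.ps1" },
    { "type": "powershell", "script": "./scripts/Ec2Config.ps1" }
  ]
}
```

The provisioning script, again a hedged sketch of the three installs described above:

```powershell
# provision-base.ps1 -- common dependencies baked into every image
Install-WindowsFeature Web-Server -IncludeManagementTools   # IIS

# Install Chocolatey, then pull MongoDB from it; DSC ships with WMF 4+
# on Server 2012 R2, so nothing extra is needed for it here
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
choco install mongodb -y
```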
Note that we run an Ec2Config.ps1 script at the end; it is responsible for re-enabling user-data execution on the next boot, along with the ability to set a password and the computer name. This is really handy if you plan to create images in pipelines, as we do. You can read more about the Ec2Config service here.
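The re-arm step boils down to flipping a few plugin states in the EC2Config service’s Config.xml. The plugin names below are the real EC2Config settings, but the script itself is an illustrative sketch:

```powershell
# Ec2Config.ps1 -- re-arm EC2Config so the *next* boot of an instance
# launched from this AMI processes user-data and sets the password
# and computer name again
$configPath = 'C:\Program Files\Amazon\Ec2ConfigService\Settings\Config.xml'
$xml = [xml](Get-Content $configPath)
foreach ($plugin in $xml.Ec2ConfigurationSettings.Plugins.Plugin) {
    if ($plugin.Name -in 'Ec2SetPassword', 'Ec2SetComputerName', 'Ec2HandleUserData') {
        $plugin.State = 'Enabled'
    }
}
$xml.Save($configPath)
```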
We can now run the build:
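Assuming the Base configuration is saved as `base.json` (a hypothetical name) and a Windows Server source AMI is passed in:

```shell
packer build -var "source_ami=<windows server 2012 r2 ami id>" base.json
```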
We’re going to need a server that continuously integrates our code (for CI) and contains the tools required to continuously deploy it (for CD). Whilst the actual CI/CD tool selection and implementation is an exercise left to the reader, we will still create a Windows image containing a superset of our Production image’s dependencies, so that we have increased confidence code will work when shipped from one to the other.
It’s also important to be in a position to re-create this system rapidly if things go pear-shaped; if we lose our build server, our path to Production is essentially blocked – a state of emergency in CD circles – and we sure as hell don’t want to have to build this stuff by hand.
Our build agent Packer configuration and provisioning scripts look like this; note that we are installing Ruby, CfnDsl and Terraform so we can deploy our stack later on:
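A sketch of the shape this could take, building on top of the Base AMI via a `base_ami` variable; file names are again hypothetical:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "ap-southeast-2",
    "source_ami": "{{user `base_ami`}}",
    "instance_type": "m3.medium",
    "ami_name": "build-agent-{{timestamp}}",
    "communicator": "winrm",
    "winrm_username": "Administrator"
  }],
  "provisioners": [
    { "type": "powershell", "script": "./scripts/provision-build-agent.ps1" },
    { "type": "powershell", "script": "./scripts/Ec2Config.ps1" }
  ]
}
```

```powershell
# provision-build-agent.ps1 -- the superset of Production dependencies,
# plus the tooling used to deploy the stack in part 3
choco install git ruby terraform -y
gem install cfndsl    # Ruby DSL for generating CloudFormation templates
```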
Let’s create that server now:
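With the Base AMI id from the previous build in hand (the file name is again hypothetical):

```shell
packer build -var "base_ami=<base ami id>" build-agent.json
```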
Typically, we would deploy a microservice such as this as a Chocolatey NuGet package and have it install itself into its target environment. That’s worth taking a second to digest. Chocolatey provides us with the ability to declaratively express our application’s dependencies, and a hook for a custom install script that runs once all dependencies have been resolved and installed. That hook is where we employ a small trick: using DSC within the package install process to configure the server. It’s a bit sneaky, but the result is a really clean and almost native approach to installing our application onto a server.
Our Chocolatey install script looks like this:
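A sketch of the shape such a chocolateyInstall.ps1 can take; `WebServer.ps1` and its parameters are stand-ins for the DSC configuration from part 1:

```powershell
# chocolateyInstall.ps1 -- executed by Chocolatey on the target server

# Workaround: Chocolatey runs under the invariant culture, which breaks
# DSC's localisation lookups, so pin a concrete culture for this session
[System.Threading.Thread]::CurrentThread.CurrentCulture = 'en-US'

$packageDir = Split-Path -Parent $MyInvocation.MyCommand.Definition

# Apply the same DSC configuration used with Vagrant in part 1, pointing
# the IIS site at the directory Chocolatey unpacked the package into
. (Join-Path $packageDir 'WebServer.ps1')   # defines the DSC configuration
WebServer -SiteRoot $packageDir -OutputPath "$env:Temp\WebServer"
Start-DscConfiguration -Path "$env:Temp\WebServer" -Wait -Verbose
```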
Please excuse the hack at the top; it is a side-effect of the latest Chocolatey bundle, which runs under the invariant culture and in turn breaks the i18n component of DSC. The rest of the process should, however, look fairly straightforward: it simply runs the same DSC scripts we applied with Vagrant in part 1, and points IIS at the location the package was installed to. That’s it!
Now we need a build and packaging process to tie it all back together with Packer. To achieve automation, we really need to get away from clicking stuff and leverage the power of scripting. For the task of building and packaging our application we have chosen FAKE, which works cross-platform and has a bunch of neat helpers for things like packaging and file manipulation:
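A FAKE script along these lines might look as follows; the target names, project globs and nuspec file are illustrative, not the original build:

```fsharp
// build.fsx -- FAKE build script (illustrative sketch)
#r "packages/FAKE/tools/FakeLib.dll"
open Fake

let buildDir   = "./build/"
let publishDir = "./publish/"

Target "Clean" (fun _ -> CleanDirs [buildDir; publishDir])

Target "Build" (fun _ ->
    !! "src/**/*.csproj"
    |> MSBuildRelease buildDir "Build"
    |> Log "Build output: ")

Target "Test" (fun _ ->
    !! (buildDir + "*.Tests.dll")
    |> NUnit (fun p -> { p with OutputFile = buildDir + "TestResults.xml" }))

Target "Package" (fun _ ->
    // Pack the application as a Chocolatey NuGet package, then zip it up
    // together with its dependency packages as publish/source.zip
    NuGetPack (fun p -> { p with OutputPath = publishDir; WorkingDir = buildDir })
        "./shortenurl.nuspec"
    !! (publishDir + "*.nupkg") |> Zip publishDir (publishDir + "source.zip"))

"Clean" ==> "Build" ==> "Test" ==> "Package"
RunTargetOrDefault "Package"
```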
Run it with:
./build.sh on *nix or ./build.bat on Windows.
Once finished, you should have a bunch of artifacts in the ./publish directory, including source.zip, which contains our package along with its dependencies (themselves also Chocolatey NuGet packages). This means that when it comes time to install the package we don’t need to reach out to remote package repositories; we can simply upload the bundle, unzip it and run choco install, using the target directory as the package source for all packages to be installed – we’ll see this in the next section.
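On the target machine, that install step is roughly the following (the package name is hypothetical; on PowerShell versions before 5, substitute `[System.IO.Compression.ZipFile]::ExtractToDirectory` for `Expand-Archive`):

```powershell
Expand-Archive C:\Windows\Temp\source.zip -DestinationPath C:\packages
choco install shortenurl -source C:\packages -y
```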
Application Image aka ‘Bake’
So now we have all the ingredients required to create our distributable image – the Application image. This is what we will use to create our Production stack, as the image for auto-scaling groups and so on.
As Warner discusses in our talk, our bake recipe takes the following ingredients:
- 1 x Base Image
- 1 x Application
- 0 x Configuration
Why no configuration? Embedding configuration into our image at this point confines our AMI to a very specific life, tied to the environment you are targeting – the result is images specific to ‘staging’, ‘dev’, ‘performance’ and the list goes on.
What we really want to do at this step is create an image that can be run in any context – we want to put the image into ‘stasis’ for ‘re-animation’ some time in the future. We’ll discuss strategies for re-animation in part 3.
For now, we are simply going to bake this image – here is our application Packer configuration:
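A sketch of that configuration, reusing the hypothetical names from earlier; the uploaded source.zip is unzipped on the instance and installed with the extracted directory as the package source:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "ap-southeast-2",
    "source_ami": "{{user `base_ami`}}",
    "instance_type": "m3.medium",
    "ami_name": "shortenurl-{{timestamp}}",
    "communicator": "winrm",
    "winrm_username": "Administrator"
  }],
  "provisioners": [
    { "type": "file",
      "source": "./publish/source.zip",
      "destination": "C:/Windows/Temp/source.zip" },
    { "type": "powershell", "inline": [
      "Add-Type -AssemblyName System.IO.Compression.FileSystem",
      "[System.IO.Compression.ZipFile]::ExtractToDirectory('C:/Windows/Temp/source.zip', 'C:/packages')",
      "choco install shortenurl -source C:/packages -y"
    ]},
    { "type": "powershell", "script": "./scripts/Ec2Config.ps1" }
  ]
}
```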
Let’s build our server, shall we?
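Again assuming the configuration is saved under a hypothetical name:

```shell
packer build -var "base_ami=<base ami id>" application.json
```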
Note that all it does is spin up our Base image, upload and install our package, run the Ec2Config script and shut down, leaving us with a shiny new, runnable AMI ID.
To the cloud!
We are now ready for the final installment of the series – deploying to the cloud!