What is “Immutable Infrastructure”?
Aug 24, 2016 by Armory
(And why’s it so important?)
Spinnaker believes in deploying to the cloud using immutable infrastructure. It takes a prescriptive stance on macro-level trends like the rise of DevOps, cloud adoption, Docker, and so on.
But what exactly is immutable infrastructure, and why does it matter so much?
The best way to answer that question is to imagine what the DevOps world will look like in 10 years. And one way to do that is to compare 10 years ago to today.
10 years ago, Amazon Web Services had only just launched. If you wanted to have a website in production, at scale, you had to buy your own servers and colocate them in a data center.
The average cost to deploy a production website at scale was $150,000 per month.
But today, that’s dropped by 100x, to just $1,500 per month, because now you don’t need to manage physical servers. They’re all “in the cloud,” meaning you can spin up, manage, and destroy hardware with the click of a button in the AWS console.
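For a concrete sense of what that “click of a button” looks like as code, here is a minimal sketch using the AWS SDK for Python (boto3). The region, AMI ID, and instance type below are placeholders, not values from this article.

```python
# Minimal sketch: spinning up and tearing down a server with boto3.
# The AMI ID and instance type are placeholders you'd replace with your own.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up a fresh server from a machine image...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m4.xlarge",         # placeholder size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ...and destroy it just as easily when it's no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```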
The thing is, most organizations still deploy applications under the old paradigm. Let’s take a look at how that’s done today, and why:
In the first step, the organization’s build server (Jenkins, CircleCI, Travis CI, etc.) is responsible for compiling the code and running unit tests. The result of this step is an application package. This covers the Continuous Integration (CI) part of the software lifecycle.
In the second step, the application must be deployed. This is the Continuous Delivery part, and here there is no unified tool for performing deployments. Most organizations use tools such as Puppet or Chef, or even completely custom scripts, to deploy the application. Puppet, Chef, and similar solutions were originally created for configuration management (e.g. handling automatic security updates for a machine). They can be used for deployments in an ad hoc manner, but there is no standard, unified way to do so.
The key thing to note is that applications are deployed to existing infrastructure each time. That made sense when you had to colocate your own servers in a data center: you couldn’t realistically pull a server out of the rack and slot in a new one every time you wanted to deploy a new application. So instead, you’d deploy your new application on top of your existing server, very much like putting a new coat of paint on a house, and with many of the same issues (“there’s lead in that base layer of paint from pre-1978!”).
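To make the old paradigm concrete, here is a hedged sketch of what an in-place deployment to a long-lived server often looks like; the host name, repository path, and service name are hypothetical, not taken from this article.

```python
# Sketch of a mutable, in-place deployment: new code layered onto an
# existing, long-lived production server over SSH.
import subprocess

HOST = "deploy@prod-web-01.example.com"  # hypothetical long-lived server

def run_remote(command: str) -> None:
    """Run a shell command on the existing production box over SSH."""
    subprocess.run(["ssh", HOST, command], check=True)

# The server keeps whatever state it has accumulated; we just paint over it.
run_remote("cd /opt/myapp && git pull origin master")
run_remote("sudo systemctl restart myapp")
```

Every run of a script like this leaves the server a little different from the last, which is exactly the problem described next.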
The longer a server exists, the more opportunities there are for people to tinker with it, to change settings, and to let the “prod” environment drift away from the “stage” environment. It’s inevitable that, over time, this drift makes deployments a scary, “I hope this doesn’t break anything” kind of affair.
But it doesn’t have to be this way. Welcome to the future, a world where hardware is abstracted to the cloud. A world of immutable infrastructure. Let’s explore what this means for deployments, and how deployment methodologies will change to take advantage of it:
In this new world, where servers live in the cloud, the process of spinning up a server is as easy as issuing a command to do so. “I’d like 20 XL-sized boxes, ASAP, please!” And seconds later, Amazon Web Services, or Google Cloud Platform, or Microsoft Azure (or whatever cloud provider you’re using) has magically provisioned them and made them ready for your use. It’s a world that the operations teams of a decade ago could only have dreamt of.
The difference this makes to deployments is staggering. Just expand your mind for a minute and think: We no longer have to deploy applications on existing infrastructure. Now we can actually deploy the infrastructure itself every time we want to deploy a new application. This is what we’re referring to when we say “immutable infrastructure.” It’s as if, every time we want to deploy an application, we’re ripping the old servers out of the data center and replacing them with pristine, shiny brand new ones.
The major advantage of the cloud is of course the dynamic nature of servers. Since servers can come and go with the click of a button, we can employ deployment patterns that were simply not possible in the traditional datacenter.
The diagram above shows the blue/green deployment pattern. The blue servers are the existing ones, serving traffic before a deployment. When a new version of the application is built, we spin up a completely new set of servers (the green ones) on the fly, while keeping the old ones around. Using the load balancer (or router) in the network infrastructure, we can then redirect customers to the new servers.
At this point we can make a decision. If the new version of the software works as expected, we redirect all traffic to the new green servers and destroy the old blue ones. If the new version has a critical issue and a rollback is needed, we simply point traffic back to the old servers via the load balancer.
In either case, because both the new and the old servers are in play, we can redirect traffic as we see fit, with minimal or even zero downtime for customers.
The important point here is that the servers are always created from scratch. Each software version gets its own brand-new set of servers, and once they are created, their configuration is never changed (hence the term immutable infrastructure).
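As a rough illustration (not Spinnaker itself, which automates this for you), here is what the blue/green traffic switch and rollback might look like against an AWS Application Load Balancer with boto3; the listener and target group ARNs are placeholders.

```python
# Sketch of the blue/green switch: repoint the load balancer's listener at
# the green (new) server pool, and roll back by pointing it at blue again.
import boto3

elb = boto3.client("elbv2", region_name="us-east-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-lb/..."  # placeholder
BLUE_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/blue/..."     # placeholder
GREEN_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/green/..."   # placeholder

def send_traffic_to(target_group_arn: str) -> None:
    """Point all listener traffic at the given server pool."""
    elb.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )

send_traffic_to(GREEN_TG_ARN)  # cut over to the new servers
# If something goes wrong, rollback is just pointing traffic back at the old,
# untouched blue servers:
send_traffic_to(BLUE_TG_ARN)
```

The key point is that neither pool is ever modified; a rollback is nothing more than repointing traffic.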
Here’s what deployments can look like now:
The first part (CI) is still the same as before. The second part, however, is now handled by Armory Spinnaker. Spinnaker performs Continuous Delivery using a unified API for all popular cloud providers (AWS, Google Cloud Platform, Azure, OpenStack, etc.).
The blue/green deployment strategy comes built into Spinnaker, so no custom scripts or in-house development are needed to use this powerful technique for zero-downtime software upgrades.
Let’s list some of the insane benefits that come with deploying applications under this new approach:
| 10 Years Ago | Today |
| --- | --- |
| New applications were deployed to old servers, leading to unpredictable errors. | New servers are spun up for each new application deployment. |
| Each deployment would overwrite the old code, making rollbacks difficult or impossible. | The previous “prod” servers can be kept around for a while until you’re confident the new application (running on the new servers) is solid, making rollbacks easy. (This is known as a “blue/green” deployment.) |
| Phased deployments were hard, because managing traffic allocations to specific servers required a high level of sophistication. | Load balancers live in the cloud, making it easy to deploy your new application (on new servers) to, say, 5% of your audience for testing. (This is known as “canarying”; see the sketch below.) |
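To illustrate the “canarying” row above: with cloud load balancers, shifting a small slice of traffic to the new servers can be a single API call. One way to do it (a hedged sketch with placeholder ARNs, using weighted target groups on an AWS Application Load Balancer) looks like this:

```python
# Sketch of a canary: 5% of traffic goes to the new servers, 95% stays on
# the old ones. The ARNs are placeholders.
import boto3

elb = boto3.client("elbv2", region_name="us-east-1")

elb.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-lb/...",
    DefaultActions=[{
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": "arn:...:targetgroup/current-prod/...", "Weight": 95},
                {"TargetGroupArn": "arn:...:targetgroup/canary/...", "Weight": 5},
            ]
        },
    }],
)
```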
The benefits don’t end there; that’s just a sampling. Calling them game-changing for software deployments is only the start.
So why are companies still deploying the “old way”?
Change takes time. Often, companies that move to the cloud still deploy “applications” rather than “infrastructure,” just as they did when they had physical servers in data centers, simply because that’s what they’re used to. But that’s like driving a Ferrari stuck in first gear. The cloud enables powerful new deployment methodologies, and we’re very confident that 10 years from now, every company that is serious about software will be leveraging the benefits of immutable infrastructure.