Complex Pipelines With Spinnaker
Aug 22, 2016 by Isaac Mosquera
While Spinnaker is best known for its sophisticated cloud deployment strategies, it is also highly customizable and can be adapted to fit most types of workflows. The video demonstrates how to use Spinnaker to create a more complex pipeline. We’ll go step by step to show how the pipeline is used to deploy a simple browser “widget” package to the web in a controlled manner.
A Spinnaker pipeline is kicked off automatically by a configurable trigger within Spinnaker. The available triggers are: CRON, Docker, Git, Jenkins, and Pipeline.
CRON – Allows you to choose a CRON expression to run a pipeline on a schedule. You can use this if you want regular release cycles that group a batch of commits, e.g., releasing all website changes every hour.
Docker – Executes a pipeline when a Docker image is updated in a repository.
Git – Executes a pipeline based on GitHub commits/pushes. A good process to set up is a GitHub pull request with code review by a teammate before merging to master, which then executes the pipeline.
Jenkins – Executes a pipeline when a Jenkins job completes. Typically you would have a Jenkins job build an artifact, then use Spinnaker pipelines to move the artifact through your environments. This is great if you want more control within Jenkins to kick off the process.
Pipeline – Great if you want to chain two different pipelines together. Consider a back-end deployment that needs a corresponding change on the front-end.
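Under the hood, these triggers live in the pipeline’s JSON configuration. As a rough sketch (the `master` and `job` names here are hypothetical), a CRON trigger and a Jenkins trigger might look like the following; the CRON expression uses Quartz format and fires at the top of every hour:

```json
{
  "triggers": [
    {
      "enabled": true,
      "type": "cron",
      "cronExpression": "0 0 * * * ?"
    },
    {
      "enabled": true,
      "type": "jenkins",
      "master": "my-jenkins",
      "job": "widget-release-build"
    }
  ]
}
```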
Set Up Parallel Builds
In this step we want to build the four different packages in parallel to save time. These are Jenkins jobs, each executed with a different parameter instructing Jenkins to build the appropriate package. Building them in parallel makes the deployments go much faster. At this time we also run our unit-test suite. If any test fails, the pipeline stops and waits for someone to fix the problem.
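In Spinnaker’s pipeline JSON, stages with no upstream dependencies all start as soon as the trigger fires. A sketch of two of the four parallel build stages (the stage, master, job, and parameter names are hypothetical) might look like this, each passing a different `browser` parameter to the same Jenkins job:

```json
{
  "stages": [
    { "type": "jenkins", "refId": "1", "requisiteStageRefIds": [],
      "name": "Build Chrome", "master": "my-jenkins", "job": "build-widget",
      "parameters": { "browser": "chrome" } },
    { "type": "jenkins", "refId": "2", "requisiteStageRefIds": [],
      "name": "Build Firefox", "master": "my-jenkins", "job": "build-widget",
      "parameters": { "browser": "firefox" } }
  ]
}
```

The empty `requisiteStageRefIds` lists are what make the stages run in parallel rather than sequentially.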
Wait for Build
We now wait until all builds are completed. This stage is useful anytime you have a task that runs for an indeterminate amount of time.
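In pipeline JSON terms, this join is expressed through `requisiteStageRefIds`: a stage that lists all of the build stages as prerequisites will not start until every one of them has completed. A sketch, with hypothetical refIds and job names:

```json
{
  "type": "jenkins",
  "refId": "5",
  "requisiteStageRefIds": ["1", "2", "3", "4"],
  "name": "Integration Tests",
  "master": "my-jenkins",
  "job": "integration-tests"
}
```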
Once the packages are built, integration tests are run on specific browsers to make sure we didn’t break anything in the major browsers. While integration tests don’t cover all cases, we can use them to increase our confidence before we deploy to a small portion of users.
Wait for Integrations
We wait again here for the integration tests to complete because they run for an indeterminate period of time.
Once we have a good level of confidence in our packages, we deploy to production… but only to a small portion of our customer base. We can then wait and monitor system, application, and, most importantly, business-level metrics before releasing to a larger customer base. Tying your business-level metrics to releases is critical to ensuring deployment success. Netflix has been known to use something as simple as “number of streaming views.” We’ve known customers to use “number of orders” or, quite simply, “revenue” as a metric. It really doesn’t matter which metric you choose, as long as the business is kept in mind with your releases.
As we continue to learn about the behavior of our release, we deploy to 50%, repeat the monitoring above, and then release to the rest of the population.
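One way to sketch this staged rollout in pipeline JSON is to alternate deploy stages with manual judgment gates, so a human confirms the metrics look healthy before widening the release. All stage, master, and job names here are hypothetical, and the rollout percentage is passed as a Jenkins job parameter:

```json
{
  "stages": [
    { "type": "jenkins", "refId": "10", "requisiteStageRefIds": [],
      "name": "Deploy 10%", "master": "my-jenkins", "job": "deploy-widget",
      "parameters": { "percent": "10" } },
    { "type": "manualJudgment", "refId": "11", "requisiteStageRefIds": ["10"],
      "name": "Check metrics at 10%",
      "instructions": "Verify system, application, and business metrics before continuing." },
    { "type": "jenkins", "refId": "12", "requisiteStageRefIds": ["11"],
      "name": "Deploy 50%", "master": "my-jenkins", "job": "deploy-widget",
      "parameters": { "percent": "50" } },
    { "type": "manualJudgment", "refId": "13", "requisiteStageRefIds": ["12"],
      "name": "Check metrics at 50%",
      "instructions": "Verify system, application, and business metrics before continuing." },
    { "type": "jenkins", "refId": "14", "requisiteStageRefIds": ["13"],
      "name": "Deploy 100%", "master": "my-jenkins", "job": "deploy-widget",
      "parameters": { "percent": "100" } }
  ]
}
```

If any stage fails, downstream stages never run, which is what stops a bad release from reaching the whole population.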
This demo was meant to demonstrate the flexibility of Spinnaker. The tool can be used to deploy more than just software packages to cloud servers; it can deploy software to any physical or virtual device, including desktops and browsers.
Until next time!
Here’s the Transcript:
Hey, how’s it going? This is Cloud Armory. And I wanted to show you guys a little bit more about how to do a complex pipeline within Spinnaker. Spinnaker is very, very flexible and customizable to whatever you need. It’s typically known for its Red/Black and canary deployments. However, it could be used to deploy packages to particular end points, just like URLs that could be downloaded. Here let’s assume that we have a widget, a generic widget that’s built for browsers and that could be implemented for browsers. It starts getting kicked off by some sort of Jenkins build execution or some sort of plugin or a check-in into GitHub.
The next step would be that it would build the Firefox version, the Chrome version, the IE version, and Safari version, all in parallel. So this doesn’t get blocked at the very beginning. It will then just wait for those builds to complete. And that will be a synchronous task. So it waits for all of these things to be built because you want them to be released and integrated at the same time. You don’t want maybe the Firefox version going out before the Chrome version or the IE version.
Once it’s done building these tasks… And these are just simple Jenkins tasks. You can put these Jenkins tasks together [into a – 01:25] pipeline that makes sense for your company or product. So then we wait for the build. Then we create the integration test for each particular build. And what it’s doing here is running an additional Jenkins task that will actually integrate the recent build into a particular browser, whatever browser is set up here, and run integration test for it. And really, what you’re trying to do is make sure that before you deploy this out to production, you’ve at least tested this in some environment that resembles your production environment. So here we have actual Chrome, Safari, IE, and Firefox browsers go up. We deploy the plugin into those browsers, and then we test them. And then this is where it gets a little bit more interesting and really super useful.
Now given that the integration test is only testing one environment or maybe a couple of environments within your company, obviously, there are a ton of different types of laptops and computers out there that are running Chrome, Firefox, IE with different versions. You want to make sure that you don’t deploy something that actually takes out somebody’s entire browser or causes some sort of detrimental response. So what we’re doing here is we package this up, we deploy it to a particular endpoint, and then only routing 10 percent of the traffic to these endpoints, using maybe something like cloud formation. And then what we’re doing in the meantime on the back end is using our metrics, things like Datadog, New Relic, Nagios, whatever you have for monitoring to make sure that the application-level metrics, your business metrics are not being affected by the recent change. So if for some reason your plugin does e-commerce checkouts, you want to make sure that the revenue associated with these plugins and this deployment version does not go down as a result of these code changes. So first we’ll just deploy it to 10 percent. And again, this is just a zip package that is deployed to some particular end point. As soon as we see that those results are good, we’ll then deploy it out to 50 percent of the population.
And once that is done, we’ll go ahead and deploy it to 100 percent. And meanwhile, the entire time, you’re just making sure that your application and your business-level metrics are not being affected by our deployment or your deployment here. And that’s the most important part, which is there are technical-level issues certainly around CPU and RAM. And we’re monitoring those two. The more important part is did we affect the behavior of our users in a negative way that ultimately affects or impacts our business in an unexpected and a negative way. And this will help reduce that impact if it is a negative impact. Any of these steps could potentially fail, and it would stop the deployment for all these resources.
That’s it for me right now. I’ll put up a blog post outlining these steps and how it works. And then also, I’ll show you how notifications work within Spinnaker in the next little video. Thanks.