Creating your own Spinnaker custom stage
Oct 22, 2020 by Ryan Pei
Spinnaker offers many different stage types by default, but there’s always more that we would like to see in the pipeline. Spinnaker is a powerful tool not just for deployments but also for orchestrating pre- and post-deploy tasks, such as provisioning supporting infrastructure or rolling back a deploy based on APM feedback.
This article is structured like a tutorial to walk you through how to make custom stages, one of the most common customizations used in Spinnaker. If you have any questions about any of this, please find me or my team in our Spinnaker community Slack.
Getting started with an extensible Spinnaker instance
If you have not used Spinnaker before but you know your goal is to add a custom stage, you’ll find this section helpful. Even if you are already a Spinnaker user, you may not yet be familiar with setting up Spinnaker in a way that lets you extend or customize it.
Run a dev instance of Spinnaker
To get started, you’ll want to run Spinnaker somewhere. Minnaker, a single-VM instance that runs Spinnaker on a k3s cluster, is fantastic for first-timers. I run minnaker either locally on my laptop (I recommend a fairly beefy machine) or on EC2. Once it’s running, there are a couple of easy tutorials to get acquainted with Spinnaker’s basics.
You can also familiarize yourself with the different services that comprise Spinnaker. If you look at what’s running on your minnaker’s k3s cluster, you’ll see pods for halyard, orca, clouddriver, etc. If you want to change any configuration on your minnaker instance, halyard is useful for this, as it’s one of the primary CLI tools for installing and configuring Spinnaker. Try changing some configuration on minnaker via halyard (remember to run hal deploy apply to see changes take effect).
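For example, a short halyard session on the minnaker VM might look like the following (the version number is purely illustrative — use whatever version you’re targeting):

```shell
hal config version edit --version 1.22.1   # change a setting, e.g. pin a Spinnaker version
hal deploy apply                           # push the updated config to the running cluster
```

Every `hal config ...` change is staged locally until `hal deploy apply` rolls it out to the services.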
The other Spinnaker services are worth learning more about, too. Orca, Clouddriver, Deck, Gate, and Kayenta are some of the key services, with Orca and Deck probably being the most relevant to custom stages, as you’ll learn about later in this tutorial. You can also try using more of the various stages that exist in Spinnaker.
Familiar with Jenkins? Run a Jenkins job
If you use Jenkins, you may want to try a Jenkins job or a Script job. These stage types enable Spinnaker users to extend Spinnaker’s functionality through Jenkins. One downside here is that this creates a dependency on Jenkins, but if that’s readily available then this may suffice for you.
Run a container image in Kubernetes
For a multi-platform deploy tool, Spinnaker has a very Kubernetes-native feel. An example of this is the Run Job (Manifest) stage. In this stage’s configuration, you can execute an arbitrary job in your pipeline via a Kubernetes job manifest that runs a Docker image.
Try using this stage in a pipeline; it’s a good introduction to the concept of custom stages in Spinnaker.
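To make that concrete, the Run Job (Manifest) stage accepts an ordinary Kubernetes Job manifest — something along these lines (the name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job            # placeholder name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: hello
          image: alpine:3.12   # any image your task needs
          command: ["sh", "-c", "echo Hello from a Spinnaker Run Job"]
```

Spinnaker submits the Job to your cluster and the stage completes when the Job does.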
Creating your first custom stage
Now that you are a little more familiar with Spinnaker, you can add a new stage type that doesn’t yet exist. We’ll now go over the various ways to do that.
Custom run jobs
First try one of the most popular ways of adding a custom stage: a custom run job (or just “custom job”). Once you know how to run a Kubernetes job, this should be easy. The benefit to this over the regular run job as described above is that this stage becomes reusable throughout the entire Spinnaker deployment, and you won’t need to define the job’s YML every time you want this job. There’s no real coding required here; just an update to Orca’s configuration (Orca is the orchestration service in Spinnaker responsible for executing stages). This example in the Spinnaker docs shows how user-entered parameters are passed to the job and how the manifest YML of the K8s job is defined in Orca’s config.
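Sketching what that configuration looks like: Orca reads preconfigured jobs from its local config (orca-local.yml). The stage label, manifest, and parameter below are all illustrative — the linked Spinnaker docs are the authoritative reference for the exact fields:

```yaml
job:
  preconfigured:
    kubernetes:
      - label: Hello Job            # name shown in the pipeline stage picker (illustrative)
        type: helloJob
        enabled: true
        cloudProvider: kubernetes
        account: spinnaker          # your Kubernetes account name in Clouddriver
        credentials: spinnaker
        waitForCompletion: true
        parameters:
          - name: Greeting
            defaultValue: hello
            # maps the user-entered value into the manifest below
            mapping: manifest.spec.template.spec.containers[0].env[0].value
        manifest:
          apiVersion: batch/v1
          kind: Job
          metadata:
            name: hello-job
          spec:
            template:
              spec:
                restartPolicy: Never
                containers:
                  - name: hello
                    image: alpine:3.12
                    env:
                      - name: GREETING
                        value: hello
                    command: ["sh", "-c", "echo $GREETING"]
```

Once Orca restarts with this config, the stage appears in the UI for every pipeline, with the parameters rendered as input fields.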
Perhaps you saw the Webhook stage? Another easy way to add a custom stage is via a custom webhook. Again, no coding required, just configuration on the Orca service. Where this comes in handy is for cases where you want to make API calls to an external system, like GitHub, Jira, etc. Note that this isn’t how you typically interface with infrastructure-level APIs, like Kubernetes or AWS (that’s usually the responsibility of another Spinnaker service, Clouddriver, which we’ll revisit when discussing plugins). This is primarily for API interactions with SDLC-focused tools.
When configuring this custom stage, your end-user can specify parameters which are in turn passed along in the API call.
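A preconfigured webhook stage is likewise just Orca configuration (again in orca-local.yml). The label, endpoint, and parameter here are made up for illustration:

```yaml
webhook:
  preconfigured:
    - label: Create Ticket          # name shown in the stage picker (illustrative)
      type: createTicket
      enabled: true
      description: Opens a ticket in an external tracker
      method: POST
      url: https://issue-tracker.example.com/api/tickets   # placeholder endpoint
      customHeaders:
        Content-Type:
          - application/json
      payload: |-
        { "summary": "${parameterValues['summary']}" }
      parameters:
        - name: summary
          label: Ticket summary
          defaultValue: Deployment issue
```

The values your end-user enters for each parameter are substituted into the payload before the API call is made.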
The next level of integration beyond a custom run job or webhook is a plugin. The plugin framework, introduced earlier this year, is there for the community to build customizations that are reusable and shareable with the rest of the community. While creating a plugin is a little more effort, there are a couple of reasons why you might opt for one:
- You want this custom stage to be usable for a wider audience amongst the Spinnaker community. Plugins are meant to be shared, whereas the custom stage types described earlier are intended for one or just a few installs. A plugin is easier to consume and configure than a custom job.
- The parameters you require for your custom stage are more complex than just simple text fields. For example, you don’t want a user to enter a password or secret key value in plain text in the UI. Or maybe you need a drop-down menu. Or you want the field’s default value to come from an API call to an external system.
- This custom stage might also be a part of a more extensive integration. For example, a cloud provider can be built as a plugin. Cloud provider plugins include functionality in Clouddriver (which I mentioned earlier as the service doing all the infrastructure-level interactions), along with custom stages. A plugin allows for packaging all these parts together.
Introducing a custom stage plugin
To become more familiar with plugins, try converting the custom run job you created earlier into a plugin. A great example of this type of plugin is this one by Pulumi. You can see the custom run job manifest in this file; everything else is essentially the plugin wrapper around this job. The plugin is a Gradle project written in Kotlin (don’t worry, you can implement all of this in Java if you prefer).
The part of this plugin that hooks into Spinnaker is the PulumiPreConfiguredStage class here. This class extends an extension point in the Spinnaker project, specifically PreconfiguredJobConfigurationProvider in Orca. Extension points form the API of Spinnaker’s plugin framework (more on the architecture of this framework). Most plugins extend these extension points. When this plugin is added to Spinnaker, this code is recognized as an implementation of an Orca extension point and injected into the Orca service.
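In pseudocode-style Kotlin, such an extension point implementation is roughly shaped like this. This is a non-compiling sketch: the interface lives in Orca’s source, and the class name, config property, and return type are simplified from the Pulumi example rather than the exact API:

```kotlin
// Sketch only — depends on Orca's plugin API, so it won't build standalone.
@Extension
class ExamplePreConfiguredStage(
    private val config: PluginConfig              // values from the plugin's config block
) : PreconfiguredJobConfigurationProvider {

    // Orca calls this to discover the preconfigured job stages this plugin provides
    override fun getJobConfigurations(): List<KubernetesPreconfiguredJobProperties> {
        val job = KubernetesPreconfiguredJobProperties(
            enabled = true,
            label = "Example Job",                // stage name shown in Deck (illustrative)
            type = "exampleJob",
            cloudProvider = "kubernetes",
            account = config.account              // pulled from the plugin's config
        )
        // ...attach the Kubernetes Job manifest and user-facing parameters here...
        return listOf(job)
    }
}
```

The @Extension annotation is what lets the framework discover the class and inject it into Orca at startup.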
You can see PulumiPreConfiguredStage takes values from PluginConfig and passes them to the Kubernetes job. Unlike the non-plugin custom job, a plugin is configured at Spinnaker’s top-level configuration, which is usually via halyard/kleat or the Kubernetes operator, instead of the Orca service’s local config.
The config in halyard for a plugin looks something like this:
```yaml
spinnaker:
  extensibility:
    plugins:
      armory.example:
        enabled: true
        config:
          pluginConfig: someValue
          myConfig: myValue
```
The plugin config is passed to whichever class in the plugin has the @PluginConfiguration annotation.
Plugins are organized in Spinnaker by plugin repos (not to be confused with the git repo that may host your plugin source code). The plugin repo is a file specifying a set of plugins which can be added. You can see an example of this repositories.json file here, which in turn points to a list of plugins.
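To make the two files concrete: repositories.json is simply a list of `{"id": ..., "url": ...}` entries, where each url points at a plugin list like the one sketched below. Every id, URL, and version here is invented for illustration; the shape follows the PF4J update-repository convention that Spinnaker’s plugin framework builds on:

```json
[
  {
    "id": "Example.CustomStagePlugin",
    "description": "Adds an example custom stage",
    "provider": "https://example.com",
    "releases": [
      {
        "version": "0.0.1",
        "date": "2020-10-01T00:00:00.000Z",
        "requires": "orca>=0.0.0",
        "url": "https://example.com/plugins/custom-stage-plugin-0.0.1.zip"
      }
    ]
  }
]
```

Spinnaker resolves the plugin id and version from this list and downloads the release artifact from the given url.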
Your plugin development environment
There are several ways to set up your plugin development environment. Which method you choose depends mostly on how much RAM your dev environment has and how complicated your plugin might be.
Developing on minnaker:
One option is to reuse your minnaker instance. The advantage here is that you don’t need to set up each Spinnaker service yourself; everything is already running in minnaker, including Redis and S3 (minio). However, minnaker requires 8-10 GB of memory, and you also need to account for an IDE like IntelliJ for development work. So if you’re running minnaker locally, I recommend at least 4 CPUs and 16 GB of memory, preferably 32 GB. Some instructions on how to run minnaker locally via a multipass VM for a plugin dev environment are here.
You can also run minnaker remotely. This requires a tool like Telepresence to connect a locally running instance of, say, Orca to a remote minnaker/Spinnaker instance. Check out these steps for how to do that.
Developing without minnaker:
If you prefer, you can also run just the specific Spinnaker services you need. For a custom stage, you’ll at least need Orca. Most Spinnaker services also depend on Front50, which fronts a Redis or SQL datastore, plus an S3 store (I often use minio). Deck serves as Spinnaker’s UI, and it in turn relies on Gate, Spinnaker’s main API gateway. If your plugin interacts with infrastructure-level resources, like AWS services or K8s CRDs, there will also likely be a Clouddriver component. You also need Echo, Spinnaker’s event bus, if you want to test your stage in an end-to-end pipeline. Altogether, you likely want to run these services and their dependencies:
- Orca, Front50, Deck, and Gate
- a Redis or SQL datastore
- an S3 store
- Clouddriver (optional, if you need to manipulate resources at the infrastructure layer)
- Echo (optional, if you need to run a pipeline for testing purposes)
The other services are typically not required, but you can learn more about what each service does here in case you might need something else.
Where you run the plugin itself
Once you have the services that your plugin needs up and running, you can run the actual plugin you’re developing. To see how a plugin can be applied to a service like Orca or Deck, which you probably have running in your IDE of choice, check out these instructions starting at the “Build the plugin” section here.
More advanced plugins
Now we can explore more advanced custom stages. First we’ll look at two more extension points:
In another example plugin, pf4jStagePlugin, we can see that the plugin extends StageDefinitionBuilder and Task, the extension points for implementing a new stage and its tasks. The StageDefinitionBuilder implementation defines a taskGraph function, which is a collection of this particular stage’s tasks. Tasks are what are actually executed as part of a stage run. Your custom stage can include multiple jobs (tasks), and these tasks are re-runnable.
On the Deck (UI) side, the plugin defines RandomWaitStage here, which has a configuration and some execution details defined. The config determines how a user configures this stage. This is why these plugins are pretty powerful from a UI perspective: you can manipulate the UI for your stage however you want and add things like validators for the fields, drop-down fields, etc. The execution part is where you can provide information to users about the actual execution of this stage.
So now you know how to create a plugin that adds UI and takes advantage of more extension points to give your custom stage a more “native” feel.
Where do plugins end up? Where do people find them?
Plugins typically end up in your team’s own plugin repo. Armory and Netflix each have a number of plugins in their own repos, like this one. There’s also a Spinnaker community spot for plugins (at the time this article was written, it is brand new with a single plugin – but there will be many more soon!). It’s up to you whether you’d like to keep your plugin private, public but part of your own project, or donate it to the Spinnaker community where others can most easily find and use it.
So now you know about custom stages. Hope you found this tutorial useful! Later I’ll also link videos here from this month’s Spinnaker Summit describing some of these concepts in even more detail.