
Announcing Spinnaker Evaluate Artifacts Stage


Apr 9, 2021 by Stephen Atwell

Armory’s new Evaluate Artifacts stage allows you to both create an artifact from within a pipeline and to inject Spinnaker parameters into any artifact. Only certain deployment stages, such as deploying a Kubernetes manifest, support using Spring Expression Language (SpEL) to reference parameters out of the box. Other stages, such as the Terraform Integration stage, lack this stage-specific SpEL support. This blog explores how the new ‘Evaluate Artifacts’ stage can be leveraged to inject Spinnaker parameters into your Terraform deployment pipeline.

The pipeline we’re going to configure

The Terraform Script

Before creating our pipeline in Spinnaker, we need to define the Terraform script that we want to deploy. In this case, it is a simple script that deploys an NGINX container. The script takes three input variables: the namespace, the deployment name, and the number of replicas. We will define matching Spinnaker pipeline parameters and inject them into the script via the new Evaluate Artifacts stage. Here is the script I’m deploying:

variable "namespace" {
  type = string
}

variable "deployName" {
  type = string
}

variable "replicas" {
  type = number
}

resource "kubernetes_namespace" "test" {
  metadata {
    name = var.namespace
  }
}

resource "kubernetes_deployment" "test" {
  metadata {
    name      = var.deployName
    namespace = kubernetes_namespace.test.metadata.0.name
  }
  spec {
    replicas = var.replicas
    selector {
      match_labels = {
        app = "MyTestApp"
      }
    }
    template {
      metadata {
        labels = {
          app = "MyTestApp"
        }
      }
      spec {
        container {
          image = "nginx"
          name  = "nginx-container"
          port {
            container_port = 80
          }
        }
      }
    }
  }
}

Configuring the Spinnaker Pipeline

Our Spinnaker pipeline is going to take some parameters from the user. It leverages Armory’s Evaluate Artifacts stage to encode these parameters into Terraform artifacts. Then, it uses Armory’s Terraform stage to create an execution plan and deploy the infrastructure using this plan.

Parameters

To create my Spinnaker pipeline, I start by adding two parameters. The first is called ‘nameAndSpace’, and it is intended to receive JSON as input. This is useful when you are triggering a pipeline from an external system and want to simplify passing parameters from that system. The second parameter is called ‘replicas’ and is passed as a single value, which is more convenient when humans are manually invoking the pipeline and entering in parameters.
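In the pipeline's JSON representation, these two parameters live under `parameterConfig`. A rough sketch is below; the defaults shown are just the example values used later in this post, and the descriptions are illustrative:

```json
"parameterConfig": [
  {
    "name": "nameAndSpace",
    "description": "JSON object with 'name' and 'space' keys",
    "default": "{\"name\":\"test-deployment\",\"space\":\"test-space-param\"}",
    "required": true
  },
  {
    "name": "replicas",
    "description": "Number of NGINX replicas to run",
    "default": "2",
    "required": true
  }
]
```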

Evaluate Artifacts

In my pipeline, I add an ‘Evaluate Artifacts’ stage. I’m going to use this stage to create a Terraform variables file. In the stage, I add a new artifact with the following payload and name it ‘testvariables.tfvars’:

namespace="${#readJson(parameters['nameAndSpace'])['space']}"
deployName="${#readJson(parameters['nameAndSpace'])['name']}"
replicas=${parameters.replicas}

This payload creates an artifact in the correct format for a Terraform variables file. It contains three variables: the first two are read from the JSON fed into the ‘nameAndSpace’ parameter, and the third is read directly from the ‘replicas’ parameter. Since replicas is a number, it does not need quotes. All three values are SpEL expressions.
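For example, with ‘nameAndSpace’ set to `{"name":"test-deployment","space":"test-space-param"}` and ‘replicas’ set to 2 (the values we use later when running the pipeline), the evaluated artifact would contain:

```
namespace="test-space-param"
deployName="test-deployment"
replicas=2
```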

The configured Evaluate Artifacts stage, which leverages SpEL to read Spinnaker parameters and store them in an artifact.

Now, we’ll add a second artifact that contains the Terraform script provided earlier as its payload. We’ll name this artifact ‘main.tf’.

We could combine both of these artifacts into a single artifact here by directly using SpEL within this second artifact instead of having a separate variables file. However, many companies store their Terraform scripts in source control systems like git. With two files, this second artifact can be seamlessly moved to git while keeping only the Terraform variables within Spinnaker, if so desired.

Terraform Plan

The Terraform plan stage takes the Terraform script, combines it with the Terraform variables file, and builds a Terraform execution plan. This execution plan gets stored in another artifact for later use.

The configuration of the Terraform 'Plan' stage

To configure the stage, add a new ‘Terraform’ stage with an action of ‘Plan’. We select the ‘main.tf’ artifact as the ‘Main Terraform artifact’ and also specify the ‘testvariables.tfvars’ artifact as an additional ‘Terraform Artifact’. Under ‘Produces Artifacts’, we add a new artifact named ‘planfile’ of type ‘embedded artifact’. This is where the stage stores the Terraform execution plan.
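Conceptually, this stage performs the equivalent of running Terraform's own CLI against the script with our variables file and saving the resulting plan (a sketch of the analogous commands, where ‘planfile’ matches the produced artifact's name):

```
terraform init
terraform plan -var-file=testvariables.tfvars -out=planfile
```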

Manual Judgment

One of Terraform's strengths is that you can review its execution plan before applying it. The Manual Judgment stage gives users a chance to review the output of ‘Terraform Plan’ and decide whether they wish to apply it. Many companies manually review execution plans before deploying updates to production environments. This stage needs no extra configuration.

Terraform Apply

This stage applies the Terraform execution plan. The stage is of type ‘Terraform’, and its action is ‘Apply’. For the main Terraform artifact we specify ‘main.tf’. We also pass the planfile in as an extra Terraform artifact so that the stage uses our execution plan.
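This corresponds to applying the saved plan with Terraform's CLI, which guarantees that exactly the reviewed plan is executed (a sketch of the analogous command):

```
terraform apply planfile
```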

The configuration of the Terraform Apply Stage.

Running the Pipeline

To run the pipeline we will specify the following parameters:

nameAndSpace: {"name":"test-deployment","space":"test-space-param"}

replicas: 2
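If you trigger the pipeline from an external system instead, a POST to Spinnaker's API gateway (Gate) at `/pipelines/{application}/{pipelineNameOrId}` carries the same parameters in its body. A sketch of that request body is below; note that ‘nameAndSpace’ is passed as a JSON string, which is why the ‘Evaluate Artifacts’ stage parses it with `#readJson`:

```json
{
  "type": "manual",
  "parameters": {
    "nameAndSpace": "{\"name\":\"test-deployment\",\"space\":\"test-space-param\"}",
    "replicas": "2"
  }
}
```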

When the pipeline runs, the Evaluate Artifacts stage creates our two artifacts. It converts the JSON in the ‘nameAndSpace’ parameter into two Terraform variables and writes them, along with the number of replicas, into a single tfvars file. Terraform then builds an execution plan using these variables. Finally, if I approve the execution plan during the Manual Judgment stage, Armory Enterprise deploys this configuration using Terraform.

The computed Terraform execution plan

I hope this post helped you understand the power of the new Armory Evaluate Artifacts stage and how you can use it to leverage SpEL syntax to reference parameters for any artifact type.

Evaluate Artifacts is available as a plugin for Armory Enterprise for Spinnaker 2.24.x or later and OSS Spinnaker 1.24.x or later. You can find more information in Armory’s Evaluate Artifacts plugin documentation or by reaching out to your CS Representative.
