Announcing Spinnaker Evaluate Artifacts Stage


Apr 9, 2021 by Stephen Atwell

Armory’s new Evaluate Artifacts stage allows you both to create an artifact from within a pipeline and to inject Spinnaker parameters into any artifact. Only certain deployment stages, such as deploying a Kubernetes manifest, support using Spring Expression Language (SpEL) to reference parameters out of the box. Other stages, such as the Terraform Integration stage, lack this stage-specific SpEL support. This blog explores how the new ‘Evaluate Artifacts’ stage can be leveraged to inject Spinnaker parameters into your Terraform deployment pipeline.

The pipeline we’re going to configure

The Terraform Script

Before creating our pipeline in Spinnaker, we need to define the Terraform script that we want to deploy. In this case, it is a simple script that deploys an NGINX container. The script takes three variables: the namespace, the deployment name, and the number of replicas. We will also define these as Spinnaker pipeline parameters, which the new Evaluate Artifacts stage then consumes. Here is the script I’m deploying:

variable "namespace" {
  type = string
}

variable "deployName" {
  type = string
}

variable "replicas" {
  type = number
}

resource "kubernetes_namespace" "test" {
  metadata {
    name = var.namespace
  }
}

resource "kubernetes_deployment" "test" {
  metadata {
    name      = var.deployName
    namespace = var.namespace
  }

  spec {
    replicas = var.replicas

    selector {
      match_labels = {
        app = "MyTestApp"
      }
    }

    template {
      metadata {
        labels = {
          app = "MyTestApp"
        }
      }

      spec {
        container {
          image = "nginx"
          name  = "nginx-container"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}

Configuring the Spinnaker Pipeline

Our Spinnaker pipeline is going to take some parameters from the user. It leverages Armory’s Evaluate Artifacts stage to encode these parameters into Terraform artifacts. Then, it uses Armory’s Terraform stage to create an execution plan and deploy the infrastructure using this plan.


To create my Spinnaker pipeline, I start by adding two parameters. The first is called ‘nameAndSpace’, and it is intended to receive JSON as input. This is useful when you are triggering a pipeline from an external system and want to simplify passing parameters from that system. The second parameter is called ‘replicas’ and is passed as a single value, which is more convenient when humans are manually invoking the pipeline and entering parameters.

Evaluate Artifacts

In my pipeline, I add an ‘Evaluate Artifacts’ stage. I’m going to use this stage to create a Terraform variable file. In the stage, I add a new artifact with the following payload and name it ‘testvariables.tfvar’.


This payload creates an artifact with the correct format for Terraform variables. It contains three variables: the first two are read from the JSON that we feed into the ‘nameAndSpace’ parameter, and the third is read directly from the ‘replicas’ parameter. Since replicas is a number, it does not need quotes. These values are all SpEL expressions.

The configured Evaluate Artifacts stage, which leverages SpEL to read Spinnaker parameters and store them in an artifact.
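The exact payload appears only in the screenshot, so here is a rough sketch of what a variable file matching that description could look like. The `#readJson` helper and the `parameters` map are Spinnaker pipeline expression syntax; the key names are taken from the JSON example used when running the pipeline later in this post.

```hcl
# Hypothetical reconstruction of the 'testvariables.tfvar' payload.
# The first two values are pulled out of the 'nameAndSpace' JSON
# parameter via SpEL; 'replicas' is numeric, so it is not quoted.
deployName = "${#readJson(parameters.nameAndSpace)['name']}"
namespace  = "${#readJson(parameters.nameAndSpace)['space']}"
replicas   = ${parameters.replicas}
```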

Now, we’ll add a second artifact that contains the Terraform script provided earlier as its payload. We’ll name this artifact ‘’.

We could combine both of these artifacts into a single artifact here by directly using SpEL within this second artifact instead of having a separate variables file. However, many companies store their Terraform scripts in source control systems like git. With two files, this second artifact can be seamlessly moved to git while keeping only the Terraform variables within Spinnaker, if so desired.

Terraform Plan

The Terraform plan stage takes the Terraform script, combines it with the Terraform variables file, and builds a Terraform execution plan. This execution plan gets stored in another artifact for later use.

The configuration of the Terraform 'Plan' stage

To configure the stage, add a new ‘Terraform’ stage with an action of ‘Plan’. Select the ‘’ artifact as the ‘Main Terraform artifact’, and specify the ‘testvariables.tfvar’ artifact as an additional ‘Terraform Artifact’. Under ‘Produces Artifacts’, add a new artifact named ‘planfile’ of type ‘embedded artifact’. This is where the stage stores the Terraform execution plan.

Manual Judgment

One of the strengths of Terraform is that you can review its execution plan before applying it. The Manual Judgment stage gives users a chance to review the output of Terraform plan and decide whether or not they wish to apply it. Many companies manually review execution plans before deploying updates to production environments. This stage requires no extra configuration.

Terraform Apply

This stage applies the Terraform execution plan. The stage is of type ‘Terraform’, and its action is ‘Apply’. For the main Terraform artifact we specify ‘’. We also pass the planfile in as an extra Terraform artifact so that the stage uses our execution plan.

The configuration of the Terraform Apply Stage.

Running the Pipeline

To run the pipeline we will specify the following parameters:

nameAndSpace: {"name":"test-deployment","space":"test-space-param"}

replicas: 2

When the pipeline runs, the Evaluate Artifacts stage creates our two artifacts. It converts the JSON of the ‘nameAndSpace’ parameter into two Terraform variables and includes both of them, plus the number of replicas, in a single tfvars file. Terraform then builds an execution plan using these variables. Finally, if I approve the execution plan during the Manual Judgment stage, Armory Enterprise deploys this configuration using Terraform.
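With the parameter values above, the evaluated variable file would resolve to something like the following sketch (assuming the SpEL expressions map ‘name’ to deployName and ‘space’ to namespace, as described in the Evaluate Artifacts section):

```hcl
# What the evaluated 'testvariables.tfvar' artifact resolves to for the
# example inputs: 'name' and 'space' come from the nameAndSpace JSON,
# and 'replicas' stays an unquoted number.
deployName = "test-deployment"
namespace  = "test-space-param"
replicas   = 2
```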

The computed Terraform execution plan

I hope this post helped you understand the power of the new Armory Evaluate Artifacts stage and how you can use it to leverage SpEL syntax to reference parameters for any artifact type.

Evaluate Artifacts is available as a plugin for Armory Enterprise for Spinnaker 2.24.x or later and OSS Spinnaker 1.24.x or later. You can find more information in Armory’s Evaluate Artifacts plugin documentation or by reaching out to your CS Representative.
