
Continuous Integration using Jenkins and HashiCorp Terraform with Spinnaker on Amazon EKS

Jul 1, 2020 by Armory

Note: We are delighted to host this guest post by Irshad Buchh from AWS and Meghan Liese from HashiCorp. The post can also be found on the AWS Open Source Blog.

This blog post is the result of a collaboration between Amazon Web Services and HashiCorp. HashiCorp is an AWS Partner Network (APN) Advanced Technology Partner with AWS Competencies in both DevOps and Containers.


Customers running microservices-based applications on Amazon Elastic Kubernetes Service (Amazon EKS) are looking for guidance on architecting complete end-to-end Continuous Integration (CI) and Continuous Deployment / Delivery (CD) pipelines using Jenkins and Spinnaker. Jenkins is a very popular CI server with great community support and many plugins (Slack, GitHub, Docker, Build Pipeline) available. Spinnaker provides automated release, built-in deployment, and supports blue/green deployment out of the box.

This post, a companion piece to Continuous Delivery using Spinnaker on Amazon EKS, focuses on Continuous Integration and discusses the installation and configuration of Jenkins on Amazon EC2 using HashiCorp Terraform. We will also discuss the creation of Spinnaker pipelines, combinations of stages that enable powerful coordination and branching. These pipelines can be started manually or triggered automatically by an event, such as a new Docker image appearing in the Docker registry. Other services and technologies used in this post include Amazon EC2, AWS Cloud9, Docker Hub, and Amazon EKS.

Overview of concepts


Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It is controlled via an easy-to-use command line interface (CLI), as well as a free-to-use SaaS offering called Terraform Cloud and a private installation for enterprise. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied. The key features of Terraform are: Infrastructure as Code, Execution Plans, Resource Graph, and Change Automation.
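As a hedged illustration of what a Terraform configuration file looks like (this fragment is not from the post's repository; the AMI ID, instance type, and tags are placeholders), a single EC2 instance such as the Jenkins server could be described as:

```hcl
# Illustrative sketch only -- not the configuration used in the
# amazon-eks-jenkins-terraform repository. All values are placeholders.
provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "jenkins" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.medium"
  key_name      = "ibuchh-key" # the EC2 key pair created earlier

  tags = {
    Name = "jenkins-ci"
  }
}

output "jenkins_ip" {
  value = aws_instance.jenkins.public_ip
}
```

Running `terraform plan` against a file like this shows the execution plan; `terraform apply` creates the instance and prints the declared output values.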


Jenkins is a self-contained, open source automation server which can be used to automate all sorts of tasks related to building, testing, and delivering or deploying software. It can be installed through native system packages, Docker, or even run standalone on any machine with a Java Runtime Environment (JRE) installed.


To implement the instructions in this post, you will need the following:

- An AWS account with access to AWS Cloud9 and Amazon EC2
- An Amazon EKS cluster with Spinnaker installed, as described in the companion post Continuous Delivery using Spinnaker on Amazon EKS
- A GitHub account
- A Docker Hub account


In this post, I will discuss the following architecture for continuous integration:

Continuous Integration Architecture

Fig 1. Continuous Integration Architecture

Overview of steps:

- Create a Jenkins CI server using Terraform
- Configure Jenkins (plugins and credentials)
- Configure the Jenkins job and pipeline
- Create and configure Spinnaker pipelines
- Run the Spinnaker pipelines manually
- Modify code and push the change to trigger the end-to-end CI/CD flow

Create a Jenkins CI server using Terraform

Provisioning a Jenkins CI server manually can be error-prone and time-consuming, so I will configure the Jenkins Continuous Integration (CI) server using Infrastructure as Code (IaC). For this post, I have decided to use Terraform. Log in to the AWS Management Console and create an EC2 key pair (in my examples, the name of the key pair is ibuchh-key). Using your GitHub account, fork the code sample repository at https://github.com/aws-samples/amazon-eks-jenkins-terraform.

From the AWS Cloud9 IDE, open a shell terminal and do the following (replace aws-samples with your GitHub account):

git clone https://github.com/aws-samples/amazon-eks-jenkins-terraform.git

cd amazon-eks-jenkins-terraform/terraform/

terraform init

terraform plan

terraform apply -auto-approve

Terraform apply

Fig 2. Output of Terraform apply

Terraform apply will also output the IP address of the Jenkins CI server as shown above.

Terraform will provision an Amazon EC2 instance and install git, Apache Maven, Docker, Java 8, and Jenkins, as shown in the following provisioning script:

sudo yum -y update

echo "Install Java JDK 8"
sudo yum remove -y java
sudo yum install -y java-1.8.0-openjdk

echo "Install Maven"
sudo yum install -y maven 

echo "Install git"
sudo yum install -y git

echo "Install Docker engine"
sudo yum update -y
sudo yum install docker -y
sudo chkconfig docker on

echo "Install Jenkins"
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum install -y jenkins
sudo usermod -a -G docker jenkins
sudo chkconfig jenkins on

echo "Start Docker & Jenkins services"
sudo service docker start
sudo service jenkins start
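Provisioning is asynchronous: Jenkins can take a minute or two to come up after `terraform apply` finishes. As a convenience (this helper is my own sketch, not part of the sample repository; the URL, helper name, and retry counts are illustrative), a small script can poll the server until it answers:

```shell
#!/bin/sh
# wait_for_http: poll a URL until it responds or the retries run out.
# Hypothetical helper -- not part of the amazon-eks-jenkins-terraform repo.
wait_for_http() {
  url=$1
  retries=${2:-30}   # how many attempts
  delay=${3:-10}     # seconds between attempts
  i=0
  while [ "$i" -lt "$retries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      return 0       # server answered
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1           # gave up
}

# Example (replace with the IP address printed by terraform apply):
# wait_for_http "http://jenkins_ip_address:8080/login" 30 10
```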

Using a browser, open the page at http://jenkins_ip_address:8080; the Jenkins admin page will be displayed:

Jenkins admin page

Fig 3. Jenkins admin page

Using the AWS Cloud9 shell terminal, log in to the Jenkins CI server and retrieve the administrator password by running the following command:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Enter this administrator password on the Jenkins console by pasting it into the input box, and click Next. Then click Install suggested plugins.

Configure Jenkins

1. Plugins:

Log in to the Jenkins console and click Manage Jenkins → Manage Plugins → Available. Select and install the Docker plugin and the GitHub Integration plugin, then restart Jenkins by selecting the Restart Jenkins check box as shown here:

Jenkins Plugins

Fig 4. Jenkins plugins

2. Credentials:

Docker Hub: Click Credentials → global → Add Credentials, choose Username with password as Kind, enter the Docker Hub username and password and use dockerHubCredentials for ID.

GitHub: Click Credentials → Global → Add Credentials , choose Username with password as Kind, enter the GitHub username and password and use gitHubCredentials for ID.

Configure the Jenkins job and pipeline

From the Jenkins console, click New item. Choose Multibranch Pipeline, name it petclinic and click OK.

Jenkins Multibranch Pipeline
 Fig 5. Jenkins Multibranch Pipeline

Choose GitHub and from the drop-down select the GitHub credentials. Enter the GitHub URL as shown below and click Save to save the Jenkins job.

Jenkins job details

Fig 6. Jenkins job details

The Jenkins build executor will check out and scan the GitHub repository and execute the stages in the pipeline as laid out in the Jenkinsfile shown below. Make sure to replace the registry inside the build stage with your own Docker registry name.

pipeline {
    agent any
    triggers {
        pollSCM "* * * * *"
    }
    stages {
        stage('Build Application') {
            steps {
                echo '=== Building Petclinic Application ==='
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test Application') {
            steps {
                echo '=== Testing Petclinic Application ==='
                sh 'mvn test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'
                }
            }
        }
        stage('Build Docker Image') {
            when {
                branch 'master'
            }
            steps {
                echo '=== Building Petclinic Docker Image ==='
                script {
                    app = docker.build("ibuchh/petclinic-spinnaker-jenkins")
                }
            }
        }
        stage('Push Docker Image') {
            when {
                branch 'master'
            }
            steps {
                echo '=== Pushing Petclinic Docker Image ==='
                script {
                    GIT_COMMIT_HASH = sh (script: "git log -n 1 --pretty=format:'%H'", returnStdout: true)
                    SHORT_COMMIT = "${GIT_COMMIT_HASH[0..7]}"
                    docker.withRegistry('', 'dockerHubCredentials') {
                        app.push("$SHORT_COMMIT")
                        app.push('latest')
                    }
                }
            }
        }
        stage('Remove local images') {
            steps {
                echo '=== Delete the local docker images ==='
                sh("docker rmi -f ibuchh/petclinic-spinnaker-jenkins:latest || :")
                sh("docker rmi -f ibuchh/petclinic-spinnaker-jenkins:$SHORT_COMMIT || :")
            }
        }
    }
}
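The image tag used in the push stage is the first eight characters of the current commit hash (the Groovy range `[0..7]` is inclusive). The same computation in plain shell, shown here as a self-contained sketch against a throwaway demo repository rather than the Petclinic checkout, looks like:

```shell
#!/bin/sh
# Sketch: reproduce the pipeline's image tag -- the first eight characters
# of the HEAD commit hash -- using a throwaway demo git repository.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo hello > file.txt
git add file.txt
git commit -qm "initial commit"

# Same idea as GIT_COMMIT_HASH[0..7] in the Jenkinsfile (8 characters):
GIT_COMMIT_HASH=$(git log -n 1 --pretty=format:'%H')
SHORT_COMMIT=$(printf '%s' "$GIT_COMMIT_HASH" | cut -c1-8)
echo "$SHORT_COMMIT"
```

Tagging images by short commit hash makes each push traceable back to the exact source revision that produced it.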

Below is a screenshot of the final run; if all goes well, you will see a new Docker image pushed to your Docker registry.

Pipeline stages

Fig 7. Pipeline stages

Create and configure Spinnaker pipelines

A pipeline is a sequence of stages provided by Spinnaker, ranging from functions that manipulate infrastructure (deploy, resize, disable) to utility scaffolding functions (manual judgment, wait, run Jenkins job) that together precisely define your runbook for managing your deployments. Pipelines help you manage deployments consistently, repeatably, and safely.
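Under the hood, Spinnaker stores each pipeline as JSON. The following abbreviated sketch is illustrative only -- it is not exported from this post's setup, and field names can vary between Spinnaker versions -- but it shows roughly how a Docker-registry trigger and a Deploy (Manifest) stage fit together in one pipeline definition:

```json
{
  "application": "petclinic",
  "name": "DeployToUAT",
  "triggers": [
    {
      "type": "docker",
      "enabled": true,
      "account": "my-docker-registry",
      "organization": "ibuchh",
      "repository": "ibuchh/petclinic-spinnaker-jenkins"
    }
  ],
  "stages": [
    {
      "type": "deployManifest",
      "name": "Deploy (Manifest)",
      "account": "eks-uat",
      "cloudProvider": "kubernetes",
      "source": "artifact"
    }
  ]
}
```

The UI steps below build an equivalent definition without ever editing JSON by hand.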

1. Log in to the AWS Cloud9 IDE environment and open a new terminal. Run the following command:

kubectl get svc -n spinnaker

Spinnaker UI endpoints

Fig 8. Spinnaker UI endpoints

2.  Using a browser, log in to the Spinnaker UI using the spin-deck-public service endpoint shown in the output above.

Select the Applications tab, then Actions → Create Application. Enter petclinic as the Name, enter a valid email address, and leave the rest of the fields blank.

Spinnaker Application

Fig 9. Spinnaker Application

3.  On the Pipelines tab, click Configure a new pipeline, enter DeployToUAT as the Pipeline Name, and click Create.

Spinnaker DeployToUAT pipeline

Fig 10. Spinnaker DeployToUAT pipeline

4.  Click Add Artifact and choose Kind → GitHub, File path → kubernetes/petclinic.yaml, Display name → Petclinic-Manifest, and Content URL → the raw URL of the manifest file in your GitHub repository.

Pipeline Artifacts

Fig 11. Pipeline artifacts

5.  Click Add Trigger and choose Type → Docker Registry, Registry Name → your Docker registry as configured in Spinnaker, Organization → your Docker registry name, Image → Docker image as created by Jenkins.

Pipeline Trigger

Fig 12. Pipeline trigger

6.  Click Add Stage, choose Stage Type → Deploy (Manifest), Account → eks-uat, Application → petclinic, Manifest Source → Artifact, Manifest Artifact → Petclinic-Manifest, Artifact Account → spinnaker-github.

Deploy Manifest Stage

Fig 13. Deploy Manifest Stage

7.  Click Save to save the changes to the DeployToUAT pipeline.

8.  Under the Pipelines tab, click Create, enter ManualApproval as the Pipeline Name, and click Create. Click Add Trigger and choose Type → Pipeline, Application → petclinic, Pipeline → DeployToUAT.

ManualApproval pipeline
 Fig 14. ManualApproval pipeline

9.  Click Add Stage, choose Stage Type → Manual Judgement, and under Judgement Inputs add two options, Approve and Reject, as shown below:

Manual Judgement Stage

Fig 15. Manual judgement stage

10.  Click Save to save the changes to the ManualApproval pipeline.

11.  Under the Pipelines tab, click Create, enter DeployToProd as the Pipeline Name, and click Create. Click Add Trigger and choose Type → Pipeline, Application → petclinic, Pipeline → ManualApproval.

12.  Click Add Artifact and choose Kind → GitHub, File path → kubernetes/petclinic.yaml, Display name → Petclinic-Manifest, and Content URL → the raw URL of the manifest file in your GitHub repository.

Pipeline Artifacts
 Fig 16. Pipeline Artifacts

13.  Click Add Trigger and choose Type → Docker Registry, Registry Name → your Docker registry as configured in Spinnaker, Organization → your Docker registry name, Image → Docker image created by Jenkins.

Fig 17. Pipeline trigger

14.  Click Add Stage, choose Stage Type → Deploy (Manifest), Account → eks-prod, Application → petclinic, Manifest Source → Artifact, Manifest Artifact → Petclinic-Manifest, Artifact Account → spinnaker-github.

Deploy Manifest Stage

Fig 18. Deploy manifest stage

15.  Click Save to save the changes to the DeployToProd pipeline.

Run Spinnaker pipelines manually

Now run the pipelines manually. Click Start Manual Execution, choose Pipeline → DeployToUAT, Type → Tag, and enter a valid tag number for Tag. Click Run and watch the pipeline execution.

Pipeline Execution

Fig 19. Pipeline execution

Modify code and push the code change using AWS Cloud9

Let us push a code change using AWS Cloud9 and watch the execution of the end-to-end Continuous Integration and Continuous Deployment pipelines in Jenkins and Spinnaker. Open AWS Cloud9, change welcome to Welcome CI/CD in the source file as shown in the figure below, and save the file.

Push a Code Change

Fig 20. Push a code change

Open a shell terminal in AWS Cloud9 and run the following commands:

cd environment/amazon-eks-jenkins-terraform
git status
git commit -am "change"
git push

This will push the code change to the GitHub repository, which will in turn trigger the Jenkins pipeline. The Jenkins pipeline will run the individual stages and push the Docker image to the Docker Hub registry. The creation of the new Docker image will trigger the Spinnaker DeployToUAT pipeline, which will in turn trigger the ManualApproval pipeline as shown below. At this point the new code change has been delivered to the Amazon EKS UAT cluster: that's Continuous Delivery.

Spinnaker pipeline

Fig 21. Spinnaker pipeline

Choose Approve as the Judgement Input and click Continue to approve the code change, which triggers the DeployToProd Spinnaker pipeline. The new code change is then deployed to the Amazon EKS production cluster: that's Continuous Deployment.

Open the load balancer endpoint of the Amazon EKS Production cluster and you will see the new code change:

Application code change

Fig 22. Application code change


Cleanup

To remove the Jenkins instance, run the following commands inside the AWS Cloud9 IDE:

cd environment/amazon-eks-jenkins-terraform/terraform
terraform destroy -auto-approve

terraform destroy

Fig 23. Terraform destroy


Conclusion

In this post, we have outlined the detailed instructions needed to configure a Continuous Integration platform using Terraform and Jenkins on Amazon EKS. Jenkins can integrate with Spinnaker to architect complete CI/CD pipelines. Setting up Jenkins as a Continuous Integration (CI) system within Spinnaker lets you trigger pipelines with Jenkins, add a Jenkins stage to your pipeline, or add a script stage to your pipeline. To learn more about Terraform, see the Terraform documentation.

Check out the companion blog Continuous Delivery using Spinnaker on Amazon EKS.

If you have questions or suggestions, please join the Spinnaker Slack and tag Irshad (AWS) or contact our friends at Armory.

Meghan Liese is Director of Product Marketing for Terraform at HashiCorp based out of San Francisco, CA.

Irshad A Buchh is a Partner Solutions Architect at Amazon Web Services who helps partners and customers architect enterprise applications in the AWS cloud. For about 25 years, he has specialized in distributed systems and service-oriented architecture (SOA), starting with operating systems and virtualization technologies. Lately he has been advising AWS Public Sector partners on their transition to the AWS cloud with a focus on containers, microservices, DevOps, and security. He has been a regular speaker at conferences including AWS re:Invent, OpenWorld, KubeCon, and IEEE Metrocon, and he is passionate about serverless, DevOps, Amazon ECS, Amazon EKS, and AWS Fargate.
