Leveraging Locust.io Performance Tests in your Deployment Pipeline

Aug 18, 2022 by Stephen Atwell

I recently read a blog post on how to load test an application using Locust, Helm, and Kustomize. I wanted to add Locust performance testing to my standard deployment pipeline so that performance tests must pass before any application code is released to production. This post discusses how I took the Locust configuration from that blog and incorporated it into my existing continuous deployment pipeline using Armory Continuous Deployment-as-a-Service and GitHub Actions.

I started with an existing application that deploys to 6 environments distributed over 3 Kubernetes clusters, and added a 7th: a new performance testing environment. Within the performance testing environment, I deployed Locust alongside my application using the blog's configuration of the Locust Helm chart.

Merging the Configurations

The Kustomize Configuration

I started with the configuration from the other blog and copied it into my application’s git repository. I made a couple of tweaks from this starting point to fit my needs.

The initial configuration leveraged Kustomize's ability to add a hash suffix to generated config maps. The suffix ensures Kubernetes spins up new worker pods on configuration changes. Because Armory CD-as-a-Service always creates new pods when deploying, I do not need the suffix. I disabled it by adding the following two lines to the Kustomize file:

   generatorOptions:
     disableNameSuffixHash: true

I also disabled the Horizontal Pod Autoscaler so that I would only have a single worker pod, since my cluster is very small:

- name: locust
  valuesInline:
    worker:
      hpa:
        enabled: false

After these changes, I have the following Kustomize configuration:

helmCharts:
- name: locust
  version: 0.27.1
  repo: https://charts.deliveryhero.io/
  releaseName: load-test
  valuesInline:
    worker:
      hpa:
        enabled: false
        minReplicas: 1
        maxReplicas: 2
      resources:
        requests:
          cpu: 1300m
          memory: 5G
        limits:
          cpu: 1300m
          memory: 5G
    loadtest:
      name: load-test
      locust_locustfile_configmap: locustfile-cm
      locust_locustfile: mylocustfile.py
      # make sure you have permission to load test the host you configure
      locust_host: http://potato-facts-internal:80
configMapGenerator:
- name: locustfile-cm
  files:
  - mylocustfile.py
# you can delete this 2nd config map if you don't need to mount
# additional files alongside the locustfile
- name: locustfile-lib-cm
  files:
  - lib/example_functions.py
generatorOptions:
  disableNameSuffixHash: true

The Github Actions Workflow Configuration

I trigger my deployment pipeline from GitHub Actions. I updated my GitHub Actions workflow to generate manifests from the Kustomize file before starting the deployment. Manifest generation tools like Kustomize and Helm can either deploy the manifests directly or write them to a file. My workflow builds the Kustomize configuration and writes the generated manifests to a temporary file; a later step passes this file to CD-as-a-Service for deployment as part of my larger pipeline.

- name: installHelm
  uses: yokawasa/action-setup-kube-tools@v0.9.3
  with:
    helm: '3.9.2'
    kustomize: '4.5.7'
- name: buildLocust
  run: kustomize build --enable-helm thirdParty/locust > thirdParty/locust/generatedManifest.yml

The CD-as-a-Service Configuration

I then updated my existing CD-as-a-Service deployment configuration to add a new performance testing environment called perf. This environment adds a constraint requiring manual approval: downstream environments will not deploy until someone approves it. If the tests fail, I cancel the deployment instead of approving.

  perf:
    account: demo-staging-cluster
    namespace: borealis-perftest
    # just deploy with a rolling update, it's staging.
    strategy: rolling
    constraints:
      dependsOn: ["dev"]
      beforeDeployment:
      - pause:
          untilApproved: true

I then added the manifest generated by Kustomize and specified that it should only be deployed to the new environment.

- path: thirdParty/locust/generatedManifest.yml
  targets: [perf]

Finally, I updated my production environments to only deploy if the performance testing is successful.

      dependsOn: ["staging", "perf", "infosec"]
      dependsOn: ["staging", "perf", "infosec"]
      dependsOn: ["staging", "perf", "infosec"]

Running my Configuration

Now when I commit, my development environment deploys. After it finishes, my 3 staging environments deploy: my integration tests and security scanners are triggered automatically via webhook, and my performance testing environment redeploys Locust. I am not yet starting the Locust test automatically, nor checking its results; the deployment waits for me to run the performance test and either approve the results or cancel the deployment to production. I find that a manual approval in an otherwise automated pipeline is an easy way to get a new test running before every code deploy, even before the test is fully automated. This intermediate step already ensures that code only reaches the production environments if the performance tests pass.

My integration tests and security scanners passed; my performance tests are waiting for a manual review.

Next Steps

This performance test environment offers two additional opportunities: full automation, and analysis of application metrics outside of Locust.

For full automation, Locust supports a headless mode and has an API. Headless mode allows Locust to start a performance test automatically when deployed. I plan to enable this, and then write an automated test that uses Locust's API to determine whether the test passed. Once I do that, CD-as-a-Service can trigger my test via webhook and automatically approve or cancel the deployment based on the performance test results.
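As a sketch of what that automated check could look like: the script below reads the JSON that the Locust master's web API serves at `/stats/requests` and passes or fails on the overall failure ratio. The URL and threshold are assumptions of mine, not values from my pipeline.

```python
import json
import urllib.request

FAIL_RATIO_THRESHOLD = 0.01  # example threshold: fail if >1% of requests errored


def performance_test_passed(stats: dict, max_fail_ratio: float = FAIL_RATIO_THRESHOLD) -> bool:
    """Decide pass/fail from the JSON returned by Locust's /stats/requests endpoint."""
    # fail_ratio is the fraction of all requests that failed during the run;
    # treat a missing field as a failure rather than a pass
    return stats.get("fail_ratio", 1.0) <= max_fail_ratio


if __name__ == "__main__":
    # query the Locust master's web API (reachable e.g. via kubectl port-forward)
    with urllib.request.urlopen("http://localhost:8089/stats/requests") as resp:
        stats = json.load(resp)
    print("PASS" if performance_test_passed(stats) else "FAIL")
```

A webhook-triggered job could run this script and report the result back to CD-as-a-Service to approve or cancel the deployment.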

CD-as-a-Service also supports analysis of application metrics via Datadog, Prometheus, and New Relic. I can use this to repeatedly compare a metric query to a threshold and automatically roll back if the threshold is exceeded. My application already leverages this to ensure my metrics remain healthy while deploying to my production environment. Once I automate the performance tests, I will also have CD-as-a-Service check these metrics in my performance testing environment while the tests are running. Then, if the performance tests cause any application metric to exceed a threshold that would trigger a production rollback, we never deploy to production in the first place.
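As a rough sketch of the shape such a metric guard takes in the deploy configuration: the provider name, query, and thresholds below are placeholders of my own, not the configuration from my pipeline, so check the CD-as-a-Service docs for the exact schema before using it.

```yaml
analysis:
  defaultMetricProviderName: my-prometheus   # placeholder provider name
  queries:
  - name: avgCpuUsage
    upperLimit: 0.8    # placeholder threshold; exceeding it triggers rollback
    lowerLimit: 0
    queryTemplate: >-
      avg(rate(container_cpu_usage_seconds_total{namespace="borealis-perftest"}[2m]))
```

A deployment strategy can then reference a query like this and roll back automatically when its result falls outside the limits.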

Trying it for yourself

You can sign up and start deploying with Armory CD-as-a-Service for free. Locust.io is open source and also freely available. Both products have demos available on YouTube that you can watch to get an overview of their capabilities: here's a demo of Locust, and here is a demo of CD-as-a-Service.

Here is a simplified version of my final configuration. It comprises the Locust Kustomize configuration, the GitHub Actions workflow, and the CD-as-a-Service deploy.yml.

This example can run in any cluster. To use it, either give your cluster the name 'demo-staging-cluster' when connecting it to CD-as-a-Service, or update the name in the deploy.yml to match your cluster's name. If you wish to use this example from outside of GitHub Actions, run the 'generateManifest.sh' script from the Locust configuration directory to build the manifest with Kustomize, then run the Armory CLI and pass it the deploy.yml.

To connect to the Locust UI when your deployment is waiting for the ‘perf’ environment to be approved, run 

kubectl port-forward service/load-test-locust 8089:8089 -n=borealis-perftest

Then navigate to localhost:8089 with your web browser.
