Leveraging Locust.io Performance Tests in your Deployment Pipeline
Aug 18, 2022 by Stephen Atwell
I recently read a blog on how to load test an application using Locust, Helm, and Kustomize. I wanted to add Locust performance testing to my standard deployment pipeline so that I can ensure performance tests pass before any application code is released to production. This blog discusses how I took the Locust configuration from that blog and incorporated it into my existing continuous deployment pipeline by leveraging Armory Continuous Deployment-as-a-Service and GitHub Actions.
I started with an existing application that deployed to 6 environments distributed over 3 Kubernetes clusters, and added a 7th: a new performance testing environment. Within it, I deployed Locust alongside my application using the blog's configuration of the Locust Helm chart.
Merging the Configurations
The Kustomize Configuration
I started with the configuration from the other blog and copied it into my application’s git repository. I made a couple of tweaks from this starting point to fit my needs.
The initial configuration leveraged Kustomize's ability to append a hash suffix to ConfigMap names. The changing suffix ensured Kubernetes would spin up new worker pods whenever the configuration changed. Because I deploy through Armory CD-as-a-Service, which always creates new pods when deploying, I do not need the suffix. I disabled it by adding the following two lines to the Kustomize file:
generatorOptions:
  disableNameSuffixHash: true
I also disabled the Horizontal Pod Autoscaler so that I would only have a single worker pod because my cluster is very small:
helmCharts:
- name: locust
  valuesInline:
    worker:
      hpa:
        enabled: false
After these changes, I have the following Kustomize configuration:
helmCharts:
- name: locust
  version: 0.27.1
  repo: https://charts.deliveryhero.io/
  releaseName: load-test
  valuesInline:
    worker:
      hpa:
        enabled: false
        minReplicas: 1
        maxReplicas: 2
      resources:
        limits:
          cpu: 1300m
          memory: 5G
        requests:
          cpu: 1300m
          memory: 5G
    loadtest:
      name: load-test
      locust_locustfile_configmap: locustfile-cm
      locust_locustfile: mylocustfile.py
      # make sure you have permission to load test the host you configure
      locust_host: http://potato-facts-internal:80
configMapGenerator:
- name: locustfile-cm
  files:
  - mylocustfile.py
# you can delete this 2nd config map if you don't need to mount
# additional files alongside the locustfile
- name: locustfile-lib-cm
  files:
  - lib/example_functions.py
generatorOptions:
  disableNameSuffixHash: true
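The config map above mounts mylocustfile.py, which defines the simulated user behavior. For reference, such a file follows the standard Locust pattern; this is a minimal sketch rather than the file from the original blog, and the request paths are placeholders for your application's endpoints:

```python
# mylocustfile.py -- mounted into the Locust pods via the locustfile-cm config map
from locust import HttpUser, task, between

class ApplicationUser(HttpUser):
    # each simulated user pauses 1-3 seconds between tasks
    wait_time = between(1, 3)

    @task(3)  # weighted: runs three times as often as the task below
    def load_home_page(self):
        self.client.get("/")

    @task(1)
    def load_api(self):
        self.client.get("/api/facts")  # placeholder path, not a real endpoint
```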
The GitHub Actions Workflow Configuration
I trigger my deployment pipeline from GitHub Actions. I updated my workflow to generate manifests from the Kustomize file before starting the deployment. Manifest generation tools like Kustomize and Helm can either apply the manifests they generate directly to a cluster or write them to a file. My workflow builds the Kustomize file and writes the generated manifests to a temporary file, which a later step passes to CD-as-a-Service for deployment as part of my larger pipeline.
- name: installHelm
  uses: yokawasa/[email protected]
  with:
    helm: '3.9.2'
    kustomize: '4.5.7'
- name: buildLocust
  run: kustomize build --enable-helm thirdParty/locust > thirdParty/locust/generatedManifest.yml
The CD-as-a-Service Configuration
I then updated my existing CD-as-a-Service deployment configuration to add a new performance testing environment called perf. This environment adds a constraint requiring manual approval: downstream environments will not deploy until someone approves, and I can cancel the deployment instead if the tests fail.
targets:
  perf:
    account: demo-staging-cluster
    namespace: borealis-perftest
    # just deploy with a rolling update, it's staging.
    strategy: rolling
    constraints:
      dependsOn: ["dev"]
      afterDeployment:
      - pause:
          untilApproved: true
I then added the manifest generated by Kustomize, and specified that it should only be deployed to the new environment.
manifests:
- path: thirdParty/locust/generatedManifest.yml
  targets: [perf]
Finally, I updated my production environments to only deploy if the performance testing is successful.
targets:
  prod-eu:
    constraints:
      dependsOn: ["staging", "perf", "infosec"]
  prod-east:
    constraints:
      dependsOn: ["staging", "perf", "infosec"]
  prod-west:
    constraints:
      dependsOn: ["staging", "perf", "infosec"]
Running my Configuration
Now when I commit, my development environment deploys. After it finishes, my 3 staging environments deploy: my integration tests and security scanners are triggered automatically via webhook, and my performance testing environment redeploys Locust. I am not yet starting the Locust test automatically or checking its results; the deployment waits for me to run the performance test and either approve the results or cancel the deployment to production. A manual approval in an otherwise automated pipeline is an easy way to get a new test running before every code deploy, even before the test is fully automated. This intermediate step already ensures that code reaches the production environments only if the performance tests pass.
This performance test environment offers two additional opportunities: full automation, and analysis of application metrics outside of Locust.
For full automation, Locust supports a headless mode and has an API. Headless mode lets Locust start a performance test automatically when deployed. I plan to enable this, and then write an automated check that uses Locust's API to determine whether the test passed. Once I do, CD-as-a-Service can trigger the test via webhook and automatically approve or cancel the deployment based on the performance test results.
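As a sketch of the check I have in mind: Locust's web UI serves its current statistics as JSON from the /stats/requests endpoint. The snippet below applies illustrative thresholds to that snapshot; the service URL, threshold values, and exact field names are assumptions to verify against your Locust version:

```python
import json
from urllib.request import urlopen

# Illustrative thresholds -- tune these for your application.
MAX_FAIL_RATIO = 0.01   # fail if more than 1% of requests failed
MAX_AVG_MS = 500.0      # fail if any endpoint's average response time exceeds this

def fetch_stats(base_url="http://localhost:8089"):
    """Fetch the current stats snapshot from a running Locust master."""
    with urlopen(f"{base_url}/stats/requests") as resp:
        return json.load(resp)

def gate_passed(payload, max_fail_ratio=MAX_FAIL_RATIO, max_avg_ms=MAX_AVG_MS):
    """Decide pass/fail from the /stats/requests JSON payload."""
    if payload.get("fail_ratio", 0.0) > max_fail_ratio:
        return False
    # every per-endpoint entry must stay within the latency budget
    return all(entry.get("avg_response_time", 0.0) <= max_avg_ms
               for entry in payload.get("stats", []))
```

With the port-forward from the end of this post in place, `gate_passed(fetch_stats())` gives the approve-or-cancel signal; wired into a webhook-triggered job, it could report that verdict back to CD-as-a-Service.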
CD-as-a-Service also supports analysis of application metrics via Datadog, Prometheus, and New Relic. I can use this to repeatedly compare a metric query against a threshold and automatically roll back if the threshold is exceeded. My application already leverages this to ensure my metrics remain healthy while deploying to my production environments. Once I automate the performance tests, I will also have CD-as-a-Service check these metrics in the performance testing environment while the tests run. Then, if the performance tests push any application metric past a threshold that would trigger a production rollback, the build never deploys to production in the first place.
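That analysis configuration could look something like the following sketch. The provider name, metric query, and limits are placeholders, and the schema should be checked against Armory's current docs:

```yaml
# sketch only: provider name, query, and limits are placeholders
analysis:
  defaultMetricProviderName: my-prometheus
  queries:
  - name: avgErrorRate
    upperLimit: 5        # roll back if the query result exceeds this
    queryTemplate: >-
      sum(rate(http_requests_total{status=~"5.."}[1m]))
strategies:
  perfAnalysis:
    canary:
      steps:
      - analysis:
          interval: 10
          units: seconds
          numberOfJudgmentRuns: 3
          queries:
          - avgErrorRate
```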
Trying it for yourself
You can sign up and start deploying with Armory CD-as-a-Service for free. Locust.io is open source and also freely available. Both products have demos on YouTube that give an overview of their capabilities: here's a demo of Locust, and here is a demo of CD-as-a-Service.
Here is a simplified version of my final configuration. It consists of:
This example can run in any cluster. To use it, either give your cluster the name 'demo-staging-cluster' when connecting it to CD-as-a-Service, or update the name in the deploy.yml to match your cluster's name. If you wish to use this example from outside of GitHub Actions, run the 'generateManifest.sh' script from the Locust configuration directory to build the manifest with Kustomize. Then run the Armory CLI and pass it the deploy.yml (for example, armory deploy start -f deploy.yml).
To connect to the Locust UI when your deployment is waiting for the ‘perf’ environment to be approved, run
kubectl port-forward service/load-test-locust 8089:8089 -n=borealis-perftest
Then navigate to localhost:8089 with your web browser.