Deploying OpenCost with a functional UI

Oct 14, 2022 by Stephen Atwell

Introduction

On the OpenCost Slack, I keep seeing questions from users struggling to get the OpenCost UI to run in-cluster. While we wait for a Docker image that bundles the UI to become available, I want to share the configuration I use to run the UI within a Kubernetes cluster.

My configuration is located in this git repository. This blog documents the configuration, how to use it, and why it works the way it does.

Provided Functionality

This repository deploys:

  1. A Prometheus stack that includes the OpenCost scrape configuration
  2. The OpenCost cost model
  3. The OpenCost UI

The example configuration uses Kustomize both to inject a UI container and to simplify changing configuration variables. Any configuration change to OpenCost’s allocation engine or UI that is made via an environment variable can be specified in the kustomization.yml file.
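For example, a minimal sketch of such an override in kustomization.yml might look like this; the patch target and endpoint value are illustrative assumptions, not copied from the repository.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

patches:
  - target:
      kind: Deployment
      name: opencost        # assumption: the cost model's Deployment name
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: PROMETHEUS_SERVER_ENDPOINT
          value: http://prometheus-server.opencost.svc.cluster.local:80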

This repository can be used standalone and applied via kubectl, or it can be deployed with Armory CD-as-a-Service. When using CD-as-a-Service, Prometheus queries run during the upgrade to validate that the OpenCost Prometheus queries function, which decreases the risk of upgrading to the latest version.

Easily validate that OpenCost and Prometheus are connected and working

There is currently no Docker image that contains the UI. The provided configuration deploys the UI using a Docker image that contains git and npm, configured to check out the OpenCost repo and start the UI automatically. This is suboptimal because it grabs the latest UI from git every time the pod starts. Once a Docker image containing the UI is available, I recommend switching to it so that the pod is immutably defined.
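As an illustration of that pattern, a Deployment for the UI might look roughly like this; the image and start command are assumptions, so check the repository’s manifest for the real definition.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: opencost-ui
  namespace: opencost
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opencost-ui
  template:
    metadata:
      labels:
        app: opencost-ui
    spec:
      containers:
        - name: ui
          # Assumption: any image with both git and npm works; node's official image has both.
          image: node:16
          command: ["sh", "-c"]
          # Assumption: the UI's npm start script may differ; check the OpenCost repo.
          args:
            - >-
              git clone https://github.com/opencost/opencost.git &&
              cd opencost/ui &&
              npm install &&
              npm start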

Connecting to the OpenCost UI after deploying

To port-forward both the cost model and the UI, run the following commands while connected to the Kubernetes cluster to which you have deployed OpenCost.

kubectl -n=opencost port-forward service/opencost-ui 1234

kubectl -n=opencost port-forward service/opencost 9090:9003

Then open http://localhost:1234 in your browser. Behold: the OpenCost UI!

Exposing the UI via Ingress or a Load Balancer

Currently the OpenCost UI does not support being exposed via an Ingress or a load balancer out of the box, because it is hard-coded to use ‘localhost’ in the URL that the user’s web browser uses to connect to the OpenCost cost model. Do the following to expose the UI:

  1. Either create an Ingress rule or a load balancer for the opencost service. This makes the backend queryable externally (see the sketch after this list).
  2. In the kustomization.yml file, update the BASE_URL configuration to the external URL exposed in step 1.
  3. Either create an Ingress rule or a load balancer for the opencost-ui service. This makes the UI accessible externally.
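As a sketch, the Ingress rule from step 1 might look like the following; the hostname and ingress class are placeholders, and the opencost service’s port 9003 matches the port-forward command above.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opencost
  namespace: opencost
spec:
  ingressClassName: nginx          # placeholder: use your cluster's ingress class
  rules:
    - host: opencost.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opencost
                port:
                  number: 9003     # the cost model's service port

With this in place, the BASE_URL in step 2 would become http://opencost.example.com (or whatever hostname you chose).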

Deploying using kubectl

To deploy using kubectl, clone the repo, and then in its root run:

kustomize build --enable-helm . > manifests.yml

kubectl apply -f manifests.yml

Deploying using Armory CD-as-a-Service

Armory adds automated upgrade testing that ensures Prometheus is ingesting OpenCost cost data and container memory data. It also simplifies deploying OpenCost across large numbers of clusters.

If you are not already using Armory CD-as-a-Service:

  1. Sign up for Armory CD-as-a-Service 
  2. Connect your Kubernetes cluster using the setup wizard.

Now connect the Prometheus deployment that this setup will create:

  1. In the CD-as-a-Service UI, go to Configuration\Integrations
  2. Click ‘New Integration’
  3. Configure it with the name and base URL of the Prometheus service that this configuration deploys, roughly as shown below.
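Assuming the bundled chart’s default in-cluster service name, the integration values look roughly like this; the name is your choice, and the URL is an assumption you should verify against your cluster:

Name: prometheus
Base URL: http://prometheus-server.opencost.svc.cluster.local:80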

CD-as-a-Service can deploy either from its CLI or using its GitHub Action. Instructions for both follow. For either path, start by forking this git repository. After forking, open the deploy.yml file and replace the ‘account’ name in both environments with the name that you specified in step 2 above.

If you are using GitHub Actions

To deploy using GitHub Actions, you must create a client credential and add it to a GitHub secret; a sketch of the resulting deploy step follows the list.

  1. In the Armory UI:
    1. Go to ‘Configuration\Client Credentials’
    2. Click ‘create credential’ and give it a name.
    3. Copy the clientID and secret.
  2. In the GitHub UI, open your forked repository and go to:
    1. Settings
    2. Secrets
    3. Actions
    4. Click ‘New Repository Secret’, name it ‘CDAAS_CLIENT_ID’ and give it the value of the ‘clientID’ that you copied in step 1.
    5. Click ‘New Repository Secret’, name it ‘CDAAS_CLIENT_SECRET’ and give it the value of the secret that you copied in step 1.
  3. In the GitHub UI, enable workflows on your repository.
  4. Make any change to the repository to deploy OpenCost.
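The forked repository’s workflow file is authoritative, but as a sketch of how those secrets are consumed, the deploy step looks roughly like this; the CLI flag names are assumptions to check against the actual workflow.

# Illustrative excerpt of a GitHub Actions deploy step; the fork's workflow is authoritative.
- name: Deploy with Armory CD-as-a-Service
  run: |
    curl -sL go.armory.io/get-cli | bash
    kustomize build --enable-helm . > manifests.yml
    armory deploy start -f deploy.yml \
      --clientId "${{ secrets.CDAAS_CLIENT_ID }}" \
      --clientSecret "${{ secrets.CDAAS_CLIENT_SECRET }}"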

If you are using Armory’s CLI

The CLI can be used to deploy with an interactive login, or it can be called from your CI system using client credentials. Here is how to deploy with an interactive login:

  1. Install the CLI by running curl -sL go.armory.io/get-cli | bash
  2. Run armory login to log in through the UI
  3. Check out your forked git repo, and cd into it
  4. Run ./deploy.sh to deploy. This shell script builds the kustomize files, then uses the CLI to deploy, as sketched below.
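Under the hood, deploy.sh is roughly equivalent to the following two commands; this is a sketch, not the script verbatim:

kustomize build --enable-helm . > manifests.yml

armory deploy start -f deploy.yml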

Adding Additional Clusters

If you want this configuration to deploy to multiple Kubernetes clusters, copy one of the environments, give the copy a new name, and update its account. If one of your clusters is a staging cluster that should run before the others, you can add a dependsOn constraint to the clusters that depend on it.

Example: adding a staging cluster and making production depend on it

targets:
  # Where to deploy code, and in what order. Specifies accounts and namespaces for each application.
  production:
    account: demo-prod-west-cluster
    namespace: opencost
    strategy: opencost
    constraints:
      dependsOn: ["staging"]
  staging: # an example new environment
    account: staging
    namespace: opencost
    strategy: opencost

Now when you deploy, every cluster gets the update and runs the query validation logic.

Using your existing Prometheus

If you want to use your existing Prometheus, make the following changes to your kustomization.yml:

  1. Comment out the helm charts section
  2. Comment out the ‘prometheus-overrides’ line
  3. Update the PROMETHEUS_SERVER_ENDPOINT URL to the URL of your Prometheus server.

You must also ensure that you have added the OpenCost scrape configuration to your Prometheus server.
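For reference, the scrape job documented by OpenCost looks roughly like this; it assumes the cost model is exposed as the opencost service on port 9003 in the opencost namespace, matching the deployment above.

scrape_configs:
  - job_name: opencost
    honor_labels: true
    scrape_interval: 1m
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets: ['opencost.opencost:9003']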

Recap

We’ve covered how to deploy OpenCost to one or many Kubernetes clusters from a common configuration, how to validate its health during deployment, and how to get the UI working. We’ve also provided a Kustomize-based configuration that lets you continue inheriting OpenCost’s default configuration while overriding just the portions you need to change.

