Deploying OpenCost with a functional UI
Oct 14, 2022 by Stephen Atwell
Introduction
On the OpenCost Slack, I keep seeing questions from users struggling to get the OpenCost UI to run in-cluster. While we wait for a Docker image that bundles the UI to become available, I want to share the configuration I use to run the UI within a Kubernetes cluster.
My configuration is located in this git repository. This blog documents the configuration, how to use it, and why it works the way it does.
Provided Functionality
This repository deploys:
- A Prometheus stack that includes the OpenCost scrape configuration
- The OpenCost cost model
- The OpenCost UI
The example configuration uses Kustomize both to inject a UI container and to simplify changing configuration variables. Any setting of OpenCost’s allocation engine or UI that is controlled by an environment variable can be specified in the kustomization.yml file, as in the sketch below.
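Here is a minimal sketch of what such an override might look like in kustomization.yml (the deployment name and the LOG_LEVEL variable are illustrative assumptions, not the repository’s exact contents):
patches:
  - target:
      kind: Deployment
      name: opencost
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: LOG_LEVEL # any OpenCost environment variable can be set this way
          value: debug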
This repository can be used standalone and applied via kubectl, or it can be deployed with Armory CD-as-a-Service. When using CD-as-a-Service, the deployment runs Prometheus queries during the upgrade to validate that the OpenCost Prometheus queries function. The intention of this validation is to decrease the risk of upgrading to the latest version.
There is currently no Docker image that contains the UI. The provided configuration works around this by using a Docker image that contains git and npm, configured to check out the OpenCost repo and start the UI automatically. This is suboptimal because it grabs the latest UI from git every time the pod starts. Once a Docker image containing the UI is available, I recommend switching to it so that the pod is immutably defined.
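To make this concrete, the UI container looks roughly like the following sketch (the image, npm script, and paths are assumptions; see the repository for the exact manifest):
containers:
  - name: opencost-ui
    image: node:18 # any image that contains both git and npm works
    command: ["/bin/sh", "-c"]
    args:
      - |
        git clone https://github.com/opencost/opencost.git /opencost &&
        cd /opencost/ui &&
        npm install &&
        npm run serve # the exact npm script is an assumption; check ui/package.json
    ports:
      - containerPort: 1234 # the UI's dev server port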
Connecting to the OpenCost UI after deploying
To port forward both the cost model and the UI, run the following commands while connected to the Kubernetes cluster to which you have deployed OpenCost. The UI will then be reachable at http://localhost:1234.
kubectl -n=opencost port-forward service/opencost-ui 1234
kubectl -n=opencost port-forward service/opencost 9090:9003
Exposing the UI via Ingress or a Load Balancer
Currently the OpenCost UI does not support being exposed via an Ingress or a load balancer out of the box. This is because the URL that the user’s web browser uses to connect to the OpenCost cost model is hardcoded to ‘localhost’. Do the following to expose the UI:
- Either create an Ingress rule or a load balancer for the opencost service. This makes the backend queryable externally.
- In the kustomization.yml file, update the BASE_URL configuration to the external URL exposed in step 1.
- Either create an Ingress rule or a load balancer for the opencost-ui service. This makes the UI accessible externally (an Ingress sketch follows this list).
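Here is an illustrative sketch of the Ingress variant (the hostnames are assumptions; adapt the rules to your ingress controller):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opencost
  namespace: opencost
spec:
  rules:
    - host: opencost-api.example.com # the backend; use this host in BASE_URL
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opencost
                port:
                  number: 9003
    - host: opencost.example.com # the UI itself
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opencost-ui
                port:
                  number: 1234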
Deploying using Kubectl
To deploy using kubectl, clone the repo, and then in its root run:
kustomize build --enable-helm . > manifests.yml
kubectl apply -f manifests.yml
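You can then verify that the pods come up before port forwarding:
kubectl -n opencost get pods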
Deploying using Armory CD-as-a-Service
Armory adds automated upgrade testing that ensures Prometheus is ingesting both OpenCost cost data and container memory data. It also simplifies deploying OpenCost across large numbers of clusters.
If you are not already using Armory CD-as-a-Service:
- Sign up for Armory CD-as-a-Service
- Connect your Kubernetes cluster using the setup wizard.
Now connect the Prometheus deployment that this setup will create:
- In the CD-as-a-Service UI go to Configuration\Integrations
- Click ‘new integration’
- Configure it with:
- Type = Prometheus
- Name = opencost-Prometheus
- Base URL = http://my-prometheus-server.opencost.svc
- Remote Network Agent = (the one you specified in step 2)
CD-as-a-Service can deploy either from its CLI or using its GitHub Action. Instructions for both follow. For either path, start by forking this git repository. After forking, open the deploy.yml file and replace the ‘account’ name on both environments with the name that you specified in step 2 above.
If you are using GitHub Actions
To deploy using GitHub Actions you must create a client credential and add it to a GitHub secret.
- In the Armory UI:
- Go to ‘Configuration\Client Credentials’
- Click ‘create credential’ and give it a name.
- Copy the clientID and Secret
- In the GitHub UI, open your forked repository and go to:
- Settings
- Secrets
- Actions
- Click ‘New Repository Secret’, name it ‘CDAAS_CLIENT_ID’ and give it the value of the ‘clientID’ that you copied in step 1.
- Click ‘New Repository Secret’, name it ‘CDAAS_CLIENT_SECRET’ and give it the value of the secret that you copied in step 1.
- In the GitHub UI, enable workflows on your repository.
- Make any change to the repository to trigger a deployment of OpenCost.
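For reference, a workflow that consumes these secrets looks roughly like this sketch (the step layout and CLI flags are assumptions; consult .github/workflows in your fork for the real file):
name: deploy-opencost
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install the Armory CLI
        run: curl -sL go.armory.io/get-cli | bash
      - name: Build manifests and deploy
        env:
          CDAAS_CLIENT_ID: ${{ secrets.CDAAS_CLIENT_ID }}
          CDAAS_CLIENT_SECRET: ${{ secrets.CDAAS_CLIENT_SECRET }}
        run: |
          # assumes kustomize is available on the runner
          kustomize build --enable-helm . > manifests.yml
          armory deploy start --clientId "$CDAAS_CLIENT_ID" --clientSecret "$CDAAS_CLIENT_SECRET" -f deploy.yml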
If you are using Armory’s CLI
The CLI can be used to deploy with an interactive login, or called from your CI system using client credentials. Here is how to deploy with an interactive login:
- Install the CLI by running:
curl -sL go.armory.io/get-cli | bash
- Run the following, then log in through the UI:
armory login
- Check out your forked git repo and cd into it.
- Run the following to deploy:
./deploy.sh
This shell script builds the kustomize files, then uses the CLI to deploy.
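deploy.sh is essentially a two-step script; this sketch approximates it (the exact contents may differ in the repository):
#!/bin/bash
set -e
# Build the full manifest set, resolving the helm chart references.
kustomize build --enable-helm . > manifests.yml
# Hand the deployment definition to CD-as-a-Service.
armory deploy start -f deploy.yml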
Adding Additional Clusters
If you want this configuration to deploy to multiple Kubernetes clusters, copy one of the environments, give the copy a new name, and update its account. If one of your clusters is a staging cluster that should deploy before the others, you can add a dependsOn constraint to the clusters that depend on it.
Example: adding a staging cluster and making production depend on it
targets:
  # where to deploy code, and in what order; specifies accounts and namespaces for each application
  production:
    account: demo-prod-west-cluster
    namespace: opencost
    strategy: opencost
    constraints:
      dependsOn: ["staging"]
  staging: # an example new environment
    account: staging
    namespace: opencost
    strategy: opencost
Now when you deploy, every cluster will receive the update and run the query validation logic.
Using your existing Prometheus
If you want to use your existing Prometheus, make the following changes to your kustomization.yml:
- Comment out the helm charts section.
- Comment out the ‘prometheus-overrides’ line.
- Update the PROMETHEUS_SERVER_ENDPOINT URL to the URL of your Prometheus server.
You must also ensure you have added the OpenCost scrape configuration to your Prometheus server; the documented scrape job is shown below.
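For reference, the scrape job that OpenCost documents looks like this (adjust the DNS name if OpenCost runs in a different namespace):
- job_name: opencost
  honor_labels: true
  scrape_interval: 1m
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  dns_sd_configs:
    - names:
        - opencost.opencost
      type: 'A'
      port: 9003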
Recap
We’ve covered how to deploy OpenCost to one or many Kubernetes clusters from a common configuration, how to validate its health during deployment, and how to get the UI working. We’ve also provided a Kustomize-based configuration that lets you continue inheriting OpenCost’s default configuration while overriding just the portions you need to change.