Best Practices for Containerizing Workloads

Feb 15, 2019 by Armory

When we discuss best practices for creating container-based workloads, there are essentially two things to worry about: how the individual container is built, and how it interacts with all the other containers (as well as components like storage) in the ecosystem it lives in. This is not an exhaustive list—there are whole books dedicated to this topic—but here are four key things to consider as you (and your team) move towards more container-based workloads.

One app per container

Repeat this over and over: one container does one thing. A container is not a virtual machine. This is a relatively basic part of creating a containerized workload, but it's essential to creating a system that works. Here's why: a container is designed to have the same lifecycle as the app it contains, starting and stopping with that application. This "natural" container lifecycle is disrupted if there are multiple applications in the container, which may or may not start and stop at the same time. That makes the container harder to debug and harder for tools like Kubernetes to manage effectively. If you have a classic Apache/MySQL/PHP stack, you will need three containers, one for each component.

Containers should be as simple and as lightweight as possible, with a single application that has only one responsibility.
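As a rough sketch of that split, a Compose file for the Apache/MySQL/PHP example above might define three separate services. The image tags, ports, volume paths, and credentials below are illustrative placeholders, not recommendations.

```yaml
# docker-compose.yml — a sketch of splitting the Apache/MySQL/PHP stack
# into three single-purpose containers. All values are placeholders.
version: "3.8"
services:
  web:
    image: httpd:2.4          # Apache only (proxy config to PHP-FPM omitted for brevity)
    ports:
      - "8080:80"
    depends_on:
      - php
  php:
    image: php:8.2-fpm        # PHP only — runs the application code
    volumes:
      - ./src:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:8.0          # MySQL only — owns the data, nothing else
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```

With this layout, each service starts, stops, scales, and gets debugged independently, which is exactly the lifecycle behavior described above.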

Fully embrace automation

You never make mistakes, right? The truth is that the more manual your process, the easier it is for errors to slip in. Whether it is spinning up containers, managing orchestration, provisioning storage, or handling deployment, it's always best practice to rely on tools that automate the process whenever possible. If you can build and deploy your application without writing custom tooling of your own, you should. The most obvious part of this is using a tool like Docker to build your containers and Kubernetes to manage your container orchestration, but neither of those tools on its own is enough to fully automate your container-based development and deployment process.

Part of embracing automation is looking for third-party automation tools. While you should be able to automate everything related to building and deploying containers, you don't want to spend time and money building those tools yourself.
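As one small illustration of what handing work to the orchestrator looks like, a minimal Kubernetes Deployment declares the desired state and lets Kubernetes do the rest. The image name and port below are placeholders.

```yaml
# deployment.yaml — a minimal sketch of declarative automation with Kubernetes.
# The image reference "registry.example.com/my-app:1.0.0" is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                      # Kubernetes keeps three copies running without manual restarts
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          ports:
            - containerPort: 8080
```

With a manifest like this checked into version control, the cluster converges on the declared state on its own; nobody has to spin containers up or restart them by hand.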

Avoid cloud lock-in

The reality is that multi-cloud and hybrid-cloud deployments are becoming industry standard, so the portability of your containerized workloads should factor into every tooling decision.

Ultimately, it's a good practice to choose tooling that can deploy containerized applications in any environment: on any of the public clouds, on a private cloud, or on-premises.

Continuously test, integrate and deliver

Containers themselves are simple, but containerized workloads, made up of hundreds of containers, each with its own dependencies and lifecycle, are complex. Assuming you're working on containerized projects as part of a team, it's essential to understand how your container is going to interact with your colleagues' containers. Building container-based applications requires continually testing and integrating your containers into the larger system, and doing so in an environment that is as similar to the production environment as possible.

This requires a fast, safe and repeatable deployment process. Good news—Armory can help you with that.
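As a sketch of what a repeatable integration step can look like, here is a minimal pipeline that builds and tests the container image on every push. GitHub Actions is used purely as an example; the registry address, secret name, and test script are assumptions.

```yaml
# .github/workflows/ci.yml — a sketch only; the registry, secret, and
# ./run-tests.sh script are placeholders, not a prescribed pipeline.
name: build-and-test
on: [push]
jobs:
  container:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the image
        run: docker build -t my-app:${{ github.sha }} .
      - name: Run the test suite inside the container
        run: docker run --rm my-app:${{ github.sha }} ./run-tests.sh
      - name: Push to the registry
        if: github.ref == 'refs/heads/main'
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com -u ci --password-stdin
          docker tag my-app:${{ github.sha }} registry.example.com/my-app:${{ github.sha }}
          docker push registry.example.com/my-app:${{ github.sha }}
```

The point is not the specific CI system; it's that the same image that passed the tests is the one promoted toward production.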

In conclusion, the key to success with containers is to keep each container lightweight and focused on a single action, automate everything with third-party tools, avoid lock-in and deploy continuously.
