When we discuss best practices for creating container-based workloads, there are essentially two things to worry about: how the individual container is built, and how the container interacts with all the other containers (as well as components like storage) in the ecosystem it lives in. This is not an exhaustive list—there are whole books dedicated to this topic—but here are four key things to consider as you (and your team) move towards more container-based workloads.
One app per container
Repeat this over and over: one container does one thing. A container is not a virtual machine. This is a relatively basic part of creating a containerized workload, but it’s essential to creating a system that works. Here’s why: a container is designed to have the same lifecycle as the app it contains, starting and stopping with that application. Running multiple applications in one container disrupts this “natural” lifecycle, because the applications may not start and stop at the same time, which makes the workload harder to debug and harder for tools like Kubernetes to manage effectively. If you have a classic Apache/MySQL/PHP stack, you will need three containers, one for each component.
Containers should be as simple and as lightweight as possible, with a single application that has only one responsibility.
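To make this concrete, here is a minimal sketch of that Apache/MySQL/PHP stack split into three single-purpose containers using a docker-compose file. The image tags, port mapping and credentials are illustrative placeholders, not a production configuration:

```yaml
# docker-compose.yml — one responsibility per container (illustrative values)
services:
  web:
    image: httpd:2.4            # Apache does nothing but serve HTTP
    ports:
      - "8080:80"
    depends_on:
      - app
  app:
    image: php:8.2-fpm          # PHP-FPM does nothing but run application code
    depends_on:
      - db
  db:
    image: mysql:8.0            # MySQL does nothing but store data
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder; use a secrets mechanism in practice
    volumes:
      - db-data:/var/lib/mysql       # data outlives the container's lifecycle
volumes:
  db-data:
```

Because each container wraps exactly one process, each one starts and stops with its application, and each can be debugged, scaled and replaced independently.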
Fully embrace automation
You never make mistakes, right? The truth is that the more manual your process, the easier it is for errors to slip in. Whether it is spinning up containers, managing the orchestration, provisioning storage or handling deployment, it’s always best practice to rely on tools that automate the process whenever possible. If you can build and deploy your application without any manual steps, you should. While the most obvious part of this is to use a tool like Docker to create your containers and Kubernetes to manage your container orchestration, neither of those tools is enough to fully automate your container-based development and deployment process.
Part of embracing automation is looking for third-party automation tools: while you should be able to automate everything related to building and deploying containers, you don’t want to spend time and money building those tools yourself.
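As one example of what off-the-shelf automation looks like, here is a sketch of a CI workflow (GitHub Actions syntax) that builds and pushes a container image on every push to main. The repository, registry and image names are placeholders:

```yaml
# .github/workflows/build.yml — illustrative pipeline; org/app names are placeholders
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          # tag every build with the commit SHA so deployments are traceable
          tags: ghcr.io/example-org/example-app:${{ github.sha }}
```

Nobody on the team runs `docker build` by hand; the pipeline produces an identically built, uniquely tagged image every time.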
Avoid cloud lock-in
The reality is that multi-cloud and hybrid-cloud deployments are becoming industry-standard. When you’re thinking about how to avoid cloud lock-in, here are things to consider:
- Do you depend on a managed Kubernetes service (EKS, AKS or GKE) and its provider-specific features?
- Is your data, and the way your containers connect to data, portable?
- Do the third-party automation tools you use facilitate portability?
Ultimately, it’s good practice to choose tooling that facilitates deployment of containerized applications in any environment: on any of the public clouds, on a private cloud or on-premises.
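One practical way to stay portable is to describe workloads with plain Kubernetes objects that apply unchanged on any conformant cluster, and to reach storage through a PersistentVolumeClaim rather than a provider-specific disk API. A minimal sketch, with placeholder names and image:

```yaml
# deployment.yaml — uses only core Kubernetes APIs, no cloud-specific annotations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: example-app-data   # the PVC abstracts away each cloud's storage backend
```

The same manifest can be applied to EKS, AKS, GKE, a private cloud or an on-premises cluster; only the cluster-side StorageClass behind the claim changes.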
Continuously test, integrate and deliver
Containers themselves are simple—but containerized workloads, made up of hundreds of containers, each with its own set of dependencies and lifecycles, are complex. Assuming you’re working on containerized projects as part of a team, it’s essential to understand how your container is going to interact with your colleagues’ containers. Building container-based applications requires continually testing and integrating your containers into the larger system, and doing so in an environment that is as similar to the production environment as possible.
This requires a fast, safe and repeatable deployment process. Good news—Armory can help you with that.
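In practice, that continuous integration step can be as simple as having CI bring up the whole stack and test the containers together before anything ships. A hypothetical GitHub Actions job, assuming the compose file above exists and `./run-tests.sh` is a placeholder test entrypoint:

```yaml
# hypothetical CI job: exercise the containers together, not in isolation
integration-test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Start the full stack
      run: docker compose up --detach --wait   # waits for containers to report healthy
    - name: Run integration tests against the running containers
      run: docker compose exec app ./run-tests.sh   # placeholder test script
    - name: Tear down
      if: always()
      run: docker compose down --volumes
```

Running the suite against the composed stack, rather than against mocked neighbors, catches the cross-container problems that only appear when the pieces are wired together.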
In conclusion, the key to success with containers is to keep each container lightweight and focused on a single responsibility, automate everything with third-party tools, avoid lock-in and deploy continuously.