An In-Depth Guide to Blue/Green Deployments with Database Alterations

Dec 20, 2022 by Chris Kotula

Introduction

With the advent of tools like Kubernetes and Spinnaker, managing complex deployments becomes easier to understand and execute successfully. The Blue/Green strategy is a prominent example of a deployment pattern that has been gaining traction in recent years, and it’s no wonder that it’s been chosen as the default approach in many projects and companies all over the world.

Blue/Green deployments offer advantages such as instantaneous rollbacks in case of problems and the ability to test changes in a production environment before releasing them to the public. These advantages are tempting for many people – from customers, who want to use a reliable and secure platform with no bugs, to investors, who count on a return on their investment. Between customers and investors there are developers and DevOps teams, who want to focus on delivering new features in a secure and efficient way.

The Blue/Green deployment strategy solves a lot of issues for developers; however, it asks for a couple of things in return. Changes in applications or in databases need to be backwards compatible between two versions of the same application. To accomplish this, developers need to implement breaking changes in an incremental, non-breaking way. There are no shortcuts here – it’s too easy to go astray.

I’d like to invite you on an exciting journey with me. In this article we will deploy an application together, track down problems, discover room for improvement, and refactor our setup in a backwards compatible way.

Assumptions

We have a simple application that writes data to storage and reads it back. As we proceed with development, the application’s code will change at each stage. We will start off with a very simple structure and improve it gradually. Some changes will be breaking, and it’s up to us to make sure we can deploy new versions without breaking the previous version running in a different environment. Each new version of the code and database needs to be compatible with the previous one. This approach requires a lot of attention and planning.

The code snippets provided below don’t use any frameworks or libraries. That’s on purpose. My goal here was to produce step-by-step documentation that is framework and tool agnostic. I didn’t want to abstract away too many parts of the process with tool-specific solutions.

Example

In today’s example we will implement a customer registration process in our application. On first use, the customer needs to provide basic information like name and address. The application needs to persist those details and should be able to retrieve them from the database for later processing or viewing by an administrator.



Throughout this example we are going to use a relational database, and all the database scripts will be written in SQL. Obviously there are a lot of alternatives to both relational storage and SQL migration scripts, but we will focus on the most straightforward approach.

You can find a working example of an application written with Spring Boot and Postgres here: https://github.com/kkotula/bgdb

Step 1: Database table and designated entity class

As explained above, we need to store customer details in a database table. We will start with something very simple – even too simple – and we will refactor it later. After refactoring we will deploy the changes using the blue/green strategy.

The table looks like this:

CREATE TABLE customers
(
   id           UUID PRIMARY KEY NOT NULL,
   first_name   VARCHAR(50) NOT NULL,
   last_name    VARCHAR(50) NOT NULL,
   street       VARCHAR(50) NOT NULL,
   city         VARCHAR(50) NOT NULL,
   postal_code  VARCHAR(8)  NOT NULL,
   building_no  VARCHAR(5)  NOT NULL,
   apartment_no VARCHAR(50)
);

And the corresponding entity class that matches the database table structure:

public class Customer {
   @Id
   private UUID id;
   private String firstName;
   private String lastName;
   private String street;
   private String city;
   private String postalCode;
   private String buildingNo;
   private String apartmentNo;
}

We need one more class – a simple DTO that’s sent by the UI application. Its structure is the same as the structure of the entity.

public class CustomerDTO {
   private UUID id;
   private String firstName;
   private String lastName;
   private String street;
   private String city;
   private String postalCode;
   private String buildingNo;
   private String apartmentNo;
}

At this point we also need a very simple service that instantiates customer entities and persists them in the database.

public void save(CustomerDTO dto){
   var customer = new Customer();
   customer.setId(UUID.randomUUID());
   customer.setFirstName(dto.getFirstName());
   customer.setLastName(dto.getLastName());
   customer.setCity(dto.getCity());
   customer.setPostalCode(dto.getPostalCode());
   customer.setStreet(dto.getStreet());
   customer.setBuildingNo(dto.getBuildingNo());
   customer.setApartmentNo(dto.getApartmentNo());
   customerDatabaseRepository.save(customer);
}

A method to read values would look like this:

public CustomerDTO getCustomerById(UUID customerId){
   var customer = customerDatabaseRepository.getById(customerId);
   var customerDTO = new CustomerDTO();
   customerDTO.setId(customer.getId());
   customerDTO.setFirstName(customer.getFirstName());
   customerDTO.setLastName(customer.getLastName());
   customerDTO.setCity(customer.getCity());
   customerDTO.setPostalCode(customer.getPostalCode());
   customerDTO.setStreet(customer.getStreet());
   customerDTO.setBuildingNo(customer.getBuildingNo());
   customerDTO.setApartmentNo(customer.getApartmentNo());
   return customerDTO;
}

The code above fulfills business requirements as it stores a customer’s details, but from a software engineering perspective it doesn’t live up to any modern standards. One thing we could improve is to move address details to a new designated table and to create a new class that would represent addresses in our application.

Step 2: Tables refactoring, data migration and new types

In this step we need to do a couple of things. A new address table is needed, along with a new class to represent it in our system. Those tasks are quite easy to pull off, but we need to be extra careful this time! Our main goal is to keep the old application version running with no issues. Any shortcut at this point may lead us astray – it’s better to change software in a safe, incremental way.

Let’s start with creating a new table in our database.

CREATE TABLE address
(
   id           UUID PRIMARY KEY NOT NULL,
   customer_id  UUID NOT NULL,
   street       VARCHAR(50) NOT NULL,
   city         VARCHAR(50) NOT NULL,
   postal_code  VARCHAR(8) NOT NULL,
   building_no  VARCHAR(5) NOT NULL,
   apartment_no VARCHAR(50),
   CONSTRAINT fk_customer
       FOREIGN KEY (customer_id)
           REFERENCES customers (id)
);

As a part of this step we need to migrate data from the customers table to the address table. There are multiple ways to do this, and each database implementation handles it differently, so I cannot provide a silver-bullet SQL migration script.

When creating a migration script, take into consideration the amount of data you need to move. Search for batch operations you can execute in your database solution.
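For the Postgres setup used in this article, a sketch of such a migration might look like the following. It assumes the customers and address tables shown above; gen_random_uuid() is built into Postgres 13+ (older versions need the pgcrypto extension).

```sql
-- Copy the legacy address columns into the new address table.
-- The NOT EXISTS guard makes the script safe to re-run.
INSERT INTO address (id, customer_id, street, city, postal_code, building_no, apartment_no)
SELECT gen_random_uuid(), c.id, c.street, c.city, c.postal_code, c.building_no, c.apartment_no
FROM customers c
WHERE NOT EXISTS (SELECT 1 FROM address a WHERE a.customer_id = c.id);
```

For large tables, the same statement can be run in batches, for example by restricting the SELECT to a range of customer ids per run.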

The next step is to add a new designated class that matches the table structure.

public class Address {
   private UUID id;
   private UUID customerId;
   private String street;
   private String city;
   private String postalCode;
   private String buildingNo;
   private String apartmentNo;
}

So far so good! Having the address in an independent table with a foreign-key reference to the customer is a good idea – it’s a clear separation of concerns, each entity can change independently of the other, and customers may have various addresses, such as correspondence or delivery addresses.

It’s tempting to remove old address columns from the customer table and address fields from the Customer entity, but we cannot do it yet! If we removed the columns, the “blue” version of our application would break! It would try to persist/read values to/from the columns that no longer exist! Our production environment would break – that’s never a good situation.

No one – neither customers nor investors – would appreciate our efforts if that’s the result. 

Instead of focusing on the final database and application form, we need to find a middle step in between, one that allows two different versions of the application to run in parallel. A service that orchestrates the logic of customer creation needs to persist fields the old way and the new way. It may look like this:

public void save(CustomerDTO dto){
   var customer = new Customer();
   customer.setId(UUID.randomUUID());
   customer.setFirstName(dto.getFirstName());
   customer.setLastName(dto.getLastName());
   customer.setCity(dto.getCity());
   customer.setPostalCode(dto.getPostalCode());
   customer.setStreet(dto.getStreet());
   customer.setBuildingNo(dto.getBuildingNo());
   customer.setApartmentNo(dto.getApartmentNo());
   customerDatabaseRepository.save(customer);
   var address = new Address();
   address.setId(UUID.randomUUID());
   address.setCustomerId(customer.getId());
   address.setCity(dto.getCity());
   address.setPostalCode(dto.getPostalCode());
   address.setStreet(dto.getStreet());
   address.setBuildingNo(dto.getBuildingNo());
   address.setApartmentNo(dto.getApartmentNo());
   addressDatabaseRepository.save(address);
}


As you can see, we now store a customer’s address in two places. We are doing that on purpose to ensure backwards compatibility and the correct behavior of two different application versions running in parallel.

Reading values now looks like this:


public CustomerDTO getCustomerById(UUID customerId){
   var customer = customerDatabaseRepository.getById(customerId);
   var customerDTO = new CustomerDTO();
   customerDTO.setId(customer.getId());
   customerDTO.setFirstName(customer.getFirstName());
   customerDTO.setLastName(customer.getLastName());
   var address = addressDatabaseRepository.findAddressByCustomerId(customerId);
   customerDTO.setCity(address.getCity());
   customerDTO.setPostalCode(address.getPostalCode());
   customerDTO.setStreet(address.getStreet());
   customerDTO.setBuildingNo(address.getBuildingNo());
   customerDTO.setApartmentNo(address.getApartmentNo());
   return customerDTO;
}


In this step the read model uses data stored in the new address table. At this point we still need to save address details the old way to stay backwards compatible, but reading can safely be switched to the new table.

We can deploy the new “green” version to our environment and test it! When everything is approved, we can switch the traffic from the “blue” version to the “green” one, making our “green” the new “blue”. In case of issues, we can switch back to “blue” once again with no database changes. Everything is backwards compatible, so we can switch safely.
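Before switching traffic, it’s worth verifying that the dual-write keeps both copies consistent. A Postgres-flavored sanity check – a sketch assuming the tables defined above – might look like this:

```sql
-- Customers whose legacy columns disagree with the new address table
-- (or that have no address row at all). An empty result means both copies agree.
SELECT c.id
FROM customers c
LEFT JOIN address a ON a.customer_id = c.id
WHERE a.id IS NULL
   OR c.street IS DISTINCT FROM a.street
   OR c.city IS DISTINCT FROM a.city
   OR c.postal_code IS DISTINCT FROM a.postal_code
   OR c.building_no IS DISTINCT FROM a.building_no
   OR c.apartment_no IS DISTINCT FROM a.apartment_no;
```

IS DISTINCT FROM is used instead of <> so that NULL values (such as a missing apartment number) compare as equal rather than unknown.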

 

Step 3: Code cleanup

Now it’s time for the next phase! The version currently running on production – “blue” – still uses the address columns in the customers table. Remember – our service persists the data to both the customers and address tables. How can we change our code and keep our setup backwards compatible? Let’s change only the code for now, as the old columns are still in use.

Let’s start with removing the address fields from the Customer class.

public class Customer {
   @Id
   private UUID id;
   private String firstName;
   private String lastName;
}


The next step is to adjust the service logic. It no longer needs to store customer addresses in the customers table – we can remove that part and persist addresses only in the address table. The code is much cleaner and easier to maintain.


public void save(CustomerDTO dto){
   var customer = new Customer();
   customer.setId(UUID.randomUUID());
   customer.setFirstName(dto.getFirstName());
   customer.setLastName(dto.getLastName());
   customerDatabaseRepository.save(customer);
   var address = new Address();
   address.setId(UUID.randomUUID());
   address.setCustomerId(customer.getId());
   address.setCity(dto.getCity());
   address.setPostalCode(dto.getPostalCode());
   address.setStreet(dto.getStreet());
   address.setBuildingNo(dto.getBuildingNo());
   address.setApartmentNo(dto.getApartmentNo());
   addressDatabaseRepository.save(address);
}

We removed the address fields from the Customer class, but the columns are still in the database and, even worse, some of them have NOT NULL constraints! We need to drop those constraints, because otherwise we will not be able to persist customers in the customers table.

ALTER TABLE bgdb.customers
   ALTER street DROP NOT NULL,
   ALTER city DROP NOT NULL,
   ALTER postal_code DROP NOT NULL,
   ALTER building_no DROP NOT NULL;

Reading logic doesn’t need to be changed. It’s been updated in the previous version and no additional alterations are required.

Let’s deploy a new version of our application to the “green” environment. Everything should work fine – the tables we write to and read from are all there, so our application can easily interact with the persistence layer. What about the production – “blue” – environment? It still writes address details to two tables – customers and address. It also works with no issues, as all tables and columns are there.

Switching to the new version or rolling back to the previous one is safe, as we paid extra attention to the backwards compatibility aspect of our application development. We can now safely switch production routing to our green environment and mark it as “blue”.

Step 4: Database cleanup

Phew, it’s been a long journey, but there’s one more step ahead of us! Luckily it’s very simple and safe to execute. We focused on backwards compatibility at every step we took, and now it really pays off. This last change is the missing piece of the puzzle in our migration process.

Do you remember the – now redundant and unused – address columns in the `customers` table? We haven’t removed them yet. Can we do it now? The current application version stores addresses only in the `address` table, so the address columns in the `customers` table are no longer used! We can remove them with the script below:

ALTER TABLE customers
   DROP COLUMN street,
   DROP COLUMN city,
   DROP COLUMN postal_code,
   DROP COLUMN building_no,
   DROP COLUMN apartment_no;

Edge cases and constraints

There are a couple of things that you need to be extra careful about when adding and removing columns or tables. Let’s talk about them briefly.

  1. Stored procedures – stored procedures may refer to columns that seem excessive from the current application code’s perspective. If you remove or rename columns, the stored procedure will stop working correctly. Alter stored procedures in a backwards compatible way.
  2. Triggers – exactly the same case as with stored procedures – if a trigger function uses a column that no longer exists, it may lead to incorrect behavior. You need to update trigger functions the same way you update your table definitions. Remember about backwards compatibility.
  3. Non-null columns – this type of constraint is very commonly used. It’s very likely that you will need to deal with it in a reasonable way every time you alter table and column definitions. In our example in step 2 we were persisting values in two places – the customers table and the address table. If we wrote code that only persists address details in the new address table, the NOT NULL constraints in the customers table would prevent successful writes.
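To make the trigger case concrete, here is a hypothetical Postgres sketch. It assumes an audit trigger function, log_customer_change, and a customer_audit table – neither exists in our example project – that originally read the legacy NEW.city column. During the transition window (step 3, when the legacy columns are nullable but not yet dropped), it can fall back to the new address table:

```sql
-- Hypothetical trigger function: tolerate the legacy city column being NULL
-- by falling back to the new address table.
CREATE OR REPLACE FUNCTION log_customer_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO customer_audit (customer_id, city, changed_at)
    VALUES (
        NEW.id,
        COALESCE(NEW.city,
                 (SELECT city FROM address WHERE customer_id = NEW.id LIMIT 1)),
        now()
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

Once the legacy columns are dropped in step 4, the function would need a second backwards compatible update that removes the NEW.city reference entirely.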

Blue/Green Spinnaker demo with screenshots

In this last section I’d like to present how easily we can configure a blue/green deployment of our application in Spinnaker. Spinnaker is a great tool that simplifies the entire process and takes care of all the details for us.

We will create a pipeline that has 4 stages:

  1. Create a simple service to expose our application publicly,
  2. Deploy version 1 of our app,
  3. Manually judge whether we are satisfied with the application’s behavior,
  4. Deploy version 2 of our app.

Let’s go!

At the very beginning we need to create an empty pipeline:

Then we need to add the first stage – the service to expose our application to the public.

The service manifest is as follows:

apiVersion: v1
kind: Service
metadata:
  name: rs-service
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: rslb
  type: LoadBalancer

The next stage is of the same type, but this time, instead of a service, we need to deploy a replica set of pods with our application image. The manifest is as follows:


apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    strategy.spinnaker.io/max-version-history: '2'
    traffic.spinnaker.io/load-balancers: '["service rs-service"]'
  labels:
    tier: rs-demo
  name: rs-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: rs-demo
  template:
    metadata:
      labels:
        tier: rs-demo
    spec:
      containers:
        - env:
            - name: DB_HOST
              value: postgresql
            - name: DB_PORT
              value: '5432'
            - name: DB_USER
              value: admin
            - name: DB_PASS
              value: pass
          image: 'kkotula/bgdb:v11-x86_64'
          name: bgdb-v1

Specify rollout strategy options:

So far so good! Let’s add a step where we will manually judge if everything works correctly.


At this stage we will call the publicly exposed service to see if the application works as expected. If it does, we will continue to the next step.

In my case the publicly available endpoint is:

http://a2755ecf075de4f969e7f9ffd20ad71d-126852624.us-west-2.elb.amazonaws.com, so I will send a GET request to get a user that I created previously: http://a2755ecf075de4f969e7f9ffd20ad71d-126852624.us-west-2.elb.amazonaws.com:8080/api/v1/customer/03d25df8-37d2-4d6b-afba-c3b47f267d96


The next stage is to deploy a newer application version. Let’s add one more stage of type deployment – you can reuse the replica set manifest from stage 2, you just need to update the image version. Remember about rollout strategies – they need to be the same as in the previous deployment step.

You should have a pipeline like this:

Summary

In this article we went over a simple blue/green deployment that required database alterations along the way. The deployment strategy we used needs a lot of planning to keep at least two different application versions backwards compatible. To achieve this, we introduced changes in a gradual, step-by-step way to support both the previous and new versions of our application. Some of those steps exist only to let multiple versions run in parallel with no errors and no data loss. That’s tricky on its own, but there are other things to consider, such as stored procedures and trigger functions.

Plan each deployment ahead and try to think of any risky areas in your code and the infrastructure resources you use. Don’t rush any changes – consult your colleagues and ask for a review of your plan and idea.

Good luck. If you have further questions or need help with your blue/green deployment strategy, reach out to us.
