Red-Black deployment is a release technique that reduces downtime and risk by running two identical production environments called Red and Black.
At any time, only one of the environments is live, and the live environment serves all production traffic. For this example, Red is currently live and Black is idle (in this case, Black is scaled down to zero servers).
As we prepare a new release of our application, deployment and the final stage of testing take place in the environment that is not live: in this example, Black. Once we have deployed and fully tested the software in Black, we switch the ASG attached behind the ELB so that all incoming requests go to Black instead of Red. Black is now live, and Red is idle (scaled down to zero servers).
This technique can eliminate downtime due to application deployment. In addition, Red-Black deployment reduces risk: if something unexpected happens with our new release on Black, we can immediately roll back to the last version by switching back to Red.
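The live/idle switch and the instant rollback described above can be sketched as a few lines of code. This is a minimal illustrative model, not any AWS or Netflix API; the `Router` class and its method names are hypothetical.

```python
# Toy model of the Red-Black switch: exactly one environment is live
# at a time, and the previously live one is kept around for rollback.
# All names here are illustrative, not part of any real deployment API.

class Router:
    """Routes all production traffic to exactly one environment."""

    def __init__(self, live: str, idle: str):
        self.live = live
        self.idle = idle
        self.previous = None  # last live environment, kept for rollback

    def switch(self) -> None:
        """Promote the idle environment to live (e.g. Red -> Black)."""
        self.previous = self.live
        self.live, self.idle = self.idle, self.live

    def rollback(self) -> None:
        """Revert to the previously live environment after a bad release."""
        if self.previous is None:
            raise RuntimeError("nothing to roll back to")
        self.live, self.idle = self.previous, self.live
        self.previous = None


router = Router(live="Red", idle="Black")
router.switch()      # Black is now live, Red is idle
print(router.live)   # Black
router.rollback()    # new release misbehaves: Red is live again
print(router.live)   # Red
```

The key property is that both switch and rollback are a single atomic reassignment; no traffic is ever split between versions and no environment is rebuilt during the cutover.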
Below is a sample Red-Black deployment architecture that uses Netflix's cluster-management and deployment-management tooling.
The workflow of the above Red-Black architecture is as follows:
* When a new release is validated as stable, a new ASG (say, the Black ASG) is created with the latest LaunchConfig, whose AMI contains the new release.
* In this approach we reuse the existing ELB, which avoids any DNS record-set changes in Route53.
* The newly created Black ASG is attached to the ELB.
* Once all the servers inside the Black ASG pass the predefined health checks, the Red ASG (which was live) is detached from the ELB. We have complete control over whether to scale the Red ASG down to zero or keep the older version's servers running (in this deployment scenario we scale it down to zero).
* This approach lets us seamlessly deploy any new version of the application without any downtime.
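The steps above can be sketched as plain Python against a toy ELB/ASG model. This is a simulation only; in a real deployment each step maps onto AWS Auto Scaling and ELB API calls (attaching/detaching load balancers and updating desired capacity), and the class and resource names here are made up for illustration.

```python
# Toy simulation of the ASG-swap workflow: attach Black to the ELB,
# verify health, then detach Red and scale it down to zero.
# Classes and names are illustrative, not real AWS APIs.

class ASG:
    def __init__(self, name: str, desired_capacity: int):
        self.name = name
        self.desired_capacity = desired_capacity

    def healthy(self) -> bool:
        # Stand-in for "every instance passed the ELB health checks".
        return self.desired_capacity > 0


class ELB:
    def __init__(self):
        self.attached = []

    def attach(self, asg: ASG) -> None:
        self.attached.append(asg)

    def detach(self, asg: ASG) -> None:
        self.attached.remove(asg)


def red_black_swap(elb: ELB, red: ASG, black: ASG) -> ASG:
    """Attach Black, verify health, then detach Red and scale it to zero."""
    elb.attach(black)
    if not black.healthy():
        # Black never went live; detach it and keep Red serving traffic.
        elb.detach(black)
        return red
    elb.detach(red)
    red.desired_capacity = 0  # down-scale the old version to zero
    return black


elb = ELB()
red = ASG("red-asg", desired_capacity=2)
elb.attach(red)

black = ASG("black-asg", desired_capacity=2)
live = red_black_swap(elb, red, black)
print(live.name)             # black-asg
print(red.desired_capacity)  # 0
```

Note that Red is only detached after Black has passed its health checks, so the ELB always has at least one healthy ASG attached and no request window goes unserved.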
Advantages of reusing the ELB:
When switching between the two ASGs, use the same ELB for both. There are many reasons for this; one big one is that while ELBs are elastically scalable, they are also a black box. You have no control over how an ELB scales, and if your web service gets any significant load, a freshly created ELB will not be able to scale up fast enough to handle the traffic. This will result in dropped requests, and sad users.
When you use the same ELB for both sets of machines, it is already pre-scaled and ready to go. No calling up your AWS Technical Account Manager (TAM) and asking them to pre-scale an ELB for you.
And last but not least, you save cost by not creating an additional ELB :).