
Blue-Green deployment using Spinnaker, Packer and Jenkins on AWS


Key Objectives:

  1. Create immutable server images for Windows or Linux servers.
  2. Achieve the desired state using configuration management tools such as Chef, PowerShell, etc.
  3. Deploy on a public cloud provider such as AWS.
  4. Deploy the web application without downtime.
  5. Support rollback of the deployed web application.

Solution:

To manage continuous delivery with Blue-Green deployment using Spinnaker, we need a delivery pipeline that consists of build, bake, and deploy phases.

Build Phase

Spinnaker integrates with Jenkins through its Igor microservice. The build phase is essentially a Jenkins job consisting of the following stages:

  1. Check out the source code from a tagged Git/SVN branch.
  2. Build the artifact with a build tool such as MSBuild, Maven, Gradle, or SBT.
  3. Publish the artifact to an artifact repository such as AWS S3, JFrog Artifactory, or a NuGet server, as required.
  4. Pass the output of the build phase, such as the artifact URL, to Spinnaker so it can be used in the bake phase. The supported format is JSON, which is accessed using the Spinnaker expression language.
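As a sketch, the Jenkins job could archive a small JSON/properties file like the one below (the file name, bucket, and keys are illustrative, not from the original setup):

```json
{
  "artifact_url": "https://s3.amazonaws.com/my-artifact-bucket/webapp-1.0.0.zip",
  "artifact_version": "1.0.0"
}
```

If the Jenkins stage in Spinnaker is configured with this file as its property file, a later stage can read the value with a pipeline expression such as `${#stage('Build')['context']['artifact_url']}` (the exact expression path can vary by Spinnaker version).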

Bake Phase

Baking immutable server images (a.k.a. golden images) for target platforms such as AWS and GCP is done by Packer. Spinnaker bundles Packer by default, but only for baking Linux server images; to bake a Windows server image, Packer needs to run as a separate component, for example through a Jenkins job. The following steps are performed in the bake phase:

  1. Execute the Packer Jenkins job from Spinnaker with parameters such as the artifact URL, AWS credentials, and instance type.
  2. Packer supports different provisioners, such as chef-client, shell, or (for Windows servers) PowerShell, which bring the immutable server image to the desired state. For example, a web server AMI starts from a Windows Server base image, on top of which .NET and IIS are installed and the web application is configured.
  3. Once the desired state is achieved, Packer creates the AMI on AWS and outputs the AMI ID, which must be passed back to Spinnaker for use in the deploy phase.
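A minimal Packer template for such a Windows web-server AMI might look like the sketch below. The source AMI, region, and install commands are placeholders; a real template also needs WinRM bootstrapping (via user data) and AWS credentials supplied through variables or the environment:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.medium",
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "ami_name": "webapp-golden-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "powershell",
      "inline": [
        "Install-WindowsFeature -Name Web-Server -IncludeManagementTools",
        "Install-WindowsFeature -Name NET-Framework-45-ASPNET"
      ]
    }
  ]
}
```

Running `packer build` against this template produces the AMI ID on stdout, which the Jenkins job can capture and hand back to Spinnaker.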

Deploy Phase

Spinnaker provides a deploy stage in the delivery pipeline with different deployment strategies. Red/Black (also known as Blue-Green) deployment is one such strategy, enabling deployments without downtime.

On AWS, a Blue-Green deployment requires an Elastic Load Balancer (ELB) and an Auto Scaling group (ASG) whose launch configuration points to the immutable server image (AMI ID) produced by the bake phase. Every new deployment creates a new ASG whose launch configuration references the freshly baked AMI, and this ASG is attached to the ELB. The pipeline then waits for the newly launched instances to pass the ELB health checks; once they do, the old ASG is scaled down to zero by setting its min/max configuration to zero.

Spinnaker handles this complexity smoothly; DevOps only needs to tune the ELB health-check parameters to the application's needs.
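A deploy stage using the Red/Black strategy looks roughly like the following pipeline JSON fragment. The account, application, load balancer, and zone names are illustrative, and exact field names can vary by Spinnaker version:

```json
{
  "type": "deploy",
  "name": "Deploy to Prod",
  "clusters": [
    {
      "provider": "aws",
      "account": "my-aws-account",
      "application": "webapp",
      "strategy": "redblack",
      "maxRemainingAsgs": 2,
      "loadBalancers": ["webapp-elb"],
      "capacity": { "min": 2, "desired": 2, "max": 4 },
      "availabilityZones": { "us-east-1": ["us-east-1a", "us-east-1b"] }
    }
  ]
}
```

Here `maxRemainingAsgs` keeps a couple of old server groups around, which is what makes one-click rollback to the previous AMI possible.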

Conclusion

One can achieve continuous delivery by following Blue-Green deployment for web applications using Spinnaker, Jenkins, and Packer on public cloud providers such as AWS and GCP. Spinnaker's integration with Jenkins can even help manage non-web resources (databases, platform-specific resources) of cloud providers using Terraform.


How to do Continuous Integration and Continuous deployment for a target platform


Problem

On demand, an application should be pushed to a production environment. An application must be validated, tested, and tagged before release. We need to provide change history and rollback capability. The deployment process should be applicable to any target platform, e.g. a public or private cloud. An application should be released without downtime.

Solution:

The solution consists of two phases: a Continuous Integration pipeline and a Continuous Deployment pipeline.

1. Continuous Integration

The steps involved in the continuous integration pipeline using Jenkins (the build server) are shown below:

  1. Check out the source code from the central repository.
  2. Build and compile source code.
  3. Run static code analysis.
  4. Run unit tests.
  5. Build all artifacts.
  6. Deploy release on dev environment.
  7. Run functional test suite.

If all tests pass (or on a manual trigger), promote the build to the QA environment and do the following:

  1. Run smoke and sanity tests.
  2. Run all behavior driven acceptance tests.
  3. On success, tag the branch for stage promotion based on the agreed naming convention.
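The CI stages above can be sketched as a declarative Jenkinsfile. The stage names, shell scripts, and Maven commands here are illustrative placeholders; substitute your own build and test tooling:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout')         { steps { checkout scm } }
        stage('Build')            { steps { sh 'mvn -B compile' } }
        stage('Static analysis')  { steps { sh 'mvn -B sonar:sonar' } }
        stage('Unit tests')       { steps { sh 'mvn -B test' } }
        stage('Package')          { steps { sh 'mvn -B package' } }
        stage('Deploy to dev')    { steps { sh './deploy.sh dev' } }
        stage('Functional tests') { steps { sh './run-functional-tests.sh dev' } }
        stage('Promote to QA') {
            // manual gate, as described above; omit for fully automatic promotion
            input { message 'Promote to QA?' }
            steps {
                sh './deploy.sh qa'
                sh './run-smoke-tests.sh qa'
                sh './run-acceptance-tests.sh qa'
                // tag the tested revision for stage promotion
                sh 'git tag -a "rel-${BUILD_NUMBER}" -m "QA passed" && git push origin "rel-${BUILD_NUMBER}"'
            }
        }
    }
}
```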

2. Continuous deployment

The steps involved in the continuous deployment pipeline using Jenkins are shown below:

  1. Checkout the source code of tagged release branch (which was tested on stage earlier) from Git.
  2. Build the source code using tools like MSBuild, Maven, Gradle (used to build the application).
  3. Publish the artifact to an AWS S3 bucket and update its URL in the Chef server's data bag.
  4. AWS S3 bucket is used as artifact storage.
  5. Pre-bake the machine image for the target platform using Packer (it is used to build machine images for various target platforms, e.g. AWS, GCE, VirtualBox, etc.) and bring it to the desired state with chef-client.
  6. Deploy pre-baked machine image to target platform using Terraform (it is used across multiple public or private cloud providers for infrastructure provisioning) and update infrastructure information in Consul (used for service discovery and configuration store as key/value pair).
  7. Chef server is loaded with required environment’s roles, cookbooks and data-bags for deployment.
  8. AWS is used as target platform for deployment.
  9. A Consul server is deployed on AWS, which helps in service discovery.
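Step 3 can be sketched with a data bag item like the following (the bucket, item, and key names are illustrative); the chef-client run on the baked image reads it to fetch the artifact:

```json
{
  "id": "webapp",
  "artifact_url": "https://s3.amazonaws.com/my-artifact-bucket/webapp-1.0.0.zip",
  "version": "1.0.0"
}
```

The Jenkins job can upload it with `knife data bag from file artifacts webapp.json` after publishing the artifact to S3.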

For reference

An AWS deployment architecture of a two-tier web application with service discovery is shown below:

  1. VPC is used to set up a private data center in an AWS region, consisting of public and private subnets across two or more availability zones.
  2. Internet Gateway to allow public access to services.
  3. Route 53 to manage DNS entry for ELB (elastic load balancing).
  4. Highly available AWS Elastic load balancing.
  5. Autoscale group to manage the web application’s deployment using launch configuration with pre-baked AMI and instance type.
  6. NAT instance to control the public access for private subnet.
  7. Database cluster e.g. MongoDB sharded cluster as storage.
  8. Autoscale group to manage Consul for service discovery and deployment configuration.
  9. CloudWatch alarms for autoscaling the web app.
  10. AWS IAM user for managing the AWS S3 bucket used for Elasticsearch snapshots.
  11. AWS S3 bucket for storing Elasticsearch snapshots.

Blue-green Deployments and Roll-backs

There are multiple strategies for managing blue-green or canary deployments on AWS as the target platform.
Here, a blue-green deployment is achieved by managing auto-scale groups with Terraform, executing the steps given below.

  1. Always create a new auto-scale group whose launch configuration points to the latest pre-baked AMI (looked up from Consul), then attach the existing elastic load balancer to it.

  2. Detach the old auto-scale group from the load balancer, allow a cool-down period, and then remove it.

    For a rollback, look up the previously deployed pre-baked AMI in Consul and repeat steps 1 and 2 above.
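One common way to express steps 1 and 2 in Terraform is to key the launch configuration's name to the AMI and use the `create_before_destroy` lifecycle, so the new auto-scale group is attached to the ELB before the old one is destroyed. The resource names, sizes, and Consul key path below are illustrative:

```hcl
# AMI ID written to Consul by the bake phase (path is an assumed convention)
data "consul_keys" "ami" {
  key {
    name = "ami_id"
    path = "deploy/webapp/current_ami"
  }
}

resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = data.consul_keys.ami.var.ami_id
  instance_type = "t2.medium"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  # keying the ASG name to the launch configuration forces a new ASG
  # whenever the AMI (and hence the launch configuration) changes
  name                 = "web-${aws_launch_configuration.web.name}"
  launch_configuration = aws_launch_configuration.web.name
  load_balancers       = ["webapp-elb"]
  min_size             = 2
  max_size             = 4
  health_check_type    = "ELB"

  lifecycle {
    create_before_destroy = true
  }
}
```

Because the ELB health check gates the swap, traffic only shifts once the new instances are healthy; a rollback is just another `terraform apply` with the previous AMI ID in Consul.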

Conclusion

This continuous integration and deployment strategy is based on an open-source stack. It is useful for on-premise private clouds like OpenStack and VMware as well as public cloud providers like AWS, GCE, and DigitalOcean. Tools like Packer, Terraform, Consul, Jenkins, and Chef truly help to achieve infrastructure as code and make the DevOps life simpler :).