Do you feel you deploy often enough?
József Pálfi

Over the last decade, working out a good Continuous Integration and Continuous Deployment (CI/CD) strategy has become a basic requirement at most IT companies.

The average life expectancy of a Fortune 500 company has fallen dramatically (from 75 years to 15 years) compared with a century ago. "Unicorns" are growing rapidly, and fast reaction times and high availability matter more than ever. How can we guarantee continuous releases without downtime and bugs? How fast can we recover when something goes wrong? It is worth setting up a policy that ensures both fast releases and fast recovery.


Elasticity

First of all, elastic infrastructure is mandatory. A spike in resource usage at night should not impact performance. If your machines and software can scale up without manual intervention, you are on the right track. In the case of a monolithic application, once the module boundaries are clear, a good approach can be to break it up into microservices.
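
To make the idea concrete, here is a minimal sketch of such an autoscaling loop in Python; the thresholds and the scale_out/scale_in hooks are invented for illustration and would be backed by your infrastructure provider's API in practice:

```python
import time

SCALE_OUT_THRESHOLD = 0.75  # add capacity above 75% average utilization
SCALE_IN_THRESHOLD = 0.25   # remove capacity below 25%

def average_utilization() -> float:
    """Stub: query your monitoring system for average utilization."""
    return 0.5

def scale_out() -> None:
    """Stub: call your provider's API to add an instance."""
    print("adding an instance")

def scale_in() -> None:
    """Stub: call your provider's API to remove an instance."""
    print("removing an instance")

def autoscale_loop(poll_seconds: int = 60) -> None:
    """Adjust capacity automatically, so a nightly spike needs no human."""
    while True:
        load = average_utilization()
        if load > SCALE_OUT_THRESHOLD:
            scale_out()
        elif load < SCALE_IN_THRESHOLD:
            scale_in()
        time.sleep(poll_seconds)
```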


Roles and Tools

It is important to declare clear roles: for example, who is responsible when there is an issue with the production app. Automate everything you can. Building, testing, packaging, versioning, and releasing can all be handled by well-defined pipelines, maintained with the help of a CI/CD tool.

However, CI/CD is not just about the tooling itself. Every master branch has to be ready to deploy. Every developer has to merge the latest master at least once a day. You have to provide a development-ready environment when a new developer joins your team. And if a build breaks, it has to be fixed within 10 minutes.

As Jez Humble says: "Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way."


Blue/green deployment

Blue/green deployment is a great way to minimize downtime. You keep two versions deployed side by side and switch between them with a "router". This approach also gives you a rapid way to roll back (just switch the router back) and lets you compare software variants (for example, testing which variant produces incident reports that are easier for sysadmins to interpret).
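
As a minimal sketch of the idea (the environment names and URLs are made up for illustration; in practice the "router" is usually a load balancer or reverse proxy), switching and rolling back can be as simple as repointing one reference:

```python
# A toy "router" that points traffic at one of two live environments.
ENVIRONMENTS = {
    "blue": "http://blue.internal:8080",    # currently serving traffic
    "green": "http://green.internal:8080",  # idle, ready for the next release
}

class Router:
    def __init__(self, active: str = "blue") -> None:
        self.active = active

    def target(self) -> str:
        """Where live traffic goes right now."""
        return ENVIRONMENTS[self.active]

    def switch(self) -> str:
        """Flip traffic to the other environment; calling it again rolls back."""
        self.active = "green" if self.active == "blue" else "blue"
        return self.target()

router = Router()
print(router.target())  # http://blue.internal:8080
router.switch()         # the new release goes live on green
print(router.target())  # http://green.internal:8080
router.switch()         # rollback: traffic returns to blue
```

Because both versions stay deployed, a rollback is just another switch, which is what keeps recovery time so low.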


Canary release

The canary release is another model: you deploy the application to a few customers first, and if it seems to work fine in production, you roll it out to the next group of users, and so on. The BitNinja agent release works in a similar way. First, we upgrade only our hosting provider's servers (45 production servers, including shared hosting, individual virtual servers, and SMTP servers). If there are no errors, the new version is made public, and servers with auto-update turned on upgrade themselves to the latest version. Meanwhile, we monitor some critical metrics, e.g. how the load average changed with the new version. Finally, we notify the rest of the agents about the upgrade.
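
A rough sketch of this kind of staged rollout might look like the following; the cohort names loosely mirror the stages described above, while deploy_to and healthy are hypothetical stubs:

```python
import time

# Illustrative rollout stages, loosely following the process above.
STAGES = [
    "hosting-provider-servers",  # the first 45 production servers
    "auto-update-servers",       # agents with auto-update turned on
    "all-remaining-agents",      # everyone else is notified last
]

def deploy_to(cohort: str, version: str) -> None:
    """Stub: push the given agent version to one cohort of servers."""
    print(f"deploying {version} to {cohort}")

def healthy(cohort: str) -> bool:
    """Stub: check critical metrics, e.g. load average, after the upgrade."""
    return True

def canary_rollout(version: str, soak_seconds: int = 3600) -> None:
    for cohort in STAGES:
        deploy_to(cohort, version)
        time.sleep(soak_seconds)  # let the new version soak before widening
        if not healthy(cohort):
            print(f"{cohort} looks unhealthy, stopping the rollout")
            return  # remaining cohorts stay on the old version
    print(f"{version} is fully rolled out")
```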



Pipelines

The role of a pipeline is to describe the steps to execute until the software is deployed to production. With Jenkins and its Pipeline plugin, we can easily implement ours. OpenShift also provides a CLI tool (oc) to help with deployment, and a clear pipeline can be built from those commands.
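
For illustration, a deploy stage could drive the oc commands from a short script like this sketch; the application name is made up, and the exact commands depend on your OpenShift setup:

```python
import subprocess

def run(cmd: list[str]) -> None:
    """Run one pipeline step; a non-zero exit code fails the pipeline."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy(app: str = "myapp") -> None:
    # Build the image from the latest source and wait for it to finish.
    run(["oc", "start-build", app, "--follow"])
    # Trigger a new deployment of the freshly built image.
    run(["oc", "rollout", "latest", f"dc/{app}"])
    # Block until the rollout succeeds, or fail the pipeline if it doesn't.
    run(["oc", "rollout", "status", f"dc/{app}"])

if __name__ == "__main__":
    deploy()
```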


Finally, just a few words about client-side CI/CD. According to the Back-end for Front-end (BFF) pattern, you create multiple back-ends, one per front-end, instead of a single general-purpose back-end that requires more maintenance. BFFs are implemented per specific user experience, so each component remains small and focused on the right thing.
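
As a toy sketch of the pattern (the framework choice, route, and payload are all assumptions, not from the article), a mobile-specific BFF might expose exactly what the mobile screen needs and nothing more:

```python
from flask import Flask, jsonify

# A BFF dedicated to the mobile front-end; the web front-end would get
# its own, separately deployed BFF tailored to its own screens.
app = Flask("mobile-bff")

@app.route("/home")
def home():
    # A real BFF would call the general-purpose services here and trim
    # the response down to exactly what the mobile home screen renders.
    return jsonify({
        "headline": "Welcome back!",
        "unread_notifications": 3,  # mobile shows a badge, web shows a list
    })

if __name__ == "__main__":
    app.run(port=5001)
```

Because each BFF serves a single user experience, it can evolve and deploy on its own schedule, which fits the frequent-release goal of this whole article.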

Share your ideas about this article with us!
