If you haven’t yet, read Part 1
From Manual to Automated Deployment in 367 easy steps
After battling for months to go live with version one of what was then our fledgling auction management package, we finally got it out, warts and all. This consisted of a WPF desktop application running against a replicated SQL database across multiple sites. Soon after that we launched version one of the central back-end operations web app. And soon after that, a new public web site, internal web services, and public integration API services. Of course, every single part was being deployed manually. I knew we had to start automating things, and there was talk of getting a CI server up, but who had time for such things with all the urgent features the business needed, and all the production issues that had to be solved at the same time? (facepalm)
Time passed, the system grew, the team grew, the number of applications and endpoints grew, the deployments became longer, the scenarios became more complex, the problems came faster and bigger, business was getting frustrated, developers were getting tired, and the whole thing was getting really brittle and not so much fun no more.
Deployments got to the point where they were taking 2 to 4 hours. And because systems were in an inconsistent state during deployments, we could no longer deploy during business hours. So we started deploying on Friday nights at 9 PM. Wow, you say, production deployments on a weekly cycle… But they were killing us. We are a bunch of geeks who love programming. Loving infrastructure, server configurations and deployments… not so much. And it was none too healthy for our families and personal lives either.
We tried to get buy-in from business to invest time into this. But when business is facing the choice between getting the new Stupendifyer into production, or letting the dev team spend some quality time with their mysterious processes and problems… Let’s just say, no matter how good the intentions are, they always pick the blue pill.
Interlude: My wife, upon seeing the title of this blog post, and coming from a mother’s frame of reference (she gave birth to our 3 awesome children), mentioned that Continuous Delivery quite possibly sounds like one of the worst things she could ever imagine.
Slowly it started dawning on me that the people with the most power to make the dev team’s life better were the dev team. Business doesn’t live in our world. They do not feel our pain. They do not understand our problems. And hence they would probably never prioritise the ‘features’ we needed. Not because they were being nasty, or spiteful. Merely because they live in a totally different paradigm.
So we stopped asking for permission, and just started making the changes we needed to improve the delivery pipeline, interspersed with our ‘normal’ development. It was by no means easy, as there was no let-up in pressure, but the team, led by some champions, kept biting off the problems a little at a time. And so slowly, systematically, and consistently we started gaining higher ground. One by one, systems were configured to run through the CI server, and then to be deployed automatically. We wrote a tool to apply our DB scripts to the databases (DbScriptomate), and then automated it. We still deployed on Friday evenings at 9 PM, but deployments started going down to under 2 hours. Then to under 1 hour. Then to 30 minutes.
We got to the glorious place of 100% automated deployments.
We got the automation down, but things were not quite rosy yet. Often things broke because of the changes we deployed (yes, despite our battalion of automated tests), and then we spent hours over the weekend troubleshooting and sorting out issues. It was time for the next level.
In Part 3, we will go from Automated to Continuous