Pipeline Conference 2014

By: Paul Bowsher

Tags:

  • conferences
  • continuous delivery

Last Tuesday I had the opportunity to represent globaldev at the Pipeline Continuous Delivery conference. We’ve been gradually moving towards CD here and I wanted to get some insight into the problems other teams faced on their journey.

Keynote

Dave Farley's Keynote talk, discussing the point of software development

We were very fortunate to hear a keynote from Dave Farley, co-author of Continuous Delivery, the seminal work on the subject. He said that a golden age of software development is ahead as engineers continue to embrace the scientific method - question, hypothesise, predict, test. Continuous Delivery is key to this as it allows us to iterate more quickly on that process and test more hypotheses.

We learned that a key metric for the efficiency of your software delivery process is how quickly you can deliver the smallest change to production with very high confidence. For large banks this is on the order of six months; in ideal circumstances it can be around 30 minutes. For us at globaldev I’d say it’s probably around half a day, so whilst we’re doing pretty well there’s still a lot of room for improvement.

One quote that particularly hit home for me following our recent Agile transformation was this from Forrester Research:

If Agile software development was the opening act to a great performance, continuous delivery is the headliner.

For the doubters in the room from large organisations that didn’t think CD would scale to their size, Dave came armed with some incredible stats. Google have one single code repository containing over 100 million lines of code and over 20 new commits every minute. Every single commit is built, with over 100 million test cases executed every day.

Amazon’s stats are already pretty famous in the CD world: they deploy new code to production every 11.6 seconds on average, with 10,000 hosts simultaneously receiving deployments at any one time. Between 2006 and 2011, this resulted in a 75% reduction in outages caused by deployments and a 90% reduction in outage minutes.

HP’s LaserJet firmware team switched to CD and spent a while constructing software simulators for all their printer models, allowing extremely quick testing of new release candidates. Before the switch the team spent 65% of their time porting, supporting and testing their code. Now this is dramatically reduced, with 40% of their time spent on innovation and 20% on actively improving the infrastructure around the code.

Dave’s slides are available as a PDF here.

Ship it! :ship: :it:

The next talk I attended was by Phil Wills from The Guardian on how they ship code. This was a great free-flowing talk, with a lot of audience conversation between slides and ad-hoc demos of running systems.

Phil questioned the aversion to deploying on Friday. If deployments are a regularly practised activity, there’s no reason to avoid them at arbitrary times. Risk around deployments should be lowered to the point at which it’s no longer a concern. The excuses for not deploying right now should be removed as far as possible.

The Guardian have built a suite of their own (open source) tools to manage deployments and radiate info about production applications. They use Riff Raff to push code out to AWS via autoscaling and CloudFormation, then their Status App to monitor performance and cost of EC2 instances in production.

The slides from Phil’s talk are available, but they only tell a fraction of the story; the talk itself was unfortunately not recorded.

Open Space Discussion: Database Migrations

The conference provided a couple of rooms for open discussions on topics of the delegates’ choice. Whoever suggests a topic chairs the discussion.

Alex Yates of Red Gate Software suggested the Database Migrations topic, which was well attended and produced some good discussion. Most of it centred on ensuring QA, Test and Staging environments had accurate schemas applied, and on making test data available when needed. Unfortunately most of the participants used MSSQL or Oracle, so no one had much of a point of reference for the points I raised.

There were however a couple of DBMS-agnostic takeaways. Firstly, acceptance testing can and should be performed on a minimal set of data in order to speed up setup and teardown times. Production-like data should be reserved for capacity testing. In addition, a good practice is to commit dependent data (e.g. INSERT statements for lookup tables like country codes) to source control alongside your DDL so that a functional database can be recreated from source for Test environments.
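As a minimal sketch of that second takeaway (using SQLite for illustration; the table and file contents here are hypothetical, not anything discussed at the conference), keeping the DDL and the dependent INSERT statements together in source control means a Test database can be rebuilt from scratch on demand:

```python
import sqlite3

# In a real project these two strings would live as .sql files in source
# control alongside the application code; they're inlined here for brevity.
SCHEMA_DDL = """
CREATE TABLE countries (
    code TEXT PRIMARY KEY,  -- ISO 3166-1 alpha-2
    name TEXT NOT NULL
);
"""

SEED_DATA = """
INSERT INTO countries (code, name) VALUES ('GB', 'United Kingdom');
INSERT INTO countries (code, name) VALUES ('US', 'United States');
"""

def build_test_database(path=":memory:"):
    """Recreate a functional database from source for a Test environment."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA_DDL)   # apply the schema
    conn.executescript(SEED_DATA)    # apply the dependent lookup data
    return conn

conn = build_test_database()
rows = conn.execute("SELECT code, name FROM countries ORDER BY code").fetchall()
print(rows)  # [('GB', 'United Kingdom'), ('US', 'United States')]
```

Because the seed data travels with the DDL, every environment gets the same lookup tables without anyone copying data around by hand.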

Introduction to Docker

Pini Reznik from UglyDuckling gave a great introduction to Docker for those new to the concept. I mainly attended this talk as I wanted to see the demo of UglyDuckling’s new Docker-based PaaS antitude. It’s an interesting concept, but it doesn’t feel like there’s much here that isn’t already included in or on the roadmap of more established PaaSes such as Deis or Cloud Foundry.

Pini’s slides are also available at the Pipeline site.

Honourable mention - Scaling Continuous Delivery at Unruly

This talk was on at the same time as the database discussion, and I wish I’d attended it instead. From the slides it seems like Unruly practise a particularly extreme form of Continuous Delivery - have an idea, write the code, launch it to production and measure it. If it breaks, find out how badly it has broken and fix it very quickly. They term this “NagDD”, Nagios-driven Development.

Whilst this may seem reckless on the surface, we do similar things on the odd occasion. We have a few tasks that run at specific times of day, and if they fail we can simply fix and run them again. Depending on the nature of the change, we will happily “test” this in production and closely monitor the next scheduled run.
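The shape of that approach could be sketched like this (a hypothetical example, not our actual code; the task name and the idea that monitoring watches the error log are stand-ins):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("scheduled_task")

def send_nightly_report():
    """Hypothetical scheduled task; replace with real work."""
    return "report sent"

def run_with_retry(task, retries=2, delay_seconds=1):
    """Run a scheduled task; on failure, log loudly and retry.

    In production the error log is what the monitoring (e.g. Nagios)
    alerts on - a human can then fix the task and simply re-run it.
    """
    for attempt in range(1, retries + 2):
        try:
            result = task()
            log.info("attempt %d succeeded: %s", attempt, result)
            return result
        except Exception:
            log.exception("attempt %d failed", attempt)
            time.sleep(delay_seconds)
    raise RuntimeError("task failed after retries")

run_with_retry(send_nightly_report)
```

The point is not the retry loop itself but the feedback loop around it: failures are cheap to detect and cheap to re-run, which is what makes “testing” a change against the next scheduled run acceptable.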

Summary

Overall this was a well-organised conference with some great speakers, but there weren’t many delegates at a similar stage of the CD journey to us, so comparing war stories wasn’t a regular occurrence. It did however serve to really reinforce the point that CD definitely is not a fad, and is one of the biggest levers a company can pull to improve the velocity of their development team.

Many thanks to Global Personals for sponsoring my attendance, it’ll be great to put more of these techniques into practice.


About the Author

Paul Bowsher

Paul is a Senior Software Engineer at Venntro and enjoys making response time graphs go down and throughput graphs go up. He'll bore you senseless with talk about American Football and is probably the primary contributor of useless gifs in the company Slack chatroom.