Why Jenkins?


I suppose this could just as easily be “Why Continuous Integration?” or “Why Automated Builds?”, but for the purposes of this article I’m going to concentrate on why we chose to automatically build our solution, which we did using, among other things, Jenkins.

The Situation

Our code base was around 2.5 million lines of mostly legacy C++ written over the last 20+ years, plus 100,000+ lines of C#. The C++ was all compiled in Visual Studio 2003 and the C# in 2005, 2008 and 2010. Most of the C# would, at the lowest level, use C++ libraries to access the database, so we needed to build it after the C++, which itself took about four iterations to build before all the project dependencies were resolved. That said, some of the C++ projects also referenced C# components, meaning they had to be rebuilt after we compiled the C#.
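To make the iteration problem concrete, here’s a hypothetical sketch in Python (the project names are invented, nothing like our real tree) of why a chain of project dependencies forces multiple whole-solution build passes:

```python
# Hypothetical sketch: each whole-solution pass can only build the projects
# whose dependencies were satisfied on an earlier pass, so a dependency
# chain N projects deep needs N passes.

def passes_needed(projects):
    """projects maps a project name to the set of projects it depends on."""
    built, passes = set(), 0
    while len(built) < len(projects):
        passes += 1
        # Projects not yet built whose dependencies are all already built.
        buildable = {name for name, deps in projects.items()
                     if name not in built and deps <= built}
        if not buildable:
            raise RuntimeError("circular dependency - nothing can build")
        built |= buildable
    return passes

# A four-deep chain, like our C++ tree, needs four passes.
chain = {"core": set(), "db": {"core"}, "gui": {"db"}, "app": {"gui"}}
print(passes_needed(chain))  # 4
```

Flattening the chain (making projects depend only on already-built layers) is exactly what reduces the pass count, which is what we eventually did once the build was automated.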

All this meant that it would take one person almost a full day to do a build (without the installer), and even then he didn’t always, through no fault of his own, get it right. So occasionally a build would make it to our QA department just before close of play; first thing the next morning they’d spend an hour reverting their test images and installing the new build, only to find that a component hadn’t been built correctly and the whole process would have to start over. To make matters worse, we’d have about five different branches under way at any one time, each piloting new, slightly different functionality for various clients. Not a nice job for poor Normski.

As you can probably imagine, it doesn’t really get much worse than this.

The Opportunity

The database we used was CodeBase, a derivative of dBase, and it had very much outstayed its welcome. We were constantly encountering issues: data corruption, because the file/record locking mechanisms couldn’t cope with large files or with third parties accessing the data files directly through a different driver; index file corruption, causing performance problems; and applications throwing errors all over the place, because on the platform we’d moved our customers to, the data files were cached by the server’s OS but accessed via a mapped drive by the clients, so the “database” quickly drifted into an inconsistent state. The issues were fundamental and just went on and on.

You’d be tempted to just throw in the towel and start again (we’d tried that on more than one occasion and it came to naught), but instead the decision was made to move the back end to a modern RDBMS. Not only was CodeBase directly at fault for many of our problems, but the restrictions it placed on the environment were stopping us from moving our customers to better hardware and a newer OS, and from implementing useful and much-requested functionality. It was costing us customers.

We settled on PostgreSQL after comparing cost, functionality and robustness, and so began the long process of converting the lowest levels of our code to access data in PostgreSQL, hopefully without affecting the logic. We looked at implementing a loader strategy that would let us switch between the two databases so we could change over gradually, but it very quickly became evident that supporting both was adding undue complexity to already very complex code, making it harder to update, test and debug. Instead we went for the “brave” approach, which meant the code wouldn’t run for fairly long stretches of time; we would, however, need to be able to build it often and run the unit tests.

This wasn’t going to be easy.

The Solution

Because of the need to build so often (I know, I know, we should’ve been doing this anyway) we couldn’t wait for Normski to do a build every time we made a round of changes. We’d heard tell of a man, a mysterious man, who could do this for us without being asked, every time a change was submitted to source control, and he’d follow the same steps in the same way every single time. It almost seemed too good to be true. Jenkins was his name.

We got a Jenkins server set up and created a slave, so that we could drop and create new slaves without affecting the work we’d done on the master. We added a plugin to Jenkins to hook into our source control; it would periodically poll for changes (given the time it took to build the entire solution, we didn’t want to build on every change) and, on finding any, kick off a new build.
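For reference, Jenkins expresses a poll schedule as a cron-style expression on the job’s “Poll SCM” trigger. Something like the following (an illustrative schedule, not necessarily the one we used) checks source control roughly every 30 minutes, with the `H` token spreading jobs out so they don’t all poll at the same moment:

```
H/30 * * * *
```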

The next step was to set up the MSBuild plugin so that we could build the first round of C++ code. On the Jenkins build job we added a separate step for each of the C++ iterations, then one for each of the C# solutions, each using a different version of MSBuild. We added a command-line step to register the components that needed registering and another to copy the items that needed copying, and very quickly it became clear that we should’ve done this a long time ago. We had a bad build process, but it was a bad process that was now automated. The build quality went up immediately.
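Schematically, the job ended up as a chain of build steps along these lines (the solution names and step details here are invented for illustration, not our actual configuration):

```
1-4. MSBuild steps:     Legacy.sln        (C++ passes 1-4, until dependencies resolve)
5.   MSBuild step:      Services.sln      (C#, VS2005-era MSBuild)
6.   MSBuild step:      Clients.sln       (C#, VS2008/2010-era MSBuild)
7.   Command-line step: register the components that need registering
8.   Command-line step: copy build output to the drop location
9.   MSBuild step:      Legacy.sln        (C++ projects that reference C#)
```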

It really is easy to add steps to automate a build, even if the build process isn’t what it should be. With the time freed up by no longer building by hand, we could start improving the process itself. We moved the disparate solutions from the different versions of Visual Studio into one big solution in Visual Studio 2012 and, after ironing out the errors introduced by the new compiler, we started untangling the inter-project dependencies to reduce the number of iterations required to build everything.

We’ve gone from a build taking all day and being unreliable at best to a single pass that takes about 40 minutes and is identical every time. We freed up a resource, and that gave us more time to put back into the process.

It’s not perfect, but it’s a good start, and the sooner you start, the sooner you’ll realize you left it too long. After automating the build we had a bit of an uphill struggle to convince others to automate other parts, such as the installer build. We were rebuffed with the ironic chant of “We haven’t got time to automate the installer, we’ve got five different installs to create today”. It took weeks to convince people to bite the bullet, but after we spent a couple of days configuring a Jenkins slave to build the installer, we were met with gratitude that they now had about a day a week freed up for other things.

The Takeaway

I really can’t express how important it is to just do it. Get started. Now!

Setting up a Jenkins server isn’t rocket science; it doesn’t require big, beefy machines, and however complex your build process is, Jenkins will simplify it and make it repeatable.
