Great pieces of code

A lot of what I do day-to-day is related to optimisation. Be it Perl code, SQL queries, Javascript or HTML, I usually find at least a couple of cracking examples every week. On Friday I came across this:

SELECT cycle FROM goldcrest WHERE id_run = ?

This query is being used to find the latest cycle number (cycles run from 1 to 37 for each id_run) in a near-real-time tracking system, and it runs several times whenever a run report is viewed.

EXPLAIN SELECT cycle FROM goldcrest WHERE id_run = 231;
  
+----+-------------+-----------+------+---------------+---------+---------+-------+--------+-------------+
| id | select_type | table     | type | possible_keys | key     | key_len | ref   | rows   | Extra       |
+----+-------------+-----------+------+---------------+---------+---------+-------+--------+-------------+
|  1 | SIMPLE      | goldcrest | ref  | g_idrun       | g_idrun |       8 | const | 262792 | Using where |
+----+-------------+-----------+------+---------------+---------+---------+-------+--------+-------------+

In itself this would be fine, but the goldcrest table in this instance contains several hundred thousand rows for each id_run. So for id_run 231, say, this query happens to return approximately 588,000 rows just to determine that the latest cycle for run 231 is number 34.

To clean this up we first try something like this:

SELECT MIN(cycle),MAX(cycle) FROM goldcrest WHERE id_run = ?

which still scans the 588,000 rows (keyed on id_run, incidentally) but doesn’t actually return them to the user, only one row containing both values we’re interested in. Fair enough: the CPU and disk access penalties are similar, but the data transfer penalty is significantly improved.

Next I try adding an index against the id_run and cycle columns:

ALTER TABLE goldcrest ADD INDEX(id_run,cycle);
Query OK, 37589514 rows affected (23 min 6.17 sec)
Records: 37589514  Duplicates: 0  Warnings: 0

Now this of course takes a long time and, because the tuples are fairly redundant, creates a relatively inefficient index, also penalising future INSERTs. However, casually ignoring those facts, our query performance is now radically different:

EXPLAIN SELECT MIN(cycle),MAX(cycle) FROM goldcrest WHERE id_run = 231;
  
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                        |
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
|  1 | SIMPLE      | NULL  | NULL | NULL          | NULL |    NULL | NULL | NULL | Select tables optimized away |
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
  
SELECT MIN(cycle),MAX(cycle) FROM goldcrest WHERE id_run = 231;
+------------+------------+
| MIN(cycle) | MAX(cycle) |
+------------+------------+
|          1 |         37 |
+------------+------------+
  
1 row in set (0.01 sec)

That looks a lot better to me now!

Generally I try to steer clear of the mysterious internal workings of database engines, but I come across examples like this much more frequently:

sub clone_type {
  my ($self, $clone_type, $clone) = @_;
  my %clone_type;

  if($clone and $clone_type) {
    $clone_type{$clone} = $clone_type;
    return $clone_type{$clone};
  }

  return;
}

Thankfully this one’s pretty quick to figure out – they’re usually *much* more convoluted, but still.. Huh??

Pass in a clone_type scalar, create a local hash with the same name (Argh!), store the clone_type scalar in the hash keyed at position $clone, then return the same value we just stored.

I don’t get it… maybe a global hash or something else would make sense, but this works out the same:

sub clone_type {
  my ($self, $clone_type, $clone) = @_;

  if($clone and $clone_type) {
    return $clone_type;
  }
  return;
}

and I’m still not sure why you’d want to do that if you have the values on the way in already.

Programmers really need to think around the problem, not just through it. Thinking through may result in functionality, but thinking around results in both function and performance, which means a whole lot more in my book – and is, incidentally, why it seems so hard to hire good programmers.

OpenWRT WDS Bridging

I’ve had a pile of kit to configure recently for an office I’ve been setting up. Amongst the units I specified was the second Linksys WRT54GL I’ve had the opportunity to play with.

My own unit runs White Russian but for this one I took the plunge and went with the latest Kamikaze 7.09 release. It’s a little different to what I’d fiddled with before, but probably more intuitive to configure with files rather than nvram variables. I’m briefly going to describe how to configure a wired switch bridged to the wireless network, running WDS to the main site router (which serves DHCP and DNS).

From a freshly unpacked WRT54GL, connect the ethernet WAN uplink to your internet connection and one of the LAN downlinks to a usable computer. By default the WRT DHCPs the WAN connection and serves DHCP on the 192.168.1 subnet to its LAN.

Download the firmware to the computer, then log in to the WRT at 192.168.1.1 (default account admin/admin) and upload the image via the firmware upgrade form. Wait for the upload to finish and the router to reboot.

Once it’s rebooted you may need to refresh the DHCP lease on the computer, but the default subnet range is the same IIRC. Telnet to the router on the same address and log in as root, no password. Change the password; this enables the SSH service and disables the telnet service.

I personally prefer the X-Wrt interface with the Zephyr theme, so I install X-Wrt by editing /etc/ipkg.conf and appending “src X-Wrt http://downloads.x-wrt.org/xwrt/kamikaze/7.09/brcm-2.4/packages”. Back in the shell, run ipkg update ; ipkg install webif. Once that completes you should be able to browse to the router’s address (hopefully still 192.168.1.1) and continue the configuration. You may wish to install matrixtunnel for SSL support in the web administration interface.

I want to use this WRT both to extend the coverage of my client’s office wireless network and to connect a handful of wired devices (1 PC, 1 Edgestore NAS and an NSLU2).

So step one is to assign the router a LAN address on my existing network. The WAN port is going to be ignored (although bridging that in as well is probably possible). In X-Wrt, under Networks, I set a static IP of 192.168.1.253, a netmask of 255.255.255.0 and a default gateway of 192.168.1.254 – the existing main router, a BT Home Hub, which serves the LAN and whose wireless we’ll be bridging to. The LAN connection type is bridged. DNS in this case is the same as the main router. I’ve left the WAN as DHCP for convenience though the plan is not to use it. Save the settings and apply.

Under Wireless, turn the radio on and set the channel to match the main router. Choose lan as the network to bridge the wireless interface to, set the mode to Access Point, turn WDS on, set broadcast ESSID to your personal preference (I turned it on) and AP isolation off. The ESSID itself needs to be the existing name of your network, and the encryption settings need to match it too. Save and apply.

Now the magic bit: the WRT needs to know which existing AP to bridge to. I’m told the main AP’s MAC address should go in the BSSID box, which only seems to be present when the mode is set to WDS. Under the hood it’s done using the command wlc wds main-ap-mac-address-here, and if there’s no appropriate text box to put it in, it’s always possible to fiddle with the startup file. It’s a hack for sure but it seems to work ok for me!

Lo! A WDS bridge.

Update 2007-01-07: After installing the bridge on-site I had to reconfigure it in Client mode using the regular WDS settings, as that seemed to be the only way to make it communicate with the Home Hub. Pity – that way it doesn’t extend the wireless range, just hooks up anything wired to it. It worked fine when I set it up talking to my own WRT.

What can Bioinformatics learn from YouTube?

Caught Matt’s talk this morning at the weekly informatics group meeting.

There were general murmurings of agreement amongst the audience, but nobody asked the probing questions I’d hope for as a measure of genuine interest.

Matt touched upon microformats in all but name – I was really expecting a sell of http://bioformats.org/, websites as APIs and RESTful web services in particular.

Whilst I’m inclined to agree that standardised, discoverable, reusable web services are largely the way forward (especially as it keeps me in work) I’m not wholly convinced they remove the problems associated with, for example, database connections, database-engine specific SQL, hostnames, ports, accounts etc.

My feeling is that all the problems associated with keeping track of your database credentials are replaced by a different set of problems, albeit ones more standardised in terms of network protocols, namely HTTP and REST/CRUD. We now run the risk that, although the network protocols are fixed, the variation simply gets pushed higher up the stack and manifests as myriad web services, all different. All these new websites and services use different XML structures and different URL schemes. The XML structures are analogous to database table schemas and the URL schemes akin to table or object names.

At least these entities are now discoverable by the end user/developer simply by using the web application – and there’s the big win – transparency and discoverability. There’s also the whole microformat affair – once these really start to take off there’ll be all sorts of arguments about what goes into them, especially in domains like Bio and Chem, not covered by core formats like hCard. But that’s something for another day.

More over at Green Is Good

7 utilities for improving application quality in Perl

I’d like to share with you a list of what are probably my top utilities for improving code quality (style, documentation, testing) with a largely Perl flavour. In loosely important-but-dull to exciting-and-weird order…

Test::More. Billed as yet another framework for writing test scripts, Test::More extends Test::Simple and provides a bunch of more useful methods beyond Simple’s ok(). The ones I use most are use_ok() for testing compilation, is() for testing equality and like() for testing similarity with regexes.
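
A minimal test script using those three looks something like this – My::Widget, answer() and greeting() are made-up stand-ins for whatever you’re actually testing:

#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 3;

use_ok('My::Widget');                                # does the module compile and load?
is(My::Widget::answer(), 42, 'answer() gives 42');   # exact equality
like(My::Widget::greeting('world'), qr/world/i, 'greeting() mentions its argument');   # regex match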

ExtUtils::MakeMaker. Another one of Mike Schwern’s babies, MakeMaker is used to set up a folder structure and associated ‘make’ paraphernalia when first embarking on writing a module or application. Although developers these days tend to favour Module::Build over MakeMaker, I prefer it for some reason (probably fear of change) and still get regular mileage out of it.
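
For reference, a Makefile.PL needn’t be much more than this sketch (module name and paths hypothetical) – perl Makefile.PL && make test does the rest:

use strict;
use warnings;
use ExtUtils::MakeMaker;

WriteMakefile(
  NAME          => 'My::Widget',
  VERSION_FROM  => 'lib/My/Widget.pm',      # pulls $VERSION out of the module itself
  PREREQ_PM     => { 'Test::More' => 0 },   # modules required for building and testing
);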

Test::Pod::Coverage – what a great module! Check how good your documentation coverage is with respect to the code. No, just a subroutine header won’t do! I tend to use Test::Pod::Coverage as part of…

Test::Distribution. Automatically runs a battery of standard tests including pod coverage, manifest integrity, straight compilation and a load of other important things.
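
A single test file along these lines pulls in the whole battery – the BEGIN/eval guard is only there so the suite still passes on machines without the module installed:

# t/00-distribution.t
use strict;
use warnings;
use Test::More;

BEGIN {
  eval { require Test::Distribution; };
  if($@) {
    plan skip_all => 'Test::Distribution not installed';
  }
  else {
    Test::Distribution->import();
  }
}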

perlcritic, Test::Perl::Critic. The Perl::Critic set of tools is amazing. It’s built on PPI and implements the Perl Best Practices book by Damian Conway. Now I realise that not everyone agrees with a lot of what Damian says, but the point is that it represents a standard to work to (and it’s not that bad once you’re used to it). Since I discovered perlcritic I’ve been developing all my code as close to perlcritic -1 (the strictest level) as I can. It’s almost instantly made my applications more readable through consistent appearance and made faults easier to spot even before Test::Perl::Critic comes in.
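
Hooking it into the test suite is a one-file job, roughly like this sketch – severity 1 being the strictest setting; relax it, or point a -profile at a perlcriticrc, to taste:

# t/perlcritic.t
use strict;
use warnings;
use Test::More;

eval {
  require Test::Perl::Critic;
  Test::Perl::Critic->import(-severity => 1);
};
plan skip_all => 'Test::Perl::Critic not installed' if $@;

all_critic_ok();   # critiques everything under blib/ (or lib/)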

Devel::Cover. I’m almost ashamed to say I only discovered this last week after dipping into Ian Langworth and chromatic’s book ‘Perl Testing’. Devel::Cover gives code exercise metrics, i.e. how much of your module or application was actually executed by that test. It collates stats from all modules matching a user-specified pattern and dumps them out in a natty coloured table, very suitable for tying into your CI system.

Selenium. Ok, not strictly speaking a tool I’m using right this minute, but it’s next on my list of integration tools. Selenium is a non-interactive, automated, browser-testing framework written in Javascript. This tool definitely has legs and it seems to have come a long way since I first found it in the middle of 2006. I’m hoping to have automated interface testing up and running before the end of the year as part of the Perl CI system I’m planning on putting together for the new sequencing pipeline.

Hiring Perl Developers – how hard can it be?

All the roles I’ve had during my time at Sanger have more or less required the development of production quality Perl code, usually OO and increasingly using MVC patterns. Why is it then that very nearly every Perl developer I’ve interviewed in the past 8 years is woefully lacking, specifically in OO Perl but more generally in half-decent programming skills?

It’s been astonishing, not in a good way, how many have been unable to demonstrate use of hashes. Some have been too scared of them (their words, not mine) and some have never felt the need. For those of you who aren’t Perl programmers, hashes (aka associative arrays) are a pretty crucial feature of the language and fundamental to its OO implementation.
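
For the non-Perl readers, this is the sort of thing I mean – a plain hash used as a record, and the blessed hashref that most Perl OO is built on. The names are throwaway examples:

#!/usr/bin/perl
use strict;
use warnings;

# a hash as a simple record
my %run = (
  id_run => 231,
  cycle  => 37,
);
print "run $run{id_run} is at cycle $run{cycle}\n";

# the same structure, blessed, becomes an object
package Run;
sub new {
  my ($class, %args) = @_;
  return bless { %args }, $class;   # an object is just a blessed hashref
}
sub cycle { my $self = shift; return $self->{cycle}; }

package main;
my $run = Run->new(id_run => 231, cycle => 37);
print 'latest cycle: ', $run->cycle(), "\n";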

Now I program in Perl sometimes more than 7-8 hours a day. For many years this also involved reworking other people’s code. I can very easily say that if you claim to be a Perl programmer and have never used hashes then you’re not going to get a Perl-related job because of your technical skills. With a good, interactive and engaging personality and a desire for self-improvement you might get away with it, but certainly not on technical merit.

It’s also quite worrying how many of these interviewees are unable to describe the basics of object-oriented programming yet have, for example, developed and sold a commercial ERP system, presumably for big bucks. Man, these people must have awesome marketing!

Frankly a number of the bioinformaticians already working there have similar skills to the interviewees and often worse communication skills, so maybe I’m simply setting my standards too high.

I really hope this situation improves when Perl 6 goes public though I’m sure it’ll take longer to become common parlance. As long as it happens before those smug RoR types take over the world I’ll be happy ;)

DECIPHERing Large-scale Copy-Number Variations

It’s strange… Since moving from the core Web Team at Sanger to Sequencing Informatics I’ve been able to reduce my working hours from ~70-80/week all the way down to the 48.5 hours which are actually in my contract.

In theory this means I’ve more spare time, but in reality I’ve been able to secure sensible contract work outside Rentacoder, which I’ve relied on in the past.

The work in question is optimising and refactoring for the DECIPHER project, whose technical side I used to manage whilst in the web team.

DECIPHER is a database of large-scale copy number variations (CNVs) from patient arrayCGH data curated by clinicians and cytogeneticists around the world. DECIPHER represents one of the first clinical applications to come out of the HGP data from Sanger.

What’s exciting apart from the medical implications of DECIPHER’s joined-up thinking is that it also represents a valuable model for social, clinical applications in the Web 2.0 world. The application draws in data from various external sources as well as its own curated database. It primarily uses DAS via Bio::Das::Lite and Bio::Das::ProServer and I’m now working on improving interfaces, interactivity and speed by leveraging MVC and SOA techniques with ClearPress and Prototype.
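
For a flavour of the DAS side, a feature fetch with Bio::Das::Lite looks roughly like the sketch below – the DSN and segment are placeholders and the option names are from memory, so check the module’s POD before trusting it:

use strict;
use warnings;
use Bio::Das::Lite;
use Data::Dumper;

my $das = Bio::Das::Lite->new({
  dsn     => 'http://das.example.org/das/mysource',   # placeholder DAS source
  timeout => 30,
});

# fetch features for a chromosomal segment; results come back in a
# hashref keyed by the request URL, one entry per source queried
my $features = $das->features('1:10000,20000');
print Dumper($features);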

It’s a great opportunity for me to keep contributing to one of my favourite projects and hopefully implement a load of really neat features I’ve wanted to add for a long time. Stay tuned…

VoIP peering & profits

So… shortly – from February next year, I believe, though I’m probably mistaken – prices in the UK go up for calling “Lo-Call” 0845 numbers. As I understand it they’ll be similar to, or the same as, 0870 rates at 20p/min or so.

Now I wonder if the regulator has missed a trick here. It so happens that the nation is converting to broadband, be it ADSL or cable-based, and that very many of those broadband packages now come with VoIP offerings as standard.

My point is that these bundled broadband VoIP packages invariably come with 0845 dial-in numbers and no other choice. Dialing out via your broadband ISP may well be cheap for you but spare a thought for those calling in at much higher rates.

Having been tinkering with VoIP for a good few years I realise that actually this should be ok because calling VoIP-to-VoIP should be free, right? Wrong. Most of these ISPs don’t peer with each other’s networks – for two main reasons as far as I can see:

  1. They’re competitors and have little business reason to peer, apart from keeping the small proportion of aware customers happy.
  2. These ISPs make profits from users dialing in – 0845 is a profit-sharing prefix in which both BT and the ISP in question have a stake. This old story is of course also true of many telephone help-desks and similar. Keeping the customer on the line longer means more profits for the company and its shareholders.

It seems to me that the world could be a better, more communicative place through more thorough VoIP network peering but I simply can’t see it becoming widespread whilst profits stand in the way.

The Simplest of Organisation

Ever since I started implementing SCRUM for my application development at work, friends of mine have expressed an interest in the way it works.

Recently even people passing through my office – there to talk to my colleagues, and whom I don’t know very well – have been remarking on the backlogs which are displayed in a prominent position above my desk. I think they’re impressed by the simplicity of the system and how effective it seems to be for me.

I must admit my backlogs are simpler than the full blown setup. As I’m still in the process of hiring, I currently only really develop alone so I’m not bothering with the intermediate item-in-progress stickies.

I also have tasks organised in a 2-dimensional area with axes for complexity and importance. Although sprint backlog tasks are prioritised by my customers, it’s been proving useful to have my take on these attributes displayed spatially rather than just writing ‘3 days’ on the ticket.

In fact I keep my product backlog organised this way as well, as soon as tickets come in. It allows me to relay my take on the tasks to the customers straight away, whether or not we’re building a sprint backlog at the time. When a sprint has finished the product backlog is reorganised to take account of any changes, e.g. to infrastructure, affecting the tasks.

Picking up momentum

It seems people are fairly taken with the BarCamb idea. It’s been lightly advertised internally at Sanger and has been picking up some interest via that and also on the Upcoming page.

I wonder how many of the people already signed up actually have something to present. Having been at the WTSI for nearly eight years now I’ve a number of things I could talk about; it’s just a case of deciding which of them would be most interesting for people, and that really depends on where attendees are coming from.

So… one or more of the following things I’ve been working on recently – Bio::Das::Lite & Bio::Das::ProServer, ClearPress or the new sequencing technology. Now I’m not a biologist or a chemist, either by trade or by hobby, and I’m pretty certain that talking about NST is going to be asking for a whole bunch of biology- and chemistry-question trouble. I guess DAS-related things are the most useful to present as they have the widest scientific application.

Though there’s nothing like a good bit of self-promotion so maybe something short on ClearPress would be a good thing too. Might need to improve the application builder and test-suite a bit more for that.

In related news, not wanting to be outdone by Matt’s BarCamb, I co-authored and submitted a venue proposal for YAPC::Europe 2008 last week. Woohoo! Nail-biting stuff. The genome campus would be a great place to host it for all sorts of reasons – an integrated and well-supported conference centre, secured financial commitment, great science to talk about and a tremendous Perl resource to tap into, just to list a few.

All I need to do now is submit my travel application for YAPC::Europe Vienna later this year and see how it’s done (again). It’s been a while since I’ve been to a YAPC::Europe!

Barcamp Cambridge

So… BarCamp Cambridge, or BarCamb as we’re affectionately calling it, is definitely green for go.

To be hosted at the Wellcome Trust Sanger Institute near Cambridge, it’s hopefully going to be a day of grass-roots science and technology talks on the 24th of August. That’s two months away as of last Sunday, so plenty of time to unorganise it.

Should be interesting and I think I’m looking forward to it though I’m not sure what to expect. It could, of course, be an utter disaster, but what better area to have it than Cambridge, and what better site than the Genome Campus, however biased I might be?

I always dread saying this, but “more coming soon” I hope!