Apache Forward-Proxy REMOTE_ADDR propagation

I had an interesting problem this morning with the Apache forward-proxy supporting the WTSI sequencing farm.

It would be useful for the intranet run-tracking service to know which (GA2) sequencer is requesting pages, but because the sequencers are on a dedicated subnet they have to use a forward-proxy to fetch pages (and then only from intranet services).

Now I’m very familiar with the X-Forwarded-For header and the HTTP_X_FORWARDED_FOR environment variable (and their friends), which do something very similar for reverse-proxies, but forward-proxies usually want to disguise the fact that there’s an arbitrary number of clients behind them – usually with irrelevant RFC1918 private IP addresses too.

So what I want to do is slightly unusual – take the remote address of the client and stuff it into a different header. I could use X-Forwarded-For but it doesn’t feel right. The Via header (cf. the ProxyVia directive) isn’t right here either, as that’s really for the proxy servers themselves. So I figured mod_headers on the proxy would allow me to add additional headers to the request, even though it’s forwarded on. Also, following a tip I saw here using my favourite mod_rewrite, and after a bit of fiddling, I came up with this:

#########
# copy remote addr to an internal variable
#
RewriteEngine  On
RewriteCond  %{REMOTE_ADDR}  (.*)
RewriteRule   .*  -  [E=SEQ_ADDR:%1]

#########
# set X-Sequencer header from the internal variable
#
RequestHeader  set  X-Sequencer  %{SEQ_ADDR}

These rules sit in the container managing my proxy, after ProxyRequests and ProxyVia and before a small set of ProxyMatch restrictions.

The RewriteCond captures the contents of the REMOTE_ADDR variable (it’s not an HTTP header – it comes from the remote end of the network socket as determined by the server). The RewriteRule unconditionally copies the last RewriteCond match, %1, into a new environment variable, SEQ_ADDR. After this, mod_headers sets the X-Sequencer request header (on the proxied request) to the value of the SEQ_ADDR environment variable.

This works very nicely though I’d have hoped a more elegant solution would be this:

RequestHeader set X-Sequencer %{REMOTE_ADDR}

but this doesn’t seem to work and I’m not sure why (presumably because REMOTE_ADDR isn’t available to mod_headers as an environment variable at that point, which is what the mod_rewrite step provides). Anyway, by comparing $ENV{HTTP_X_SEQUENCER} against a shared lookup table, the sequencing apps running on the intranet can now track which sequencer is making requests. Yay!
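For illustration, here’s a minimal CGI-style sketch of the intranet side – the IP addresses and sequencer names in the lookup table are entirely made up:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical lookup table mapping proxy-reported addresses to sequencer names
my %sequencer_for = (
    '172.20.1.11' => 'GA2-01',
    '172.20.1.12' => 'GA2-02',
);

my $addr      = $ENV{HTTP_X_SEQUENCER} || q[];
my $sequencer = $sequencer_for{$addr}  || 'unknown';

print "Content-type: text/plain\n\n";
print "Request made by sequencer $sequencer ($addr)\n";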

Massively Parallel Sequence Archive

For some time now at Sanger we’ve been looking at the problems and solutions involved in building services to support what are likely to become some of the biggest databases on the planet. The biggest problem is that there aren’t many people doing this kind of thing who are willing to talk about it.

The data we’re storing falls into two categories: Short Read Format (SRF) files containing sequence, quality and trace data (~10Gb per lane), and FastQ files containing sequence and quality data (~1Gb per lane).

Our requirements for these data are fundamentally for two different systems. One is a long-term archival system for SRF, responsibility for which will eventually shift to the EBI. The second is, for me at least, the more interesting system –

The short-term storage of reads and qualities (and possibly also for selected alignments) isn’t the biggest problem – that honour is left to the fast, parallel retrieval of the same. The underlying data store needs to grow at a respectable 12TB per year and serve maybe a hundred simultaneous users requesting up to 1000 sequences per second.

Transfer times for reads are small but as a result are disproportionately affected by artefacts like TCP setup times, HTTP header payloads and certainly index seek times.

We’re looking at a few horizontally-scaling solutions for performing these kinds of jobs – the most obvious are tools like MapReduce and equivalents like Hadoop running with Nutch. My personal favourite, and the one I’m holding out for, is MogileFS from the same people who brought you Memcached. Time to get benchmarking!
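To give a flavour of what the MogileFS route looks like from Perl, here’s a minimal sketch using MogileFS::Client from CPAN – the domain, storage class, keys and tracker addresses are all invented for the example:

use strict;
use warnings;
use MogileFS::Client;

# Hypothetical trackers, domain and storage class
my $mogc = MogileFS::Client->new(
    domain => 'shortreads',
    hosts  => [ '10.0.0.1:7001', '10.0.0.2:7001' ],
);

# Store a lane's FastQ under a predictable key...
$mogc->store_file('run231_lane3.fastq', 'fastq', '/tmp/run231_lane3.fastq')
    or die 'store failed: ' . $mogc->errstr;

# ...and later ask the trackers where the replicas live, ready for parallel HTTP fetches
my @paths = $mogc->get_paths('run231_lane3.fastq');
print "$_\n" for @paths;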

Updated: Loved this via Brad

Infrared Pen MkI

So, this evening, not wanting to spend more time on the computer (having been on it all day for day 2 of DB’s Rails course) I spent my time honing my long-unused soldering skills and constructing the first revision of my infrared marker pen for the JCL-special Wiimote Whiteboard.

The raw materials
Close-up of the LEDs I’m removing
The finished article
Close-up of the switch detail
Activated under the IR-sensitive digital camera

I must say it’s turned out ok. I didn’t have any spare small switches so went for a bit of wire with enough springiness in it. On the opposite side of the makeshift switch is a retaining screw for holding the batteries in. I’m using two old AAA batteries (actually running about 2.4V according to the meter) and no resistor in series. The LED hasn’t burnt out yet!

To stop the pen switching on when not in use I slip a bit of electrical tape between the contacts. Obviously you can’t tell when it’s on unless you add another, perhaps miniature, visible indicator LED.

It all fits together quite nicely though the retaining screw is too close for the batteries and has forced the back end out a bit – that’s easy to fix.

As I’m of course after multitouch I’ll be building the MkII pen soon with the other recovered LED!

Web Frameworking

It seems to be the wrong time to be reading such things, but over on InfoQ there’s a nice article introducing web development of RESTful services using Erlang and the Yaws high performance web server.

I say “the wrong time” as this week has kicked off the “Advancing with Rails” course by David A. Black of Ruby Power and Light fame. The course is fairly advanced in terms of required Rails knowledge, so it’s a bit of a baptism by fire for me and a few others who have never written any Ruby before.

Rails is proving moderately easy to pick up but, as I’ve remarked to a couple of people, it doesn’t seem any easier coding with Rails than with Perl. Perhaps it’s because I’ve never done it before, but I reckon it’s a lot harder spending my time figuring out how the heck DHH meant something to be done than it is doing it myself.

Even though it’s nowhere near as mature, I do reckon my ClearPress framework has a lot going for it – it’s pretty feature-complete in terms of ORM, views and templating (TT2). It has similar convention-over-configuration features, meaning it’s not designed for plugging in alternative layers, but that is absolutely possible to do (and, I suspect, without as much effort as is required in Rails). I still need to iron out some wrinkles in the code autogenerated by the application builder and provide some default authorisation and authentication mechanisms, some of which may come in the next release. In the meantime it’s easy to add these features, which is exactly what we’ve done for the new sequencing run tracking app, NPG, to tie it to the WTSI website single sign-on (MySQL and LDAP under the hood).
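To give a taste of the templating side, here’s a small, self-contained TT2 example – plain Template Toolkit rather than anything ClearPress-specific, and the run/lane data structure is made up for the purpose:

use strict;
use warnings;
use Template;

my $tt = Template->new() or die Template->error();

# A hypothetical run-report fragment
my $template = <<'EOT';
Run [% run.id_run %]
[% FOREACH lane IN run.lanes -%]
  lane [% lane.position %]: [% lane.cycles %] cycles
[% END -%]
EOT

my $data = {
    run => {
        id_run => 231,
        lanes  => [ { position => 1, cycles => 37 },
                    { position => 2, cycles => 37 } ],
    },
};

$tt->process(\$template, $data, \my $output) or die $tt->error();
print $output;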

Interactivity Experiments

For a few months now I’ve been watching utterly compelling and inspirational HCI things like these:

I know most of them are a bit dated now, in fact from as far back as 2006, but they’re still jaw-droppingly awesome.

So, in a fit of inspiration, weekend-project madness and frustration at the clumsiness of a regular touch-screen LCD, I’ve been picking up things from Ebay and fishing around in my boxes of knackered electronics to find components suitable for assembling one or two of these sorts of devices.

There are two types of these interactive interfaces. The first is the JCL-style wiimote-based kind, which uses bright sources of infrared, either transmitted or reflected, together with the bluetooth Nintendo controller. The second is the Jeff Han / Perceptive Pixel style of frustrated total internal reflection, or FTIR, where infrared is reflected out of a planar surface and picked up by a camera similar to the one in the wiimote.

Anyway, costs so far:

Wiimote: ~£28; old infrared remote control for filters & LEDs: free;

Philips bSure XG2 projector: ~£180; Philips SPC900NC: ~£30; 4.3mm CCTV lens (no IR filter): ~$12

I’ve been having trouble making the bluetooth pairing for the wiimote work correctly under OSX 10.3.9 – I think it’s about time I had the laptop upgraded (it’s work’s, after all) and that should fix things on OSX. I have had some success elsewhere though – this evening under Ubuntu, with the Bluez stack and libwiimote, I’ve been able to capture events from the wiimote, including spots via the IR camera. I’ve also been successful using camstream with the SPC900NC and CCTV lens to capture spots from working TV remotes, both directly and reflected from a wall – it’s surprisingly effective!

More to come – next, for the wiimote interface, I need to build my whiteboard-marker battery-driven IR LED pen; then for the FTIR display I need to experiment with a few different types of perspex and rear-reflection material. I *really* want to be able to perform pattern recognition similar to the reactable and I don’t think tracing paper will work for rear-projection. Knowing next to nothing about plastics technology, I think I’d like to try frosted acrylic first, or maybe just finely-sanded regular acrylic. Ebay here I come again!

Development Communications

For a while now, more or less since I switched teams (from Core Web to Sequencing Informatics), I’ve wanted to write more about the work we do at Sanger. There’s so much of it which is absolutely cutting-edge research, and a very large proportion of that is poorly communicated both inside and outside the institute. Most of it’s biology of course, which I know little about and couldn’t discuss in detail, GCSE being the furthest I took things in that direction.

However some of the great advances have been in big IT. We’re in the same ballpark as CERN’s high-energy physics and NASA’s astronomical data. Technology is something I understand and can talk about here.

So… I run the new sequencing technology pipeline development team. This means I and my team are responsible for ensuring efficient use of the Sanger’s heavy investment in massively parallel sequencing instruments, primarily 28 Illumina Genome Analyzers. To do this we have a farm of 608 cores, a mix of 4- and 8-core Opteron blades with 8Gb RAM and a 320Tb shared Lustre filesystem. It seems to be becoming easy for users and administrators at Sanger to toss these figures around but the truth of the matter is that whilst this kit fits in only a handful of racks, it’s still a pretty big deal.

The blades run Linux, Debian Etch to be precise. The Illumina-distributed analysis pipeline (itself a mix of Perl, Python and C++) is held together with Perl applications (web and batch) which also cooperate RESTfully with a series of Rails LIMS applications developed by the Production Software team.

Roughly a terabyte of image data is spun off each of the 28 instruments every 2-3 days. The images are stacked and aligned and sequences are basecalled from spot intensities. These short reads are then packaged up with quality values for each base and dropped into approximately 100Mb compressed result files ready for further secondary analysis (e.g. SNP-calling).

More to come later but for now the take-home message is that the setup we’re using is in my opinion a fair triumph, and definitely one to be proud of. It’s been a (fairly) harmonious marriage of tremendous hardware savvy from the systems group and the rapid turnaround of agile software development from Sequencing Informatics, of which I’m pleased to be a part.

The Importance of Profiling

I’ve worked as a software developer, and with teams of software developers, for around 10 years now. Many of those I’ve worked with have earned my trust and respect in relation to development and testing techniques. Frustratingly, however, it’s still with irritating regularity that I hear throwaway comments borne of uncertainty and ignorance.

A couple of times now I’ve specifically been told that “GD makes my code go slow”. For those of you not in the know, GD (specifically Lincoln Stein’s GD.pm in Perl) is a wrapper around Tom Boutell’s most marvellous libgd graphics library. The combination of the two has always performed excellently for me and has never been the bottleneck in any of my applications. The applications in question are usually database-backed web applications with graphics components for plotting genomic features or charts of one sort or another.
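For the record, a minimal GD.pm sketch of the sort of plotting involved – the data values are invented, and in the real applications they’d come from the database, which rather proves the next point:

use strict;
use warnings;
use GD;

# Hypothetical intensity values; in practice these come back from a database query
my @intensities = (10, 40, 25, 60, 35);

my $img   = GD::Image->new(120, 80);
my $white = $img->colorAllocate(255, 255, 255);    # first colour allocated becomes the background
my $blue  = $img->colorAllocate(0, 0, 255);

# A trivial bar chart: one filled rectangle per data point
my $x = 10;
for my $value (@intensities) {
    $img->filledRectangle($x, 70 - $value, $x + 15, 70, $blue);
    $x += 20;
}

binmode STDOUT;
print $img->png();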

As any database-application developer will tell you, the database, or network connection to the database is almost always the bottleneck in an application or service. Great efforts are made to ensure database services scale well and perform as efficiently as possible, but even after these improvements are made they usually simply delay the inevitable.

Hence my frustration when I hear that “GD is making my (database) application go slow”. How? Where? Why? Where’s the proof? It’s no use blaming something, a library in this case, that’s out of your control. It’s hard to believe a claim like that without some sort of measurement.

So, before pointing the finger, profile the code and make an effort to understand what the profiler is telling you. In database applications, profile your queries – use EXPLAIN, add indices, record SQL transcripts and time the results. Then profile the code which is manipulating those results.
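As a minimal sketch of measuring rather than guessing, the core Benchmark module will happily compare two candidate chunks of code – the two subs here are toy stand-ins; a real investigation would reach for a profiler (perl -d:DProf script.pl and dprofpp, for example) to get per-subroutine timings:

use strict;
use warnings;
use Benchmark qw(cmpthese);

# Toy example data and two stand-in implementations to compare
my @fields = map { "value$_" } 1 .. 1_000;

cmpthese(-2, {
    concat => sub {
        my $row = q[];
        $row .= "$_\t" for @fields;
        return $row;
    },
    join => sub {
        return join "\t", @fields;
    },
});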

Once the results are in, of course, concentrate in the first instance on the parts with the most impact (e.g. 0.1 seconds off each iteration of a 1000x loop rather than 1 second from int main()) – the low-hanging fruit. Good programmers should be relatively lazy, and speeding up code with the least amount of effort should be common sense.

Great pieces of code

A lot of what I do day-to-day is related to optimisation. Be it Perl code, SQL queries, Javascript or HTML there are usually at least a couple of cracking examples I find every week. On Friday I came across this:

SELECT cycle FROM goldcrest WHERE id_run = ?

This query is used to find the latest cycle number (between 1 and 37 for each id_run) in a near-real-time tracking system and is run several times whenever a run report is viewed.

EXPLAIN SELECT cycle FROM goldcrest WHERE id_run = 231;
  
+----+-------------+-----------+------+---------------+---------+---------+-------+--------+-------------+
| id | select_type | table     | type | possible_keys | key     | key_len | ref   | rows   | Extra       |
+----+-------------+-----------+------+---------------+---------+---------+-------+--------+-------------+
|  1 | SIMPLE      | goldcrest | ref  | g_idrun       | g_idrun |       8 | const | 262792 | Using where |
+----+-------------+-----------+------+---------------+---------+---------+-------+--------+-------------+

In itself this would be fine, but the goldcrest table in this instance contains several thousand rows for each id_run. So for id_run 231, say, this query happens to return approximately 588,000 rows just to determine that the latest cycle for run 231 is number 34.

To clean this up we first try something like this:

SELECT MIN(cycle),MAX(cycle) FROM goldcrest WHERE id_run = ?

which still scans the 588,000 rows (keyed on id_run, incidentally) but doesn’t actually return them to the user – only one row containing both values we’re interested in. Fair enough: the CPU and disk-access penalties are similar, but the data-transfer penalty is significantly improved.

Next I try adding an index against the id_run and cycle columns:

ALTER TABLE goldcrest ADD INDEX(id_run,cycle);
Query OK, 37589514 rows affected (23 min 6.17 sec)
Records: 37589514  Duplicates: 0  Warnings: 0

Now this of course takes a long time and, because the tuples are fairly redundant, creates a relatively inefficient index, also penalising future INSERTs. However, casually ignoring those facts, our query performance is now radically different:

EXPLAIN SELECT MIN(cycle),MAX(cycle) FROM goldcrest WHERE id_run = 231;
  
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                        |
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
|  1 | SIMPLE      | NULL  | NULL | NULL          | NULL |    NULL | NULL | NULL | Select tables optimized away |
+----+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
  
SELECT MIN(cycle),MAX(cycle) FROM goldcrest WHERE id_run = 231;
+------------+------------+
| MIN(cycle) | MAX(cycle) |
+------------+------------+
|          1 |         37 |
+------------+------------+
  
1 row in set (0.01 sec)

That looks a lot better to me now!
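On the application side, the report code can then run the indexed MIN/MAX query with a bound placeholder – a minimal DBI sketch, with the DSN and credentials as placeholders:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=tracking;host=dbhost',
                       'user', 'pass', { RaiseError => 1 });

# prepare_cached means repeated report views reuse the statement handle
my $sth = $dbh->prepare_cached(
    q[SELECT MIN(cycle), MAX(cycle) FROM goldcrest WHERE id_run = ?]
);
$sth->execute(231);
my ($first_cycle, $latest_cycle) = $sth->fetchrow_array();
$sth->finish();

print "run 231: cycles $first_cycle to $latest_cycle\n";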

Generally I try to steer clear of the mysterious internal workings of database engines, but with much greater frequency come across examples like this:

sub clone_type {
  my ($self, $clone_type, $clone) = @_;
  my %clone_type;

  if($clone and $clone_type) {
    $clone_type{$clone} = $clone_type;
    return $clone_type{$clone};
  }

  return;
}

Thankfully this one’s pretty quick to figure out – they’re usually *much* more convoluted, but still.. Huh??

Pass in a clone_type scalar, create a local hash with the same name (Argh!), store the clone_type scalar in the hash keyed at position $clone, then return the same value we just stored.

I don’t get it… maybe a global hash or something else would make sense, but this works out the same:

sub clone_type {
  my ($self, $clone_type, $clone) = @_;

  if($clone and $clone_type) {
    return $clone_type;
  }
  return;
}

and I’m still not sure why you’d want to do that if you have the values on the way in already.

Programmers really need to think around the problem, not just through it. Thinking through may result in functionality, but thinking around results in both function and performance, which means a whole lot more in my book – and is, incidentally, why it seems so hard to hire good programmers.

OpenWRT WDS Bridging

I’ve had a pile of kit to configure recently for an office I’ve been setting up. Amongst the units I specified was the second Linksys WRT54GL I’ve had the opportunity to play with.

My own one runs White Russian, but I took the plunge and went with the latest Kamikaze 7.09 release. It’s a little different to what I’d fiddled with before, but probably more intuitive to configure, using files rather than nvram variables. I’m briefly going to describe how to configure a wired switch bridged to the wireless network, running WDS to the main site router (which serves DHCP and DNS).

From a freshly unpacked WRT54GL, connect the ethernet WAN uplink to your internet connection and one of the LAN downlinks to a usable computer. By default the WRT DHCPs the WAN connection and serves DHCP on the 192.168.1 subnet to its LAN.

Download the firmware to the computer, then log in to the WRT on 192.168.1.1 (default account admin/admin) and upload the image via the firmware upgrade form. Wait for the upload to finish and the router to reboot.

Once it’s rebooted you may need to refresh the DHCP lease on the computer, but the default subnet range is the same, iirc. Telnet to the router on the same address and log in as root, no password. Changing the password enables the SSH service and disables the telnet service.

I personally prefer the X-Wrt interface with the Zephyr theme, so I install X-Wrt by editing /etc/ipkg.conf and appending “src X-Wrt http://downloads.x-wrt.org/xwrt/kamikaze/7.09/brcm-2.4/packages”. Back in the shell, run ipkg update; ipkg install webif. Once that completes you should be able to browse to the router’s address (hopefully still 192.168.1.1) and continue the configuration. You may wish to install matrixtunnel for SSL support in the web administration interface.

I want to use this WRT both to extend the coverage of my client’s office wireless network and to connect a handful of wired devices (1 PC, 1 Edgestore NAS and an NSLU2).

So step one is to assign the router a LAN address on my existing network. The WAN port is going to be ignored (although bridging that in as well is probably possible too). In X-Wrt, under Networks, I set a static IP of 192.168.1.253, a netmask of 255.255.255.0 and a default gateway of 192.168.1.254 – the existing main router, a BT Home Hub serving the LAN, whose wireless we’ll be bridging to. The LAN connection type is bridged. DNS in this case is the same as the main router. I’ve left the WAN as DHCP for convenience, though the plan is not to use it. Save the settings and apply.

Under Wireless, turn the radio on and set the channel to the same as the main router. Choose lan as the network to bridge the wireless to, set the mode to Access Point, WDS on, ESSID broadcast to your personal preference (I set it on) and AP isolation off. The ESSID itself needs to be the existing name of your network, with encryption set appropriately to match. Save and apply.

Now the magic bit: the WRT needs to know which existing AP to bridge to. I’m told the AP’s MAC address should go in the BSSID box, which only seems to be present when the mode is set to WDS. Under the hood it’s done using the command wlc wds main-ap-mac-address-here, and without an appropriate text box to put it in it’s almost always possible to fiddle with the startup file instead. It’s a hack for sure but it seems to work ok for me!

Lo! A WDS bridge.

Update 2007-01-07: After installing the bridge on-site I had to reconfigure it in Client mode rather than with the regular WDS settings, as that seemed to be the only way to make it communicate with the Homehub. A pity – that way it doesn’t extend the wireless range, it just hooks up anything wired to it. It worked fine when I set it up talking to my own WRT.

What can Bioinformatics learn from YouTube?

Caught Matt’s talk this morning at the weekly informatics group meeting.

There were general murmurings of agreement amongst the audience, but nobody asked the probing questions I’d hoped for as a measure of interest.

Matt touched upon microformats in all but name – I was really expecting a sell of http://bioformats.org/, websites as APIs and RESTful web services in particular.

Whilst I’m inclined to agree that standardised, discoverable, reusable web services are largely the way forward (especially as it keeps me in work) I’m not wholly convinced they remove the problems associated with, for example, database connections, database-engine specific SQL, hostnames, ports, accounts etc.

My feeling is that all the problems associated with keeping track of your database credentials are replaced by a different set of problems, albeit more standardised in terms of network protocols – HTTP and REST/CRUD. We now run the risk that what’s fixed at the network-protocol level is pushed higher up the stack and manifests as myriad web services, all different. All these new websites and services use different XML structures and different URL schemes. The XML structures are analogous to database table schemas and the URL schemes akin to table or object names.
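For what it’s worth, the client side of such a service is no more onerous than juggling database credentials – a minimal LWP::UserAgent sketch, where the URL scheme and the XML that comes back are entirely hypothetical:

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new( timeout => 10 );

# Hypothetical RESTful resource for a sequencing run
my $response = $ua->get('http://intranet.example.org/runs/231.xml');

if ($response->is_success) {
    print $response->decoded_content();
} else {
    die 'Request failed: ' . $response->status_line();
}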

At least these entities are now discoverable by the end user/developer simply by using the web application – and there’s the big win – transparency and discoverability. There’s also the whole microformat affair – once these really start to take off there’ll be all sorts of arguments about what goes into them, especially in domains like Bio and Chem, not covered by core formats like hCard. But that’s something for another day.

More over at Green Is Good