Proxy testing with IP Namespaces and GitLab CI/CD


At work, I have a CLI tool I’ve been working on. It talks to the web and is used by customers all over the planet, some of them on networks with tighter restrictions than my own. Often those customers have an HTTP proxy of some sort and that means the CLI application needs to negotiate with it differently than it would directly with a web server.

So I need to test it somehow with a proxy environment. Installing a proxy service like Squid doesn’t sound like too big a deal but it needs to run in several configurations, at a very minimum these three:

  • no-proxy
  • authenticating HTTP proxy
  • non-authenticating HTTP proxy

I’m going to ignore HTTPS proxy for now as it’s not actually a common configuration for customers but I reckon it’s possible to do with mkcert or LetsEncrypt without too much work.

There are two other useful pieces of information to cover. Firstly, I use GitLab CI to run the test stages for the three proxy configurations in parallel. Secondly, and this is important, I must make sure that once the test Squid proxy service is running, the web requests in the test only pass through the proxy and do not leak out of the GitLab runner. I can do this by using a really neat Linux feature called IP namespaces.

IP namespaces (more formally, network namespaces, driven with the ip netns command) allow me to set up different network environments on the same machine, similar in spirit to IP subnets or AWS security groups. Then I can launch specific processes in those namespaces, and network access from those processes will be limited by the configuration of the namespace. That is to say, the Squid proxy can have full access but the test process can only talk to the proxy. Cool, right?

The GitLab CI/CD YAML looks like this (edited to protect the innocent)

stages:
  - integration

.integration_common: &integration_common |
  apt-get update
  apt-get install -y iproute2

.network_ns: &network_ns |
  ip netns add $namespace
  ip link add v-eth1 type veth peer name v-peer1
  ip link set v-peer1 netns $namespace
  ip addr add 192.168.254.1/30 dev v-eth1
  ip link set v-eth1 up
  ip netns exec $namespace ip addr add 192.168.254.2/30 dev v-peer1
  ip netns exec $namespace ip link set v-peer1 up
  ip netns exec $namespace ip link set lo up
  ip netns exec $namespace ip route add default via 192.168.254.1

noproxynoauth-cli:
  image: ubuntu:18.04
  stage: integration
  script:
    - *integration_common
    - test/end2end/cli

proxyauth-cli:
  image: ubuntu:18.04
  stage: integration
  script:
    - *integration_common
    - apt-get install -y squid apache2-utils
    - mkdir -p /etc/squid3
    - htpasswd -cb /etc/squid3/passwords testuser testpass
    - *network_ns
    - squid3 -f test/end2end/conf/squid.conf.auth && sleep 1 || tail -20 /var/log/syslog | grep squid
    - http_proxy=http://testuser:testpass@192.168.254.1:3128/ https_proxy=http://testuser:testpass@192.168.254.1:3128/ ip netns exec $namespace test/end2end/cli
    - ip netns del $namespace || true
  variables:
    namespace: proxyauth

proxynoauth-cli:
  image: ubuntu:18.04
  stage: integration
  script:
    - *integration_common
    - apt-get install -y squid
    - *network_ns
    - squid3 -f test/end2end/conf/squid.conf.noauth && sleep 1 || tail -20 /var/log/syslog | grep squid
    - http_proxy=http://192.168.254.1:3128/ https_proxy=http://192.168.254.1:3128/ ip netns exec $namespace test/end2end/cli
    - ip netns del $namespace || true
  variables:
    namespace: proxynoauth

So there are five blocks here: two common script blocks and three test jobs, all in a single integration stage. The first common script block installs iproute2, which gives us the ip command.

The second script block is where the magic happens. It configures a virtual, routed subnet in the parameterised $namespace.
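
If you want to convince yourself the namespace is wired up correctly before pointing any tests at it, a couple of quick sanity checks (not part of the pipeline above, and ping needs iputils-ping installing in the ubuntu image) look like this:

ip netns exec $namespace ip route                   # only the /30 and the default via 192.168.254.1 should show
ip netns exec $namespace ping -c1 192.168.254.1     # the host end of the veth pair is reachable
ip netns exec $namespace ping -c1 -W2 8.8.8.8       # this should fail - there's no NAT or forwarding on the host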

Following those we have the three test jobs corresponding to the three proxy (or not) configurations I listed earlier. Two of them install Squid, and one of those also creates a test user for authenticating with the proxy. They all run the test script, which in this case is test/end2end/cli. With the three configurations modularised out like this, alongside the common network-namespace script, the setup gives a good deal of clarity to the test maintainer. I like it a lot.
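
A quick way to demonstrate that nothing can sneak around the proxy – again a manual check rather than part of the pipeline, and assuming curl is available in the job image – is to run a request from inside the namespace with and without the proxy variables (no-auth proxy shown for brevity):

# No proxy variables: the namespace has no route to the outside world, so this times out
ip netns exec $namespace curl --max-time 5 -sI http://example.com || echo "direct access blocked, as intended"

# Via Squid on the host end of the veth pair, the same request succeeds
http_proxy=http://192.168.254.1:3128/ ip netns exec $namespace curl --max-time 5 -sI http://example.com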

So the last remaining pieces are the respective Squid configurations, proxynoauth and proxyauth. There's a little more junk in these than there needs to be, as they're taken from the stock examples, but they look something like this:

visible_hostname proxynoauth
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128

and for authentication:

visible_hostname proxyauth
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager

auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid3/passwords
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED

http_access allow authenticated
http_access deny all
http_port 3128
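
As a quick manual check of either configuration from the runner itself, curl's -x flag exercises the proxy directly, using the same address and credentials as the CI jobs above:

# Against the auth config without credentials: expect 407 Proxy Authentication Required
curl -sI -x http://192.168.254.1:3128/ http://example.com | head -1

# With credentials (or against the no-auth config): expect the origin server's normal response
curl -sI -x http://testuser:testpass@192.168.254.1:3128/ http://example.com | head -1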

And there you have it – network-restricted proxy testing with different proxy configurations. It’s the first time I’ve used ip netns without it being wrapped up in Docker, LXC, containerd or some other libvirt thing, but the feeling of power from my new-found network-god skills is quite something :)

Be aware that you might need to choose different subnet ranges if they conflict with your regular LAN. Please let me know in the comments if you find this useful or if you had to modify things to work in your environment.

Pushing Jenkins Job Build Statuses to Geckoboard


I love using Geckoboard. I love using Jenkins. I do have a few issues connecting the two though.

My Jenkins build cluster sits inside my corporate network and while there is a Jenkins plugin for Geckoboard it will only connect to Jenkins instances it can see on the public internet. I haven’t yet found a Geckoboard plugin for Jenkins to push results out through either. One day soon I’ll be annoyed enough to learn some Java and write one but until then I have a hack.

The core configuration of most of my Jenkins jobs runs approximately along these lines:

make deb && scp *deb deb-repo.my.net:/var/www/apt/incoming/

i.e. build a .deb (for Ubuntu) and if successful, copy and queue it for indexing by reprepro on my .deb repository server.

Now in Geckoboard I can configure a 1×1 Custom Text widget for PUSH data and publish data to it like so:

curl https://push.geckoboard.com/v1/send/F639F1AE-2227-11E4-A773-8FE5A58BF7C4 \
 -d '{"api_key":"AC738FE5A58BF7C4","data":{"item":[{"text":"packagename.deb","type":0}]}}'

Let’s make it a little more sustainable. In the main Jenkins configuration I set up a global environment variable called GECKO_APIKEY with a value of AC738FE5A58BF7C4. Now the line reads:

curl https://push.geckoboard.com/v1/send/F639F1AE-2227-11E4-A773-8FE5A58BF7C4 \
 -d "{\"api_key\":\"$GECKO_APIKEY\",\"data\":{\"item\":[{\"text\":\"packagename.deb\",\"type\":0}]}}"

I know I’ll need to change the posted data on failure, which most likely means duplicating some or all of that line, so I’ll extract the widget ID too. The job is now configured like:

export WIDGET=F639F1AE-2227-11E4-A773-8FE5A58BF7C4
make deb && scp *deb deb-repo.my.net:/var/www/apt/incoming/
curl https://push.geckoboard.com/v1/send/$WIDGET \
 -d "{\"api_key\":\"$GECKO_APIKEY\",\"data\":{\"item\":[{\"text\":\"packagename.deb\",\"type\":0}]}}"

But it’s not yet triggered differently on success or failure, so…

export WIDGET=F639F1AE-2227-11E4-A773-8FE5A58BF7C4
make deb && scp *deb deb-repo.my.net:/var/www/apt/incoming/ && \
curl https://push.geckoboard.com/v1/send/$WIDGET \
 -d "{\"api_key\":\"$GECKO_APIKEY\",\"data\":{\"item\":[{\"text\":\"packagename.deb\",\"type\":0}]}}" || \
curl https://push.geckoboard.com/v1/send/$WIDGET \
 -d "{\"api_key\":\"$GECKO_APIKEY\",\"data\":{\"item\":[{\"text\":\"packagename.deb\",\"type\":1}]}}"

The duplicate URL and packagename.deb are annoying, aren’t they? A quick look at the Jenkins docs reveals $JOB_NAME has what we want.

export WIDGET=F639F1AE-2227-11E4-A773-8FE5A58BF7C4
export GECKO_URL=https://push.geckoboard.com/v1/send/$WIDGET
make deb && scp *deb deb-repo.my.net:/var/www/apt/incoming/ && \
curl $GECKO_URL \
 -d "{\"api_key\":\"$GECKO_APIKEY\",\"data\":{\"item\":[{\"text\":\"$JOB_NAME PASS\",\"type\":0}]}}" || \
curl $GECKO_URL \
 -d "{\"api_key\":\"$GECKO_APIKEY\",\"data\":{\"item\":[{\"text\":\"$JOB_NAME FAIL\",\"type\":1}]}}"

Not too bad. It even works on Windows without too many modifications – “set” instead of “export”, %VAR% instead of $VAR and a Windows curl binary added to %PATH%.
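
If the remaining repetition bothers you, one further step – a sketch of my own rather than anything from the actual job – is to wrap the push in a small shell function at the top of the build step:

# Hypothetical helper: push one text item to the Geckoboard widget.
# $1 is the text to display, $2 is the "type" field (0 and 1 as used above).
gecko_push() {
  curl "$GECKO_URL" \
    -d "{\"api_key\":\"$GECKO_APIKEY\",\"data\":{\"item\":[{\"text\":\"$1\",\"type\":$2}]}}"
}

export GECKO_URL=https://push.geckoboard.com/v1/send/F639F1AE-2227-11E4-A773-8FE5A58BF7C4
make deb && scp *deb deb-repo.my.net:/var/www/apt/incoming/ && \
gecko_push "$JOB_NAME PASS" 0 || \
gecko_push "$JOB_NAME FAIL" 1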


Note: All API keys and Widget Ids have been changed to protect the innocent.

Content Delivery Network (CDN) using Linode VPS

This month one of the neat things I’ve done was to set up a small content delivery network (CDN) for speedy downloading of files across the globe. For one reason and another (mostly the difficulty in doing this purely with DNS and the desire not to use AWS), I opted to do this using my favourite VPS provider, Linode. All in all (and give or take DNS propagation time) I reckon it’s possible to deploy a multi-site CDN in under 30 minutes given a bit of practice. Not too shabby!

For this recipe you will need:

  1. Linode account
  2. A domain name and DNS management

What you’ll end up with:

  1. 3x Ubuntu 12.04 LTS VPS, one each in London, Tokyo and California
  2. 3x NodeBalancers, one each in London, Tokyo and California
  3. 1x user-facing general web address
  4. 3x continent-facing web addresses

I’m going to use “mycdn.com” wherever I refer to my DNS / domain. You should substitute your domain name wherever you see it.

So, firstly log in to Linode.

Create three new Linode 1024 small VPSes (or whatever size you think you’ll need). I set mine up as Ubuntu 12.04 LTS with 512MB swap but otherwise nothing special. Set one each to be in London, Tokyo and Fremont. Set the root password on each. Under “Settings”, give each VPS a label. I called mine vps-<city>-01. Under “Remote Settings”, give each a private IP and note them down together with the VPS/data centre they’re in.

At this point it’s also useful (but not strictly necessary) to give each node a DNS CNAME for its external IP address, just so you can log in to them easily by name later.

Boot all three machines and check you can login to them. I find it useful here to do an

apt-get update ; apt-get dist-upgrade

You can also now install Apache and mod_geoip on each node:

apt-get install apache2 libapache2-mod-geoip
a2enmod include
a2enmod rewrite

You should now be able to bring up a web browser on each VPS (public IP or CNAME) in turn and see the default Apache “It works!” page.

Ok, still with me? Next we’ll ask Linode to fire up three NodeBalancers, again one in each of the data centres, one for each VPS. I labelled mine cdn-lb-<city>-01. Each one can be configured with a port – 80 with, for now, the default settings. Add a host to each NodeBalancer with the private IP of its VPS and the port, e.g. 192.168.128.123:80. Note that each VPS hasn’t yet been configured to listen on its private interface, so each NodeBalancer won’t recognise its host as being up.

Ok. Let’s fix those private interfaces. SSH into each VPS using the root account and the password you set earlier. Edit /etc/network/interfaces and add:

auto eth0:1
iface eth0:1 inet static
	address <VPS private address here>
	netmask <VPS private netmask here>

Note that your private netmask is very unlikely to be 255.255.255.0 like your home network probably is, and yes, this does make a difference. Once that configuration is in, you can:

ifup eth0:1
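
A quick way to check the alias interface has come up with the right address (purely a sanity check):

ip addr show eth0     # the private address should appear as a secondary address labelled eth0:1
ifconfig eth0:1       # or the old-school view of just the alias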

Now we can add DNS CNAMEs for each NodeBalancer. Take the public IP for each NodeBalancer over to your DNS manager and add a meaningful CNAME for each one. I used continental regions americas, apac, europe, but you might prefer to be more specific than that (e.g. us-west, eu-west, …). Once the DNS propagates you should be able to see each of your Apache “It works!” pages again in your browser, but this time the traffic is running through the NodeBalancer (you might need to wait a few seconds before the NodeBalancer notices the VPS is now up).

Ok so let’s take stock. We have three VPS, each with a NodeBalancer and each running a web server. We could stop here and just present a homepage to each user telling them to manually select their local mirror – and some sites do that, but we can do a bit better.

Earlier we installed libapache2-mod-geoip. This includes a (free) database from MaxMind which maps IP address blocks to the continents they’re allocated to (via the ISP who’s bought them). The Apache module takes the database and sets a series of environment variables for each and every visitor IP. We can use this to have a good guess at roughly where a visitor is and bounce them out to the nearest of our NodeBalancers – magic!

So, let’s poke the Apache configuration a bit. rm /etc/apache2/sites-enabled/000-default. Create a new file /etc/apache2/sites-available/mirror.mycdn.com and give it the following contents:

<VirtualHost *:80>
	ServerName mirror.mycdn.com
	ServerAlias *.mycdn.com
	ServerAdmin webmaster@mycdn.com

	DocumentRoot /mirror/htdocs

	DirectoryIndex index.shtml index.html

	GeoIPEnable     On
	GeoIPScanProxyHeaders     On

	RewriteEngine     On

	RewriteCond %{HTTP_HOST} !americas.mycdn.com
	RewriteCond %{ENV:GEOIP_CONTINENT_CODE} NA|SA
	RewriteRule (.*) http://americas.mycdn.com$1 [R=permanent,L]

	RewriteCond %{HTTP_HOST} !apac.mycdn.com
	RewriteCond %{ENV:GEOIP_CONTINENT_CODE} AS|OC
	RewriteRule (.*) http://apac.mycdn.com$1 [R=permanent,L]

	RewriteCond %{HTTP_HOST} !europe.mycdn.com
	RewriteCond %{ENV:GEOIP_CONTINENT_CODE} EU|AF
	RewriteRule (.*) http://europe.mycdn.com$1 [R=permanent,L]

	<Directory />
		Order deny,allow
		Deny from all
		Options None
	</Directory>

	<Directory /mirror/htdocs>
		Order allow,deny
		Allow from all
		Options IncludesNoExec
	</Directory>
</VirtualHost>

Now ln -s /etc/apache2/sites-available/mirror.mycdn.com /etc/apache2/sites-enabled/ .

mkdir -p /mirror/htdocs to make your new document root and add a file called index.shtml there. The contents should look something like:

<html>
 <body>
  <h1>MyCDN Test Page</h1>
  <h2><!--#echo var="HTTP_HOST" --></h2>
<!--#set var="mirror_eu"       value="http://europe.mycdn.com/" -->
<!--#set var="mirror_apac"     value="http://apac.mycdn.com/" -->
<!--#set var="mirror_americas" value="http://americas.mycdn.com/" -->

<!--#if expr="${GEOIP_CONTINENT_CODE} == AF"-->
 <!--#set var="continent" value="Africa"-->
 <!--#set var="mirror" value="${mirror_eu}"-->

<!--#elif expr="${GEOIP_CONTINENT_CODE} == AS"-->
 <!--#set var="continent" value="Asia"-->
 <!--#set var="mirror" value="${mirror_apac}"-->

<!--#elif expr="${GEOIP_CONTINENT_CODE} == EU"-->
 <!--#set var="continent" value="Europe"-->
 <!--#set var="mirror" value="${mirror_eu}"-->

<!--#elif expr="${GEOIP_CONTINENT_CODE} == NA"-->
 <!--#set var="continent" value="North America"-->
 <!--#set var="mirror" value="${mirror_americas}"-->

<!--#elif expr="${GEOIP_CONTINENT_CODE} == OC"-->
 <!--#set var="continent" value="Oceania"-->
 <!--#set var="mirror" value="${mirror_apac}"-->

<!--#elif expr="${GEOIP_CONTINENT_CODE} == SA"-->
 <!--#set var="continent" value="South America"-->
 <!--#set var="mirror" value="${mirror_americas}"-->
<!--#endif -->
<!--#if expr="${GEOIP_CONTINENT_CODE}"-->
 <p>
  You appear to be in <!--#echo var="continent"-->.
  Your nearest mirror is <a href="<!--#echo var="mirror" -->"><!--#echo var="mirror" --></a>.
 </p>
 <p>
  Or choose from one of the following:
 </p>
<!--#else -->
 <p>
  Please choose your nearest mirror:
 </p>
<!--#endif -->

<ul>
 <li><a href="<!--#echo var="mirror_eu"       -->"><!--#echo var="mirror_eu"        --></a> Europe (London)</li>
 <li><a href="<!--#echo var="mirror_apac"     -->"><!--#echo var="mirror_apac"      --></a> Asia/Pacific (Tokyo)</li>
 <li><a href="<!--#echo var="mirror_americas" -->"><!--#echo var="mirror_americas"  --></a> USA (Fremont, CA)</li>
</ul>

<pre style="color:#ccc;font-size:smaller">
http-x-forwarded-for=<!--#echo var="HTTP_X_FORWARDED_FOR" -->
GEOIP_CONTINENT_CODE=<!--#echo var="GEOIP_CONTINENT_CODE" -->
</pre>
 </body>
</html>

Then apachectl restart to pick up the new virtualhost and visit each one of your NodeBalancer CNAMEs in turn. The ones which aren’t local to you should redirect you out to your nearest server.
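
Because GeoIPScanProxyHeaders is on, you can also fake a visitor from elsewhere with curl rather than waiting for a friend overseas to test for you – a rough check, substituting a real address from whichever continent you want to impersonate:

curl -sI -H 'X-Forwarded-For: <an IP address in Asia>' http://europe.mycdn.com/ | grep -i '^location'
# expect: Location: http://apac.mycdn.com/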

Pretty neat! The last step is to add a user-facing A record – I used mirror.mycdn.com – and set it up to DNS round-robin (DNS-RR) the addresses of the three NodeBalancers. Now set up a cron job to rsync your content to the three target VPSes, or a script to push content on demand. Job done!
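
The push itself can be as simple as a cron’d rsync from wherever the master copy of the content lives – something along these lines, with the source path and hostnames as examples only, and assuming SSH keys are already in place:

# /etc/cron.d/mycdn-push: sync the master tree to each mirror's document root hourly
0 * * * * root for h in vps-london-01 vps-tokyo-01 vps-fremont-01; do rsync -az --delete /srv/mycdn/ $h:/mirror/htdocs/; done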

For extra points:

  1. Clone another VPS behind each NodeBalancer so that each continent is fault tolerant, meaning you can reboot one VPS in each pair without losing continental service.
  2. Explore whether it’s safe to add the public IP of one Nodebalancer to the Host configuration of a NodeBalancer on another continent, effectively making a resilient loop.

Multivariate Charts from HTML tables in D3.js

For a dynamic monovariate (single line) chart, please see my earlier post – http://psyphi.net/blog/2013/04/charts-from-tables-with-d3js-and-jquery/.

Sometimes you just have to plot more than one dataset on the same chart, but you might have a complex data table with some “collections” of single values and some collections of multiple values. Here I’ve put together an example from something I’ve been working on recently. Once your back-end queries (SQL or whatever) are written and your templates convert those data into basic HTML tables, you can plot them straight to SVG/D3 without much extra work.

Nearly all of that extra work is around adding appropriate classes to cells to distinguish columns and collections of columns. The rest is to extract those cells out again and decide which should be plotted together.

In this example, tabs and table headings belong to classes “collection_#” and “a_c#”, where the collection_# identifies a set of columns to be displayed together and the a_c# identifies the (links for the) columns themselves. Collections with multiple columns therefore have a single collection class but contain more than one a_c# class.

Next, each table tbody td data cell belongs to a c# class, one for each column. Each one is also uniquely identified by a td#_<date> id, which allows hovers on the table cell to highlight the SVG data point and vice versa. Finally, each cell contains a span with a “val” class (more on that in the next post).

SVG paths may now be built for each column. Clicks on table headings and tabs can work out which columns should be displayed together, because they belong to the same collection, and then scale and plot them appropriately.

Note that the first and last tabs in this example plot single lines to demonstrate mixed collections in action. The middle two tabs have two lines each but there’s no reason why you couldn’t have more (although there are only seven colours listed at the moment).

Middleware and Monorails

Middleware, the point-and-click programmer’s equivalent of Perl. Middleware, yet another layer of largely unnecessary abstraction.

It’s two hours in to the three hour e-commerce process mapping meeting with three sets of consultants in the heart and heat of London with the traffic noise, police sirens and vaguely pleasant Hare Krishna chanting wafting in through the open windows of the meeting room. I’m minding my own business quietly in the corner, designing pipelines, data flows and object models, coding prolifically and generally doing much more useful things in the safe confines of my head when it happens… What’s your middleware? Hmm? What? Was that directed at me? Of course it was. Nuts.

Middleware, middleware, I’d better think fast. Not a term I use but I’m sure I can remember what it actually means. It must sit between things… Oh yes, it’s coming back to me now. Middleware, the point-and-click programmer’s equivalent of Perl. Middleware, yet another layer of largely unnecessary abstraction. An API for APIs. A tool for fooling developers into thinking that they’re not tightly coupling their applications together when instead they’re tightly coupling to a third-party system they have even less control over because they’re incapable of agreeing direct service communication specs with that other application. “It’s ok, it’s standards-based”. Sure it is. Whatever you say… Middleware is the consultants’ friend though – a clever sounding service that does the integration for you but generally requires little direct development and provides easy resale of the same work to multiple clients.

I’ve been developing large-scale, high traffic websites for a few years now and I’ve never had need for a component that specifically markets itself as middleware. Not once. I’ve used plenty of APIs, web services, object brokers, message queues, key-value stores and any other number of components with easily identifiable purposes but I think using middleware for middleware’s sake is still just a little too meta for me. Yessir! It’s a genuine, bona fide, API’d middleware. I see it absolutely as a Springfield Monorail application, but those Shelbyville folk are so much smarter, maybe I should be more like them.

What’s my middleware? None, I don’t have one. I don’t want one, I’m pretty sure I don’t need one. Cue surprised looks, amusement and disbelief.

Ok, twist my arm, I suppose I quite like Zapier. (Ahh, ok, he’s one of us after all)

Update: I tried to find a pretty picture to augment the post with but I couldn’t find anything on Google Images or Flickr that didn’t make me want to punch the screen.

Charts from Tables with D3js and jQuery

I’ve been tinkering with D3js on and off for a couple of months now, purely for generating simple, inline charts in web pages, made from data already dumped into HTML tables. Doing this is easier than building, caching and referencing external bitmap (PNG, GIF or whatever) images with Gnuplot or GD::Graph and also simpler than building bitmap images and serving them base64-encoded inline with <img alt="" src="data:…" />.

Using jQuery (or similar) to extract data from an already-present HTML table means there’s almost no code required whenever you want to add and plot a new column that someone might want to report on. Pushing all the work to the client should also mean slightly lighter server loads, though granted the server has already done the heavy lifting in the query that generates the table.

I’ve used examples from a number of sources, mostly from over on the d3js.org website itself and Mike Bostock’s inspiring example gallery. Plus the ever useful jQuery and jQueryUI libraries.

The result is a tabbed report (with a jqueryui-themed unordered list) based on the data table below it. Clicking on either a tab or a table heading (any except the date) will animate and redraw the chart above. The data are collected using a jQuery selector on the column classes in each case.

Feel free to take and reuse it – just pinch the frame source.

Another use for Selenium IDE

A dear friend of mine recently needed to recover all email from his mailbox. Normally this wouldn’t be a problem – there are plenty of options in any sane mail application: export or archive mailbox, select-all messages and “Send Again”/Redirect/Bounce to another address or, at the very worst, select-all and forward. Most of these options are available in the usual mail applications – Pine, Squirrelmail, IMP, Outlook, Outlook Express, Windows Mail, Mail.app, Thunderbird, Eudora and I’m sure loads of others.

Unfortunately the only access provided was through Microsoft’s Outlook Web Access (2007). This, whilst being fairly pretty in Lite (non-Internet Explorer browsers) mode and prettier/heavier in MSIE, does not have any useful bulk forwarding or export functionality at all. None. Not desperately handy, to be sure.

Ok, so my first port of call was to connect my Mail.app, which supports Exchange OWA access. No dice – spinning, hanging, no data. Hmm – odd. Second, I tried fetchExc, a Java command-line tool which promised everything I needed but in the end delivered pretty obtuse error messages. After an hour’s fiddling I gave up with fetchExc and tried falling back to Perl with Email::Folder::Exchange. This had very similar results to fetchExc but a slightly different set of errors.

After much swearing and a lot more poking, probing and requesting of tips from other friends (thanks Ze) the OWA service was also found to be sitting behind Microsoft’s Internet Security and Acceleration server. This isn’t a product I’ve used before but I can only assume it’s an expensive reverse proxy, the sort of thing I’d compare to inexpensive Apache + mod_proxy + mod_security on a good day. This ISA service happened to block all remote SOAP (2000/2003) and WebDAV (2007/2010) access too. Great! No remote service access whatsoever.

Brute force to the rescue. I could, of course go in and manually forward each and every last mail, but that’s quite tedious and a huge amount of clicking and pasting in the same email address. Enter Selenium IDE.

Selenium is a suite of tools for remote controlling browsers, primarily for writing tests for interactive applications. I use it in my day to day work mostly for checking bits of dynamic javascript, DHTML, forms etc. are doing the right things when clicked/pressed/dragged and generally interacted with. OWA is just a (really badly written) webpage to interact with, after all.

I downloaded the excellent sideflow.js plugin, which provides the loop functionality not usually required for web app testing, and after a bit of DOM inspection on the OWA pages I came up with the following plan –

  • click the subject link
  • click the “forward” button
  • enter the recipient address
  • click the send button
  • select the checkbox
  • press the “delete” button
  • repeat 500 times

The macro looked something like this:

<table cellpadding="1" cellspacing="1" border="1">
<thead>
<tr><td rowspan="1" colspan="3">owa-selenium-macro</td></tr>
</thead><tbody>
<tr>
	<td>getEval</td>
	<td>index=0</td>
	<td></td>
</tr>
<tr>
	<td>while</td>
	<td>index&lt;500</td>
	<td></td>
</tr>
<tr>
	<td>storeEval</td>
	<td>index</td>
	<td>value</td>
</tr>
<tr>
	<td>echo</td>
	<td>${value}</td>
	<td></td>
</tr>
<tr>
	<td>clickAndWait</td>
	<td>//table[1]/tbody/tr[2]/td[3]/table/tbody/tr[2]/td/div/table//tbody/tr[3]/td[6]/h1/a</td>
	<td></td>
</tr>
<tr>
	<td>clickAndWait</td>
	<td>id=lnkHdrforward</td>
	<td></td>
</tr>
<tr>
	<td>type</td>
	<td>id=txtto</td>
	<td>newaddress@gmail.com</td>
</tr>
<tr>
	<td>clickAndWait</td>
	<td>id=lnkHdrsend</td>
	<td></td>
</tr>
<tr>
	<td>click</td>
	<td>name=chkmsg</td>
	<td></td>
</tr>
<tr>
	<td>clickAndWait</td>
	<td>id=lnkHdrdelete</td>
	<td></td>
</tr>
<tr>
	<td>getEval</td>
	<td>index++</td>
	<td></td>
</tr>
<tr>
	<td>endWhile</td>
	<td></td>
	<td></td>
</tr>
</tbody></table>

So I logged in, opened each folder in turn and replayed the macro in Selenium IDE as many times as I needed to. Bingo! Super kludgy but it worked well, was entertaining to watch and ultimately did the job.

Web Frameworking

It seems to be the wrong time to be reading such things, but over on InfoQ there’s a nice article introducing web development of RESTful services using Erlang and the Yaws high performance web server.

I say “the wrong time” as this week has kicked off the “Advancing with Rails” course by David A. Black of Ruby Power and Light fame. The course is fairly advanced in terms of required Rails knowledge, so it’s a bit of a baptism by fire for me and a few others who have never written any Ruby before.

Rails is proving moderately easy to pick up but, as I’ve remarked to a couple of people, it doesn’t seem any easier coding with Rails than with Perl. Perhaps it’s because I’ve never done it before, but I reckon it’s a lot harder spending my time figuring out how DHH meant something to be done than it is just doing it myself.

Even though it’s nowhere near as mature, I do reckon my ClearPress framework has a lot going for it – it’s pretty feature-complete in terms of ORM, views and templating (TT2). It has similar convention-over-configuration features, meaning it’s not designed for plugging in alternative layers, but doing so is absolutely possible (and I suspect without as much effort as is required in Rails). I still need to iron out some wrinkles in the autogenerated code from the application builder and provide some default authorisation and authentication mechanisms, some of which may come in the next release. But in the meantime it’s easy to add these features, which is exactly what we’ve done for the new sequencing run tracking app, NPG, to tie it to the WTSI website single sign-on (MySQL and LDAP under the hood).