Haplotype Consensus Clustering

Way back in the annals of history (2002) I wrote a bit of code to perform haplotype groupings for early Ensembl-linked data. Like my recent kmer scanner example, it used one of my favourite bits of Perl – the regex engine. I dug the old script out of a backup and it was, as you’d expect, completely horrible. So for fun I gave it a makeover this evening, in between bits of Silent Witness.

This is what it looks like now. Down to 52 lines of code from 118 lines in the 2002 version. I guess the last 10 years have made me a little over twice as concise.

#!/usr/local/bin/perl -T
use strict;
use warnings;

#########
# read everything in
#
my $in = [];
while(my $line = <>) {
  chomp $line;
  if(!$line) {
    next;
  }

  #########
  # build regex pattern
  #
  $line =~ s{[X]}{.}smxg;

  #########
  # store
  #
  push @{$in}, uc $line;
}

my $consensii = {};

#########
# iterate over inputs
#
SEQ: for my $seq (sort { srt($a, $b) } @{$in}) {
  #########
  # iterate over consensus sequences so far
  #
  for my $con (sort { srt($a, $b) } keys %{$consensii}) {
    if($seq =~ /^$con$/smx ||
       $con =~ /^$seq$/smx) {
      #########
      # if input matches consensus, store & jump to next sequence
      #
      push @{$consensii->{$con}}, $seq;
      next SEQ;
    }
  }

  #########
  # if no match was found, create a new consensus container
  #
  $consensii->{$seq} = [$seq];
}

#########
# custom sort routine
# - firstly sort by sequence length
# - secondly sort by number of "."s (looseness)
#
sub srt {
  my ($x, $y) = @_;
  my $lx = length $x;
  my $ly = length $y;

  if($lx < $ly) {
    return -1;
  }
  if($lx > $ly) {
    return 1;
  }

  my $nx = $x =~ tr{.}{.};
  my $ny = $y =~ tr{.}{.};

  return $nx <=> $ny;
}

#########
# tally and print everything out
#
while(my ($k, $v) = each %{$consensii}) {
  $k =~ s/[.]/X/sxmg;
  print $k, " [", (scalar @{$v}) ,"]\n";
  for my $m (@{$v}) {
    $m =~ s/[.]/X/sxmg;
    print "  $m\n";
  }
}

The input file looks something like this:

ACTGXTGC
ACTGATGC
ACTGTTGC
ACTGCTGC
ACTGGTGC
ACTXCTGC
ACXGCTGC
ACTGXTGC
CTGCTGC
CTGGTGC
CTXCTGC
CXGCTGC
CTGXTGC

ACTGACTGACTGACTGACTG
ACTGACTGACTGACTGACTG
ACTGXTGACTGACTG
ACTGACTGACTXACTG
ACXTGACTGACTGACTG

and the output looks a little like this – consensus [number of sequences] followed by an indented list of matching sequences:

elwood:~/dev rmp$ ./haplotype-sort < haplotype-in.txt 
ACTGXTGACTGACTG [1]
  ACTGXTGACTGACTG
ACTGATGC [1]
  ACTGATGC
CTGCTGC [4]
  CTGCTGC
  CTXCTGC
  CXGCTGC
  CTGXTGC
ACTGCTGC [5]
  ACTGCTGC
  ACTGXTGC
  ACTXCTGC
  ACXGCTGC
  ACTGXTGC
ACTGACTGACTGACTGACTG [2]
  ACTGACTGACTGACTGACTG
  ACTGACTGACTGACTGACTG
ACTGTTGC [1]
  ACTGTTGC
CTGGTGC [1]
  CTGGTGC
ACTGACTGACTXACTG [1]
  ACTGACTGACTXACTG
ACXTGACTGACTGACTG [1]
  ACXTGACTGACTGACTG
ACTGGTGC [1]
  ACTGGTGC
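
Once X has been rewritten to ., each sequence doubles as a regex, which is why the inner loop tests the match in both directions – a concrete read can fall under a looser consensus, or a looser consensus can subsume a tighter one. A minimal sketch of that two-way test:

```perl
#!/usr/local/bin/perl
use strict;
use warnings;

my $con = 'ACTG.TGC'; # stored consensus, originally ACTGXTGC
my $seq = 'ACTGATGC'; # concrete input sequence

#########
# the concrete read matches the looser consensus...
#
print "seq falls under con\n" if $seq =~ /^$con$/smx;

#########
# ...but not vice versa: the dot in $con has no literal
# counterpart when $seq is used as the pattern
#
print "con does not fall under seq\n" if $con !~ /^$seq$/smx;
```

Only the first direction succeeds here, which is exactly why the clustering loop has to try both.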

naïve kmer scanner

Another bit of fun, basically the opposite of yesterday’s post, here we’re detecting the number of unique kmers present in a sequence. It’s easy to do this with an iterating substr approach but I like Perl’s regex engine a lot so I wanted to do it using that. Okay, I wanted to do it entirely in one /e regex but it’s slightly trickier and a lot less clear manipulating pos inside a /e substitution function.

#!/usr/local/bin/perl
use strict;
use warnings;

my $str   = q[AAACAATAAGAAGCACCATCAGTACTATTAGGACGATGAGGCCCTCCGCTTCTGCGTCGGTTTGTGGG];
my $k     = 3;
my $match = q[\s*[ACTG]\s*] x $k;
my $seen  = {};

while($str =~ m{($match)}smxgi) {
  my $m = $1;
  $m    =~ s/\s*//smxg;

  $seen->{$m}++;

  pos $str = (pos $str) - $k + 1;
}

{
  local $, = "\n";
  print sort keys %{$seen};
}

printf "\n%d unique ${k}mers\n", scalar keys %{$seen};

$k is the size of the kmers we’re looking for. In this case 3, as we were generating yesterday.
$match attempts to take care of matches across newlines, roughly what one might find inside a FASTA. YMMV.
$seen keeps track of uniques we’ve encountered so far in $str.

The while loop iterates through matches found by the regex engine, and pos, a function you don’t see too often, resets the start position for the next match – in this case to the current position minus one less than the length of the match (pos – k + 1), so that consecutive matches can overlap.
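
The same wind-the-pos-back trick in isolation, on a hypothetical toy input, counting overlapping matches that a plain /g loop would miss:

```perl
#!/usr/local/bin/perl
use strict;
use warnings;

my $s     = 'AAAA';
my $count = 0;

while($s =~ m{(AA)}smxg) {
  $count++;

  #########
  # wind back so the next match can overlap this one
  #
  pos $s = (pos $s) - 1;
}

print "$count\n"; # 3 overlapping matches; a plain /g loop finds only 2
```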

The output looks something like this:


elwood:~/dev rmp$ ./kmers 
AAA
AAC
AAG
AAT
ACA
ACC
ACG
ACT
AGA
AGC
AGG
AGT
ATA
ATC
ATG
ATT
CAA
CAC
CAG
CAT
CCA
CCC
CCG
CCT
CGA
CGC
CGG
CGT
CTA
CTC
CTG
CTT
GAA
GAC
GAG
GAT
GCA
GCC
GCG
GCT
GGA
GGC
GGG
GGT
GTA
GTC
GTG
GTT
TAA
TAC
TAG
TAT
TCA
TCC
TCG
TCT
TGA
TGC
TGG
TGT
TTA
TTC
TTG
TTT
64 unique 3mers

If I were really keen I’d use this in a regression test for yesterday’s toy.

naïve kmer sequence generator

This evening, for “fun”, I was tinkering with a couple of methods for generating sequences containing diverse, distinct, kmer subsequences. Here’s a small, unintelligent, brute-force function I came up with.
Its alphabet is set at the top in $bases, as is k, the required length of the distinct subsequences. It keeps going until it’s been able to hit all distinct combinations, tracked in the $seen hash. The final sequence ends up in $str.

#!/usr/local/bin/perl
use strict;
use warnings;

my $bases     = [qw(A C T G)];
my $k         = 3;
my $seen      = {};
my $str       = q[];
my $max_perms = (scalar @{$bases})**$k;
my $pos       = -1;

POS: while((scalar keys %{$seen}) < $max_perms) {
  $pos ++;
  report();

  for my $base (@{$bases}) {
    my $triple = sprintf q[%s%s],
                 (substr $str, $pos-($k-1), ($k-1)),
                 $base;
    if($pos < ($k-1) ||
       !$seen->{$triple}++) {
      $str .= $base;
      next POS;
    }
  }
  $str .= $bases->[-1];
}

sub report {
  print "len=@{[length $str]} seen @{[scalar keys %{$seen}]}/$max_perms kmers\n";
}

report();
print $str, "\n";

Executing for k=3, bases = ACTG the output looks like this:

elwood:~/dev rmp$ ./seqgen
len=0 seen 0/64 kmers
len=1 seen 0/64 kmers
len=2 seen 0/64 kmers
len=3 seen 1/64 kmers
len=4 seen 2/64 kmers
len=5 seen 3/64 kmers
len=6 seen 4/64 kmers
len=7 seen 5/64 kmers
len=8 seen 6/64 kmers
len=9 seen 7/64 kmers
len=10 seen 8/64 kmers
len=11 seen 9/64 kmers
len=12 seen 10/64 kmers
len=13 seen 10/64 kmers
len=14 seen 11/64 kmers
len=15 seen 12/64 kmers
len=16 seen 13/64 kmers
len=17 seen 14/64 kmers
len=18 seen 15/64 kmers
len=19 seen 16/64 kmers
len=20 seen 17/64 kmers
len=21 seen 18/64 kmers
len=22 seen 19/64 kmers
len=23 seen 20/64 kmers
len=24 seen 21/64 kmers
len=25 seen 22/64 kmers
len=26 seen 23/64 kmers
len=27 seen 24/64 kmers
len=28 seen 25/64 kmers
len=29 seen 26/64 kmers
len=30 seen 27/64 kmers
len=31 seen 28/64 kmers
len=32 seen 29/64 kmers
len=33 seen 30/64 kmers
len=34 seen 31/64 kmers
len=35 seen 32/64 kmers
len=36 seen 33/64 kmers
len=37 seen 34/64 kmers
len=38 seen 35/64 kmers
len=39 seen 36/64 kmers
len=40 seen 37/64 kmers
len=41 seen 37/64 kmers
len=42 seen 38/64 kmers
len=43 seen 39/64 kmers
len=44 seen 40/64 kmers
len=45 seen 41/64 kmers
len=46 seen 42/64 kmers
len=47 seen 43/64 kmers
len=48 seen 44/64 kmers
len=49 seen 45/64 kmers
len=50 seen 46/64 kmers
len=51 seen 47/64 kmers
len=52 seen 48/64 kmers
len=53 seen 49/64 kmers
len=54 seen 50/64 kmers
len=55 seen 51/64 kmers
len=56 seen 52/64 kmers
len=57 seen 53/64 kmers
len=58 seen 54/64 kmers
len=59 seen 55/64 kmers
len=60 seen 56/64 kmers
len=61 seen 57/64 kmers
len=62 seen 58/64 kmers
len=63 seen 59/64 kmers
len=64 seen 60/64 kmers
len=65 seen 61/64 kmers
len=66 seen 62/64 kmers
len=67 seen 63/64 kmers
len=68 seen 64/64 kmers
AAACAATAAGAAGCACCATCAGTACTATTAGGACGATGAGGCCCTCCGCTTCTGCGTCGGTTTGTGGG

As you can see, it manages to fit 64 distinct base triples in a string of only 68 characters. It could probably be packed a little more efficiently but I don’t think that’s too bad for a first attempt.
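
The 64-in-68 claim is easy to verify with the plain iterating-substr approach:

```perl
#!/usr/local/bin/perl
use strict;
use warnings;

my $str = q[AAACAATAAGAAGCACCATCAGTACTATTAGGACGATGAGGCCCTCCGCTTCTGCGTCGGTTTGTGGG];
my $k   = 3;
my %seen;

#########
# slide a window of width $k along the string
#
$seen{substr $str, $_, $k}++ for 0 .. (length $str) - $k;

printf "%d chars, %d distinct %dmers\n",
       length $str, scalar keys %seen, $k; # 68 chars, 64 distinct 3mers
```

For reference, a perfect packing is a De Bruijn sequence: for an alphabet of 4 and k=3 that fits all 64 triples into 64 + 2 = 66 characters, so 68 from a brute-force pass really isn’t bad.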

5MHz Propagation Forecasts utilising the Experiment’s Database

Using historical data to predict future conditions

This article, written by Gwyn Williams G4FKH and published in RadCom December 2011 is © RSGB 2011, reproduced with permission.

FIGURE 1: An input form (see text).

BRIEF HISTORY

The RSGB 5MHz Experiment started toward the end of 2002, when the 5MHz Working Group was formed under the chairmanship of John Gould, G3WKL. The current NoVs for access to 5MHz expire on 30 June 2015, but it is possible that Ofcom and MoD will agree to a further extension. The 5MHz Experiment could then run well into the current solar cycle. On the website [1] are various ways in which information can be retrieved from the 5MHz database. In fact there are three separate parts to the database. The first two are concerned with the logging of the three propagation beacons that have been established: the first for manual logging and the second for automated logging. The third part is a compilation of propagation data for the same period. It suddenly occurred to me that the database could be used to forecast propagation.

 

BASIC CONCEPT

This article and the subsequent web application discuss the automatically logged data and the propagation data. The three beacons in question are GB3RAL, GB3WES and GB3ORK, situated in Maidenhead Locator squares IO91in, IO84qn and IO89ja respectively. They transmit on a frequency of 5.290MHz every quarter of an hour for one minute each, starting on the hour. Each sequence therefore lasts for three minutes. The logging software was designed by Peter Martinez, G3PLX and is available on the web via the experiment’s main page [1]. To date, the automatic logging part of the database has accumulated over 1.3 million lines of data; the propagation part about one third of this. To perform the forecasts it is necessary to interrogate both of these datasets.
It will be appreciated that there are many stations contributing to the database and therefore there are an equal number of station setups! In order to explain the potential pitfall, I will explain my setup. I have a dipole aerial with a radio and PC. The output from the radio couples to the PC’s soundcard and the program manipulates data from the soundcard. My radio does not provide sufficient output to drive the soundcard, so I have installed an inline amplifier. I can, therefore, adjust my own recorded background/signal strength. Not taking care when analysing the data will cause these differences to become exaggerated. The G3PLX program compensates somewhat for this internally and displays the background signal level and the SNR of the station being monitored. This internal rationalisation and the forecasting program’s own scheme mitigate these differences.

I write small computer programs in the Perl language; during a fairly recent local club meeting I discussed this idea with Roger Pettett, G7TKI, a professional systems developer. My personal experience is not up to his standard and, as Roger kindly volunteered to develop the program in his own time, I quickly agreed. Roger is not au fait with propagation so the collaboration is a good one.

 

5MHz PROPAGATION

Before I go on to discuss the developed program and its usage, it seems expedient to first outline propagation at this frequency. Most of the propagation at this QRG is via NVIS (near vertical incidence skywave) communication. The E-layer (and probably, during some periods, the F-layer) does influence the longer distance circuits. NVIS for most practical purposes has a range of about 400km; therefore, if a station is set up in the middle of the UK, NVIS would be the dominant mode for the whole of the country. However, we do not live in an idealised world, so a method is needed to perform forecasts between any two parts of the UK. To facilitate this, database locators are used to correlate distances. The R12 (twelve-month smoothed relative sunspot number) and the AP (planetary A-index) are used to represent propagation conditions. R12 is automatically retrieved from the internet, but it is necessary to input the AP [2] that represents the propagation conditions under consideration. The best results within the UK will be achieved when using a 1⁄2λ dipole; the program suggests some aerial dimensions.
The ionosphere is fed by ionisation from the sun and this has a direct correlation with sunspot numbers. The ionosphere requires X amount of ionisation to allow for NVIS communication. For simplicity we can view the AP as a measure of disturbance in the ionosphere. The higher the AP, the worse the propagation will become.

Instead of using propagation prediction engines, eg VOACAP, we interrogate the databases to find parity (within parameters) to those conditions input by the user. A more complete introduction to NVIS communications can be found in [3].
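
The article doesn’t show how locators are turned into distances, but the usual approach is to convert each square to a latitude/longitude and apply the haversine formula. A sketch under that assumption (the sub names are mine, not the experiment’s code; it assumes six-character locators and a spherical Earth):

```perl
#!/usr/local/bin/perl
use strict;
use warnings;

#########
# centre of a six-character Maidenhead square as (lat, lon)
#
sub locator_to_latlon {
  my ($loc) = @_;
  my @c     = split //, uc $loc;
  my $lon   = ((ord $c[0]) - (ord 'A')) * 20 - 180
            + $c[2] * 2
            + ((ord $c[4]) - (ord 'A')) / 12
            + 1 / 24;
  my $lat   = ((ord $c[1]) - (ord 'A')) * 10 - 90
            + $c[3]
            + ((ord $c[5]) - (ord 'A')) / 24
            + 1 / 48;
  return ($lat, $lon);
}

#########
# great-circle distance in km (haversine, spherical Earth)
#
sub haversine_km {
  my ($lat1, $lon1, $lat2, $lon2) = @_;
  my $pi = 4 * atan2(1, 1);
  my ($p1, $p2, $dlat, $dlon) =
    map { $_ * $pi / 180 } $lat1, $lat2, $lat2 - $lat1, $lon2 - $lon1;
  my $a  = sin($dlat / 2)**2 + cos($p1) * cos($p2) * sin($dlon / 2)**2;
  return 6371 * 2 * atan2(sqrt($a), sqrt(1 - $a));
}

#########
# GB3RAL (IO91in) to GB3WES (IO84qn)
#
printf "%.0f km\n", haversine_km(locator_to_latlon('IO91in'),
                                 locator_to_latlon('IO84qn'));
```

For that circuit this comes out at roughly 345km, comfortably inside the 400km NVIS range mentioned above.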

5MHz FORECASTING PROGRAM

The completed program can be found at [4] and should be straightforward in use. As mentioned it is only necessary to input the AP, month, year and locator squares that are to be forecast, the last input tick is for a smoothed graph output. Figure 1 is an example of the input form. It can take a little time to search through the databases to find the correct matches; when the relevant data are found a table like Table 1 is produced along with a graph, Figure 2.

The table, which can be viewed as a median for a whole month, comprises three headed columns. The first column is the hour of the day, the second an SNR (signal to noise ratio) for the hour and the third an explanatory note for the SNR. For the purposes of this dataset an SNR below 20 should be considered as unreadable. The graph is a pictorial view of the table data; the thin green line is where the signal drops out of usability.
It may not be apparent why we need a tool such as the one described because, after all, we have experience at this QRG and other prediction tools to hand. What those tools do not provide is the ability to visualise propagation when conditions are disturbed or changing. Entering the appropriate values in the input form allows the user to see what conditions will be like when the ionosphere becomes disturbed, for example by coronal mass ejections and solar flares. This ability is lacking in prediction engines, but this program shows to what extent any degree of disturbance should affect any circuit. However, there is one shortcoming suffered by all forecasting tools: the ability to see things before they happen, ie a short-wave fade from a large flare. This effect lasts as long as the flare is visible from Earth (utilising the correct viewing techniques), from a minute or two to perhaps an hour or so. These effects have a quick onset and an equally quick fall-off.
For comparison, and out of interest, the output from this program was compared with REC533, the same prediction engine used to produce the RadCom predictions (to be fair, prediction programs were never really designed for such short distances).
A survey was produced for RadCom in 2000 and REC533 was found to perform best. Comparing the output from REC533 (Table 2 Predicted SNR column) and what was actually copied using the automatic monitoring program (Table 2 Automatic Recording SNR column) showed that REC533 was only 25% accurate. When it is considered that this percentage of accuracy is only obtainable when geomagnetic conditions are quiet it shows a severe shortcoming.

Comparing this to the output from our new program (Table 2 Forecast SNR column) with an accuracy of 67% against actual monitoring, under all geomagnetic conditions, it shows the new program to be far superior to anything previously available. Comparison was carried out with the criteria of the SNR being within 10dB. In order to obtain more accurate results more data is required as well as data from a greater number of stations.
Table 3 and Figure 3 demonstrate the way in which conditions can change in reality. These clearly demonstrate the different conditions when an ionospheric disturbance is at hand. Of course it should be understood that diurnal conditions also have a bearing on the results shown in the table, ie at the very end and very beginning of the day, for some circuits no path is expected. What is happening is that the foF2 (F2 critical frequency) is dropping below the required frequency; this is when the E and F layers can accommodate communication.

 

SUMMARY

This simple but effective idea can prove useful in all sorts of circumstances, ie for everyday amateur use, emergency situations (RAYNET) and other occasions outside the amateur service where 5MHz is in use, especially when conditions become disturbed during ionospheric upheavals. It is the use of a database with all varieties of propagation conditions that is the winning ingredient in performing accurate forecasts. As the database builds up to a crescendo in 2015, a whole solar cycle and more of information will have been logged and will provide an unsurpassed amount of quality data for this frequency.

 

WEBSEARCH

[1] www.rsgb.org/spectrumforum/hf/5mhz.php
[2] www.swpc.noaa.gov/ftpdir/weekly/27DO.txt
[3] Near Vertical Incidence Skywave Communication: Theory, Techniques and Validation by LTC David M Fiedler (NJ ARNG, Ret) and MAJ Edward J Farmer, PE (CA SMR). Published by Worldradio Books.
[4] www.rsgb-spectrumforum.org.uk/5mhzpf

This article may also be downloaded in PDF format here: 5MHz Propagation Forecasting utilising the Experiment’s Database

3 sorts of sort

I’ve been fiddling around recently with some stuff which I’m sure I covered in my CS degree 16 (Gah! Really?) years ago but have had to re-educate myself about. Namely a few different implementations of sort. I implemented three types in Perl with some reacquaintance of the mechanisms via Wikipedia. I found a few existing examples of sort algorithms in Perl but they generally looked a bit unpleasant so like all programmers I decided to write my own (complete with errors, as an exercise to the reader). Here I’ve also added some notes, mostly to myself, which are largely unsubstantiated because I haven’t measured memory, speed or recursion depth, for example (though these are well-documented elsewhere).

1. Bubble Sort

#!/usr/bin/perl -w
use strict;
use warnings;

my $set    = [map { int rand() * 99 } (0..40)];
print "in:  @{$set}\n";

my $sorted = bubblesort($set);
print "out: @{$sorted}\n";

sub bubblesort {
  my ($in)     = @_;
  my $out      = [@{$in}];
  my $length   = scalar @{$in};
  my $modified = 1;

  while($modified) {
    $modified = 0;
    for my $i (0..$length-2) {
      if($out->[$i] > $out->[$i+1]) {
        ($out->[$i], $out->[$i+1]) = ($out->[$i+1], $out->[$i]);
        $modified = 1;
      }
    }
  }

  return $out;
}

Bubblesort iterates through each element of the list up to the last but one, comparing to the next element in the list. If it’s greater the values are swapped. The process repeats until no modifications are made to the list.

Pros: doesn’t use much memory – values are swapped in situ; doesn’t perform deep recursion; is easy to read

Cons: It’s pretty slow. The worst-case complexity is O(n²): up to n passes, each processing n values.

2. Merge Sort

#!/usr/bin/perl
use strict;
use warnings;

my $set    = [map { int rand() * 99 } (0..40)];
print "in:  @{$set}\n";

my $sorted = mergesort($set);
print "out: @{$sorted}\n";

sub mergesort {
  my ($in) = @_;

  my $length = scalar @{$in};
  if($length <= 1) {
    return $in;
  }

  my $partition = $length / 2;
  my $left      = [@{$in}[0..$partition-1]];
  my $right     = [@{$in}[$partition..$length-1]];

  return merge(mergesort($left), mergesort($right));
}

sub merge {
  my ($left, $right) = @_;
  my $merge = [];

  while(scalar @{$left} || scalar @{$right}) {
    if(scalar @{$left} && scalar @{$right}) {
      if($left->[0] < $right->[0]) {
        push @{$merge}, shift @{$left};
      } else {
        push @{$merge}, shift @{$right};
      }
    } elsif(scalar @{$left}) {
      push @{$merge}, shift @{$left};
    } elsif(scalar @{$right}) {
      push @{$merge}, shift @{$right};
    }
  }
  return $merge;
}

Mergesort recurses through the list, in each iteration breaking the remaining list in half. Once broken down to individual elements, each pair of elements/lists at each depth of recursion is reconstituted into a new ordered list and returned.

Pros: generally quicker than bubblesort; O(n log n) complexity.

Cons: quite difficult to read

3. Quicksort

#!/usr/bin/perl
use strict;
use warnings;

my $set    = [map { int rand() * 99 } (0..40)];
print "in:  @{$set}\n";

my $sorted = quicksort($set);
print "out: @{$sorted}\n";

sub quicksort {
  my ($in) = @_;

  my $length = scalar @{$in};
  if($length <= 1) {
    return $in;
  }

  my $pivot = splice @{$in}, $length / 2, 1;
  my $left  = [];
  my $right = [];

  for my $v (@{$in}) {
    if($v < $pivot) {
      push @{$left}, $v;
    } else {
      push @{$right}, $v;
    }
  }

  return [@{quicksort($left)}, $pivot, @{quicksort($right)}];
}

Quicksort is probably the best known of all the sort algorithms out there. It’s easier to read than Mergesort, though arguably still not as easy as Bubblesort, but it’s a common pattern and its speed makes up for anything lacking in readability. At each iteration a pivot is selected and removed from the list. The remaining list is scanned: each element lower than the pivot is put into a new “left” list and each greater element is put into a new “right” list. The returned result is a merged recursive quicksort of the left list, the pivot and the right list.

In this example I’m picking the middle element of the list as the pivot. I’m sure there are entire branches of discrete mathematics dealing with how to choose the pivot based on the type of input data.
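
One common cheap improvement is median-of-three: take the median of the first, middle and last elements, which behaves much better on already-sorted input than a fixed positional pivot. A sketch of how the pivot choice might be swapped out (the sub name is mine):

```perl
#!/usr/local/bin/perl
use strict;
use warnings;

#########
# index of the median of the first, middle and last values
#
sub median_of_three_index {
  my ($in)     = @_;
  my $last     = $#{$in};
  my @by_value = sort { $in->[$a] <=> $in->[$b] } (0, int($last / 2), $last);
  return $by_value[1];
}

my $list = [9, 1, 8, 2, 7, 3];

#########
# candidates are 9, 8 and 3; the median is 8, at index 2
#
print median_of_three_index($list), "\n";
```

In the quicksort above one would then splice at this index instead of at $length / 2.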

Pros: (one of?) the fastest sort algorithms around; reasonably efficient memory usage and recursion depth. Average complexity is O(n log n) again (the worst case is O(n²)).

Perhaps it’s worth noting that in 25-odd years of programming computers I’ve only ever had to examine the inner workings of sort routines as part of my degree – never before, nor after, but it’s certainly brought back a few memories.

Neat Perl Gotcha

For a while now I’ve been using Test::Perl::Critic as an integral part of my daily coding. Test::Perl::Critic wraps Perl::Critic into a set of tests which can be automatically run against a distribution. Perl::Critic implements Damian Conway’s set of standard Perl Best Practices.

I’m not going to go into the arguments of whether they’re good or bad rules right now. Suffice to say I use nearly all of them and my code has improved because of it. Anyway, one of the rules states you shouldn’t use parentheses for calls to core builtin functions like length(), int() or rand(), so most of the time I don’t, but today I wanted to do this:

my @array = map { int rand * 45 } (1..10);

The results come out as an array of zeros. Why? Well, it’s not a broken PRNG, that’s for sure. It’s a simple case of misunderstood parsing, operator precedence and Perl internals. Looking closely, * 45 isn’t what you’d expect: *45 refers to the typeglob named 45 in the main package. I actually have no idea how this is cast into a number in Perl but, however it’s done, it evaluates to zero. rand 0 seems to exhibit the same behaviour as rand 1, yielding a random float between zero and 1, so int rand 0 will always be zero.

So? Well to stop the parser from taking *45 as an argument to rand you need to upset PerlCritic and add those parentheses back in:

my @array = map { int rand() *45 } (1..10);

and it works now so back to work!

[ The astute amongst you will realise you can just say int rand(45). I know, but that doesn’t work nearly so well as a mildly interesting example. I blame PerlCritic ;) ]