Fitting the GOTEK FlashFloppy

In previous episodes of the Atari 520STFM refurbishment it was cleaned and recapped. This instalment sees the installation of a replacement floppy drive. The GOTEK, sometimes known as FlashFloppy (really the name of the firmware it runs), is a drop-in replacement for a 3.5″ floppy disk drive which has several benefits:

  • It takes a FAT-formatted USB stick
  • The computer sees it as a normal floppy drive
  • It can serve a lot of 720K or 1.44MB disk images from a 16GB memory stick
  • A tiny OLED screen makes disk selection pretty easy
  • It’s much faster than a floppy drive
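
On the disk-image point above: a 720K .ST image is just a raw sector dump – 80 tracks × 2 sides × 9 sectors × 512 bytes = 737,280 bytes – so preparing blank images for the stick is trivial. Here's a minimal sketch, assuming a FAT stick mounted at /media/usb (a hypothetical path) and that a blank image can then be formatted from the GEM desktop (an assumption – check the FlashFloppy documentation):

#!/usr/bin/env python3
# Sketch only: drop a few blank 720K .ST images onto the USB stick for FlashFloppy to serve.
# The mount point and the format-in-place behaviour are assumptions, not gospel.
from pathlib import Path

BLANK_720K = 80 * 2 * 9 * 512   # tracks x sides x sectors x bytes = 737,280 bytes
stick = Path("/media/usb")      # hypothetical mount point of the FAT-formatted stick

for n in range(5):
    image = stick / f"BLANK{n:02d}.ST"
    image.write_bytes(b"\x00" * BLANK_720K)   # all-zero, unformatted image
    print(f"wrote {image} ({BLANK_720K} bytes)")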

The downsides are that you can no longer use your old floppy disks, and sadly it doesn’t make those nostalgia-inducing head-seek noises… at least my one doesn’t!

There are a couple of different GOTEK models, one older and one newer. The story goes that the price of the original disk controller 10x'd, so the makers changed it and the firmware wasn't immediately compatible. My one, bought on fleabay UK in May 2024, has the newer chip.

ARTERY AT32F415 GOTEK disk controller

The drive chassis is approximately the same size as the floppy drive it replaces, but this one turned out to be fractionally shorter (approx. 1cm), so the power cable would not reach.

Power cable fails to reach even at full-stretch

This was resolved by cutting approximately 6cm of floppy power cable from a dead ATX power supply and soldering it to four spare header pins left over from an ESP32 project.

A salvaged floppy power connector extension

Naturally heat-shrink sleeving is never available to hand, so some fine purple electrical tape had to do. Hot glue would probably work quite well to secure the header pins as well.

Next come the flyleads for the GOTEK's rotary controller and OLED display. Not all GOTEKs come with an external control/display module – most seem to have the display built in and many only have up/down buttons, not a rotary control at all. Given that the drive is on the side of the STFM, the standard display isn't visible most of the time, which isn't very practical, so the external module seems much more useful. The display needs careful positioning, as redoing it later is a PITA.

A small knife was used to very gently pry open one of the slots in the top of the case in order to position the display properly when clipped in. This is necessary because the connector blocks don’t fit through the slot without a little extra encouragement.

Gentle encouragement. Don’t crack the case!

I took a moment to appreciate the colour-coding on the wires and the fact that the connectors on duplicate colours are alternately polarised, meaning they cannot be connected incorrectly. That's super helpful, but countered by the fact that these one-pin blocks don't make very solid mechanical contact, tending to fall out if you look at them wrong. Securing them with small spots of superglue seems to help.

Flyleads superglued into place

The excess wires are pushed through from above, and the controller/display module is positioned and clipped onto the top of the ST case such that the wires can't be seen.

Rotary Controller and OLED breakout module

The drive itself has the USB socket very close to the old eject-button surround moulding, which interferes very slightly, but in practice it doesn't seem to affect USB connectivity. Unfortunately in this configuration, in order to allow the ribbon cable to reach, the drive is technically mounted upside-down.

Mounted GOTEK drive

With everything closed back up it’s quite a smart-looking solution. Pretending to be a floppy drive doesn’t remove the quirks of using floppy disks but it does make them easier to deal with.

Atari 520STFM pictured with Ultrasatan SD virtual hard disk and GOTEK virtual floppy drive

The firmware version shipped on the drive seems fine but it’s possible to flash updates using the FlashFloppy code and documentation here. All in all the GOTEK is pretty easy to fit aside from the extra power extension. I will almost certainly be fitting more in the future.

Atari STFM520 keyboard & case refurbishing

In part three of my ongoing nostalgiathon refurbishing an Atari STFM it’s time to clean up the keyboard and case.

The keyboard assembly before cleaning

Step one is to remove the keyboard from the chassis – it's very simple: remove the seven short screws from underneath to release the top half of the case. The keyboard floats on the shielding and is connected via a delicate 8-pin (7 wired and one polarising blank) header connector. This keyboard wasn't very grubby – I've definitely seen much worse. A little grime and some Letraset lower-case letters, plus the usual dust, fluff and crumbs in between the keys.

Detail of using a keycap puller

Using a keycap puller makes very quick work of removing all the key caps without damaging the switches or support pillars. The Enter and Space keys also have metal stabilisation bars which can be carefully unclipped from the keyboard chassis. Be gentle with these bars – they’re attached to the keycaps using small plastic clips which are easy to bend and break.

Alphabet soup: keycaps taking a bath

All the keycaps were soaked in warm water and washing-up liquid, then individually scrubbed with a soft washing-up pad, which was enough to remove the grime and the Letraset.

The keyboard assembly with all keycaps removed

The keyboard chassis had a light layer of muck. It was wiped first with surface-cleaning disinfectant wipes, then with cotton buds and isopropyl alcohol (IPA).

Rinsing the keycaps

After scrubbing, the water was changed and the key caps were rinsed.

Stabilisation bars and keycaps drying

Keycaps were left to dry on kitchen towel. Also visible are the stabilisation bars for Enter and Space on the left, and one of the stabilisation support clips on the bottom.

Oxicleaned top case

Whilst the keycaps were being cleaned, advantage was taken of a pleasant sunny afternoon. The top case was liberally sprayed with oxyclean peroxide spray (similar to Retrobright) and left in the sun for several hours, respraying and rotating every half hour or so. The case can also be wrapped in clingfilm to reduce or avoid respraying.

Reassembled keyboard – looking clean!

All the keycaps were replaced using a reference photo taken before disassembly. The stabiliser pivots also had a pinhead of lithium grease applied. I imagine this is only really to reduce squeaking.

Reassembled STFM

Seeing everything reassembled in the case is very satisfying. The top case only suffered slight yellowing which has mostly cleared up now. I’ll have to try it again soon with my other STFM which is much worse.

Installing the Exxos 4MB RAM Atari STFM expansion

In the unlikely event you read my earlier post on recapping the Atari STFM power supply, you’ll know I recently acquired a good example of a mid-late revision Atari 520STFM. Now its PSU has been recapped and cleaned up, it’s time to have a crack at upgrading it from half a megabyte of RAM to 4MB, the most it can take in stock config.

There are several ways to perform this upgrade, from the difficult but reliable route of desoldering all the existing RAM chips, sourcing compatible replacements and soldering them in, to piggybacking daughterboards of various types, with the choice depending heavily on the motherboard revision in question.

C103253 rev.1 Atari STFM motherboard before expansion

My motherboard is a C103253 rev.1, as pictured, so for this upgrade I opted for the Exxos "The LaST Upgrade" MMU piggyback with a stacking board which sits on the shifter chip and connects with a ribbon cable.

Opening up the shielding (centre of image above) revealed a socketed shifter. Apparently this isn’t always the case but it’ll do for me. The shifter chip can be gently pried out of its socket with a thin blade, then inserted into the shifter daughterboard, which I bought fully assembled. This can then be inserted back into the shifter socket, and that part is complete. Next time I do this I’ll consider buying the kit to construct, as it’s not a very complicated assembly.

The shielding doesn't fit back over the stacked shifter now, which is flagged as an outcome in the documentation. I didn't want to remove the shielding completely, so I opted to bend it backwards over the video modulator. It just fits under the main case shielding when that goes back on, which is great, but it does now interfere with the floppy ribbon cable in particular. This makes it awkward to put the original floppy drive back in, but might be fine with a GOTEK as they look a little shorter than the original drive. I don't have one to test-fit yet so I might need to revisit this shield later.

Next, on to the MMU piggyback. The pitch of these pins is smaller and they look very delicate compared to the pins on the shifter, for example. This daughterboard sits directly on top of the MMU – its retaining clip needs to be removed – and requires a disconcerting amount of pressure to seat it fully in the socket, as its pins are jammed in next to the socket pins. I chose to pull the motherboard out of the bottom case, rest it on a desk, seat the daughterboard and carefully push down onto it using the palm of my hand and my weight. It felt extremely uncomfortable, as I've never had to use that much force to seat a chip.

Lastly, the old RAM still soldered onto the motherboard needs to be either removed or disconnected. The latter is much less work and can be reversed later if necessary. The 68-ohm resistors R59, R60 and R61 need lifting to 5V. On this motherboard this means desoldering and lifting the right-hand-side legs, closest to the MMU, then adding a jumper wire over to the +ve leg of the adjacent 4700µF capacitor on the motherboard.

Use solid core wire, not like I did here

4MB Atari STFM booted to GEM desktop

The result is a 4MB STFM (woowoo!) which boots to desktop and as yet has no way to run software, because the floppy drive is dead and I haven't formatted any SD cards for the ultrasatan yet (and will that even work with TOS 1.02?). Haha.

All parts were sourced from Exxos, with advice from the Atari ST and STe users FB group.

Installing the Exxos Atari ST PSU recap kit

I recently acquired a classic 16-bit machine from my childhood in the form of a Motorola 68000-powered Atari 520STFM. Whilst it's a later motherboard revision – C103253 REV.1 – it's still a low-spec model with only 512KB of RAM. The "F" in STFM is for the included 3.5″ floppy disk drive, with the "M" being for the built-in TV modulator.

My hope is to upgrade this machine to the maximum of 4MB RAM and see which other add-ons (e.g. GoTek USB-stick floppy drive replacement; ultrasatan SD-card virtual hard drive; TOS upgrades; PiStorm accelerator) will best modernise the experience.

Atari 520STFM motherboard C103253 Rev.1

But first things first: I know enough not to turn it on in excitement – the most common fault with these older machines is failing electrolytic capacitors, as the electrolyte in them dries out, particularly in hotter environments like power supplies, so let's have a look at the PSU… This model is a common Mitsumi SR98. We're looking for bulging capacitor packages like this one.

A bulging electrolytic capacitor

The Exxos PSU refurbishment kit includes a replacement set of capacitors, a couple of replacement resistors, and a modern, more-efficient rectifier and low-voltage Schottky diode. This results in improved stability, reduced ripple and lower temperatures. It's also well within my soldering abilities!

The Exxos refurbishment kit, as it comes
Mitsumi SR98 PSU as it came, with replacement targets highlighted.

The fiddliest part is easily the rectifier as the new one is significantly larger and a different shape, but once it’s all done it looks something like the image below. A quick visual inspection underneath for bridged tracks and stray solder, maybe a quick clean with isopropanol and a toothbrush, and it’s ready to go.

The refurbished SR98 PSU, top side
Refurbished SR98 PSU, bottom side

The refurbished PSU is refitted carefully back into the case and reconnected to the low voltage header on the motherboard. Various parts of the PSU are mains live when turned on (danger of death!), so extreme care needs to be taken if the whole case isn’t reassembled. Also note that this PSU likes to be loaded – i.e. not to be run bare, so don’t turn it on without plugging something in (ideally a cheap bulb, rather than an expensive motherboard).

Using a multimeter I measured the voltage across the large 4700µF capacitor and trimmed VR201 down slightly to bring the voltage closer to 5.00V.

Now flipping the power switch results in a little green desktop and no magic smoke!

Little Green Desktop

This booted without a keyboard, mouse or floppy drive. I used an RGB SCART cable to an OSSC scan doubler (middle right), then HDMI to a regular modern monitor. The image in both low and medium resolutions is crisp and clear with very little hint of instability.

Next steps: cleaning the keyboard, retrobrighting the case, upgrading the TOS ROMs, fitting the 4MB RAM upgrade, GOTEK and ultrasatan drives.

All the information I used for this PSU refurbishment was from the Exxos Forum.

Remote Power Management using Arduino


2016-03-07 Update: Git Repo available

Recently I've been involved with building a hardware device consisting of a cluster of low-power PC servers. The boards chosen for this particular project aren't enterprise- or embedded-style boards with specialist features like out-of-band (power) management (such as Dell's iDRAC or Intel's AMT), so I started thinking about how to approximate something similar.

It’s also a little reminiscent of STONITH (Shoot The Other Node In The Head), used for aspects of the Linux-HA (High Availability) services.

I dug around in a box of goodies and found a couple of handy parts:

  1. Arduino Duemilanove
  2. Seeedstudio Arduino Relay Shield v3

The relays are rated for switching up to 35V at 8A – easily handling the 19V @ 2A for the mini server boards I’m remote managing.

The other handy thing to notice is that the Arduino by its nature is serial-enabled, meaning you can control it very simply using a USB connection to the management system without needing any more shields or adapters.

Lastly it's worth mentioning that the relays are effectively SPDT switches, so they have both normally-open and normally-closed connections. In my case this is useful as most of the time I don't want the relays to be energised, saving power and prolonging the life of the relay.

The example Arduino code below opens a serial port and collects characters in a string variable until a carriage return (0x0D) before acting, accepting the commands "on", "off" and "reset". When a command is complete, the code acts on it, clears the command buffer and flips the voltages on the digital pins controlling the relays. Works a treat – all I need to do now is splice the power cables for the cluster compute units and run them through the right connectors on the relay boards. With the draw the cluster nodes pull being well within the specs of the relays, it might even be possible to happily run two nodes through each relay.

There's no reason why this sort of thing couldn't be used for many other purposes too – home automation or other types of remote management – and it could obviously be activated over ethernet, wifi or bluetooth instead of serial. Goes without saying for a relay board – duh!

int MotorControl1 = 4;
int MotorControl2 = 5;
int MotorControl3 = 6;
int MotorControl4 = 7;
int incomingByte = 0; // for incoming serial data
String input = ""; // for command message

void action (String cmd) {
  if(cmd == "off") {
    digitalWrite(MotorControl1, HIGH); // NO1 + COM1
    digitalWrite(MotorControl2, HIGH); // NO2 + COM2
    digitalWrite(MotorControl3, HIGH); // NO3 + COM3
    digitalWrite(MotorControl4, HIGH); // NO4 + COM4
    return;
  }

  if(cmd == "on") {
    digitalWrite(MotorControl1, LOW); // NC1 + COM1
    digitalWrite(MotorControl2, LOW); // NC2 + COM2
    digitalWrite(MotorControl3, LOW); // NC3 + COM3
    digitalWrite(MotorControl4, LOW); // NC4 + COM4
    return;
  }

  if(cmd == "reset") {
    action("off");
    delay(1000);
    action("on");
    return;
  }

  Serial.println("unknown action");
}

// the setup routine runs once when you press reset:
void setup() {
  pinMode(MotorControl1, OUTPUT);
  pinMode(MotorControl2, OUTPUT);
  pinMode(MotorControl3, OUTPUT);
  pinMode(MotorControl4, OUTPUT);
  Serial.begin(9600); // opens serial port, sets data rate to 9600 bps
  Serial.println("relay controller v0.1 rmp@psyphi.net actions are on|off|reset");
  input = "";
} 

// the loop routine runs over and over again forever:
void loop() {
  if (Serial.available() > 0) {
    incomingByte = Serial.read();

    if(incomingByte == 0x0D || incomingByte == 0x0A) { // treat CR (or the LF of a CRLF pair) as end-of-command
      if(input.length() > 0) {                         // skip blank lines so CRLF-terminated terminals work too
        Serial.println("action:" + input);
        action(input);
        input = "";
      }
    } else {
      input.concat(char(incomingByte));
    }
  } else {
    delay(1000); // no need to go crazy
  }
}
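
For completeness, here's a minimal host-side sketch for driving the relays from the management machine. It assumes pyserial is installed and that the Arduino enumerates as /dev/ttyUSB0 – both assumptions, so adjust for your own setup:

#!/usr/bin/env python3
# Sketch only: send a command to the relay controller over USB serial.
# Assumes pyserial is installed and the board appears as /dev/ttyUSB0.
import sys
import time

import serial  # pip install pyserial

def send(command: str, port: str = "/dev/ttyUSB0") -> None:
    with serial.Serial(port, 9600, timeout=2) as link:
        time.sleep(2)                         # the board resets when the port opens; give it a moment
        link.write(command.encode() + b"\r")  # commands are CR-terminated: on | off | reset
        print(link.readline().decode(errors="replace").strip())  # banner or "action:<cmd>" echo

if __name__ == "__main__":
    send(sys.argv[1] if len(sys.argv) > 1 else "reset")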



Amazon Prime on Kodi for Slice

I'm lucky enough to have both a Raspberry Pi "Slice" media player and an Amazon Prime account, but Prime isn't supported right out of the box. Here's how I was able to set it up today.

Requirements:

  1. A Slice
  2. An Amazon Prime account

First, make sure your Slice is correctly networked. Configuration is under Setup => OpenElec Settings.

Next you need to download a third-party add-on repository for Kodi. Download the XLordKX Repo zip into a folder on the Slice. I did this from another computer and copied it into a network share served from the Slice.

Now we can install the add-on: Setup => Add-on manager => Install from zip file, then navigate to the file you downloaded and install it. Next, Setup => Get Add-ons => XLordKX Repo => Video Add-ons => Amazon Prime Instant Video => Install.

Now to configure Amazon Prime. Setup => Add-ons => Video Add-ons => Amazon Prime Instant Video.

I set mine to Website Version: UK and left everything else as defaults. Feed it your Amazon username & password and off you go.

The navigation is a little flakey which is a common Kodi/XBMC problem but the streaming seems fully functional – no problems on anything I’ve tried so far. I also see no reason why this wouldn’t work on raspbmc or openelec on a plain old Raspberry Pi. Happy streaming!

 

HT https://seo-michael.co.uk/tutorial-how-to-install-amazon-prime-instant-video-xbmc-kodi/ where I found instructions for Kodi in general.

Using the iPod Nano 6th gen with Ubuntu

Today I spent three hours wrestling with a secondhand iPod Nano, 6th gen (the "6" is the killer), for a friend, trying to make it work happily with Ubuntu.

Having never actually owned an iPod myself, only iPhone and iPad, it was a vaguely educational experience too. I found nearly no useful information on dozens of fora – all of them only reporting either “it works” without checking the generation, or “it doesn’t work” with no resolution, or “it should work” with no evidence. Yay Linux!

There were two issues to address – firstly making the iPod block storage device visible to Linux and secondly finding something to manage the unconventional media database on the iPod itself.

It turned out that most iPods, certainly early generations, work well with Linux but this one happened not to. Most iPods are supported via libgpod, whether you’re using Banshee, Rhythmbox, even Amarok (I think) and others. I had no luck with Rhythmbox, Banshee, gtkpod, or simple block storage access for synchronising music.

It also turns out that Spotify, one of my other favourite music players, doesn't use libgpod, which looked very promising.

So the procedure I used to get this one to work went something like this:

  1. Restore and/or initialise the iPod using the standard procedure with iTunes (I used iTunes v10 and the latest iPod firmware, 1.2) on a Windows PC. Do not use iTunes on OSX. Using OSX results in the iPod being formatted with a not-well-supported filesystem (hfsplus with journalling). Using Windows results in a FAT filesystem (mounted as vfat under Linux). Having said that, I did have some success making the OSX-initialised device visible to Linux, but it required editing fstab and adding:
    /dev/sdb2 /media/ipod hfsplus user,rw,noauto,force 0 0

    which is pretty stinky. FAT-based filesystems have been well supported for a long time – best to stick with that. Rhythmbox, the player I was trying at the time, also didn’t support the new media database. It appeared to copy files on but failed every time, complaining about unsupported/invalid database checksums. According to various fora the hashes need reverse engineering.

  2. Install the Ubuntu Spotify Preview using the Ubuntu deb (not the Wine version). I used the instructions here.
  3. I have a free Spotify account, which I’ve had for ages and might not be possible to make any more. I was worried that not having a premium or unlimited account wouldn’t let me use the iPod sync, but in the end it worked fine. The iPod was seen and available in Spotify straight away and allowed synchronisation of specific playlists or all “Local Files”. In the end as long as Spotify was running and the iPod connected, I could just copy files directly into my ~/Music/ folder and Spotify would sync it onto the iPod immediately.

Superb, job done! (I didn’t try syncing any pictures)

 

Thoughts on the WDTV Live Streaming Multimedia Player

A couple of weeks ago I had some Amazon credit to use and I picked up a Western Digital TV Live. I’ve been using it on and off since then and figured I’d jot down some thoughts.

Looks

Well, how does it look? It's small for starters – smaller than a double-CD case, if you can remember those – and around an inch deep. Probably a little larger than the Cyclone players, although I don't have any of those to compare with. It's also very light indeed – not having a hard disk or power supply built in means the player itself can't have much more than a motherboard inside. I imagine the heaviest component is probably a power regulator heatsink or the case itself. It doesn't sound like it has any fans either, so there's no audible running noise. I've got wall-wart power bricks which make more running noise than this unit.

Mounting is performed using a couple of recesses on the back. I put a single screw into the VESA mount on the back of the kitchen TV and hung the WDTV from that. The infrared receiver seems pretty receptive just behind the top of the TV, facing upwards and the heaviest component to worry about is the HDMI or component AV cable – not a big deal at all.

Interface

The on-screen interface is pleasant and usable once you work your way around the icons and menus. The main screens – Music/Video/Services/Settings – are easy enough, but the functionality of the coloured menus isn't too clear until you've either played around with them enough or read the manual (haha). Associating to Wifi is a bit of a pain if you have a long WPA key, as the soft keyboard isn't too great. I did wonder if it's possible to attach a USB keyboard just to enter passwords etc. but I didn't try that out.

Connecting to NFS and SMB/CIFS shared drives is relatively easy. It helps if the shares are already configured to allow guest access or have a dedicated account for media players, for example. The WDTV Live really wants read-write access to any shares you're going to use permanently so it can generate its own indices. I like navigating folders and files rather than special device-specific libraries, so I'm not particularly keen on this, but if it improves the multimedia experience, so be it. I've got enough multimedia devices in the house now, each with its own method of indexing, that remembering which index folders from device A need to be ignored by device B is becoming a bit of a nuisance. I haven't had more than the usual set of problems with sending remote audio to the WDTV Live from a bunch of different Android devices, or using it as a Media Renderer from the DiskStation Audio Station app.

The remote control feels solid, with positive button actions and a responsive receiver. It’s laid out logically I guess, by which I mean it’s laid out in roughly the same way as most other video & multimedia remote controls I’ve used.

Firmware Updates

So normally I expect to buy some sort of gadget like this, use it for a couple of months, find a handful of bugs and never receive any firmware updates for it ever again. However I’ve been pleasantly surprised. In the two weeks I’ve had the WDTV I’ve had two firmware updates, one during the initial installation and the most recent in the last couple of days to address, amongst other things, slow frontend performance when background tasks are running (read “multimedia indexing on network shares” here). I briefly had a scan around the web to see if there was an XBMC port and there didn’t appear to be although there were some requests. I haven’t looked to see what CPU the WDTV has inside but it’s probably a low power ARM or Broadcom or similar so would take some effort to port XBMC to (from memory I seem to recall there is an ARM port in the works though). The regular firmware is downloadable and hackable however and there’s at least one unofficial version around.

Performance

Video playback has been smooth on everything I’ve tried. The videos I’ve played back have all been different formats, different container formats, different resolutions etc. and all streamed over 802.11G wifi and ethernet. I didn’t have any trouble with either type of networking so I haven’t checked to see whether the wired port is 100Mbps or 1GbE. I haven’t tried USB playback and there’s no SD card slot, which you might expect.

Audio playback is smooth although the interface took a little getting used to. I’ve been used to the XBMC and Synology DSAudio style of Queue/Play but this device always seems to queue+play which is actually what you want a lot of the time. I don’t have a digital audio receiver so I haven’t tried the SPDIF out.

Picture playback is acceptable but I found the transitions pretty jumpy, at least with 12 and 14Mpx images over wifi.

Conclusions

Overall I'm pretty happy with this device. It's cheap, small, quiet and unobtrusive but packs a fair punch in terms of features. My biggest gripe is that it's really slow doing its indexing. I thought that might have been because it was running over wifi, but even after attaching it to a wired network it's taken three days solid scanning our family snaps and home videos (a mix of still-camera video captures, miniDV transfers and HD camcorder footage). It doesn't give you an idea of how far it's progressed or how much is left to go, so the only option seems to be to leave it and let it run. I did also have an initial problem where the WDTV didn't detect it had HDMI plugged in, preferring to use the composite video out. Unscientifically, at the same time as I updated the firmware I reversed the cable, so I don't know quite what fixed it but it seems to have been fine since.

If I had to give an overall score for the WDTV Live, I’d probably say somewhere around 8/10.

 

Technostalgia

BBC Micro

Ahhhhh, Technostalgia. This evening I pulled out a box from the attic. It contained an instance of the first computer I ever used. A trusty BBC B+ Micro and a whole pile of mods to go with it. What a fabulous piece of kit. Robust workhorse, Econet local-area-networking built-in (but no modem, how forward-thinking!), and a plethora of expansion ports. My admiration of this hardware is difficult to quantify but I wasted years of my life learning how to hack about with it, both hardware and software.

The BBC Micro taught me in- and out- of the classroom. My primary school had one in each classroom and, though those might have been the ‘A’ or ‘B’ models, I distinctly remember one BBC Master somewhere in the school. Those weren’t networked but I remember spraining a thumb in the fourth year of primary school and being off sports for a few weeks. That’s when things really started happening. I taught myself procedural programming using LOGO. I was 10 – a late starter compared to some. I remember one open-day the school borrowed (or dusted off) a turtle

BBC Buggy (Turtle)

Brilliant fun, drawing ridiculous spirograph-style patterns on vast sheets of paper.

When I moved up to secondary school my eyes were opened properly. The computer lab was pretty good too. Networked computers. Fancy that! A network printer and a network fileserver the size of a… not sure what to compare it with – it was a pretty unique form-factor – about a metre long, 3/4 metre wide and about 20cm deep from memory (but I was small back then). Weighed a tonne. A couple of 10- or 20MB Winchesters in it from what I recall. I still have the master key for it somewhere! My school was in Cambridge and had a couple of part-time IT teacher/administrators who seemed to be on loan from SJ Research. Our school was very lucky in that regard – we were used as a test-bed for a bunch of network things from SJ Research, as far as I know a relative of Acorn. Fantastic kit only occasionally let down by the single, core network cable slung overhead between two buildings.

My first experience of Email was using the BBC. We had an internal mail system *POST which was retired after a while, roughly when ARBS left the school I think. I wrote my own MTA back then too, but in BASIC – I must have been about 15 at the time. For internet mail the school had signed up to use something called Interspan which I later realised must have been some sort of bridge to Fidonet or similar.

Teletext Adapter

We even had a networked teletext server which, when working, downloaded teletext pages to the LAN and was able to serve them to anyone who requested them. The OWUKWW – One-way-UK-wide-web! The Music department had a Music 5000 Synth which ran a language called Ample. Goodness knows how many times we played Axel-F on that. Software/computer-programmable keyboard synth – amazing.

Around the same time I started coding in 6502 and wrote some blisteringly fast conversions of simple games I’d earlier written in BASIC. I used to spend days drawing out custom characters on 8×8 squared exercise books. I probably still have them somewhere, in another box in the attic.

6502 coprocessor

Up until this point I’d been without a computer at home. My parents invested in our first home computer. The Atari ST. GEM was quite a leap from the BBC but I’d seen similar things using (I think) the additional co-processors – either the Z80- or the 6502 co-pro allowed you to run a sort of GEM desktop on the Beeb.

My memory is a bit hazy because then the school started throwing out the BBCs and bringing in the first Acorn Archimedes machines. Things of beauty! White, elegant, fast, hot, with a (still!) underappreciated operating system, high colour graphics, decent built-in audio and all sorts of other goodies. We had a Meteosat receiver hooked up to one in the geography department, pulling down WEFAX transmissions. I *still* haven’t got around to doing that at home, and I *still* want to!

Acorn A3000 Publicity Photo

Atari STE Turbo Pack

The ST failed pretty quickly and was replaced under warranty with an STE. Oh the horror – it was already incompatible with several games, but it had a Blitter chip ready to compete with those bloody Amiga zealots. Oh Babylon 5 was rendered on an Amiga. Sure, sure. But how many thousands of hit records had been written using Cubase or Steinberg on the Atari? MIDI – there was a thing. Most people now know MIDI as those annoying, never-quite-sounding-right music files which autoplay, unwarranted, on web pages where you can’t find the ‘mute’ button. Even that view is pretty dated.

Back then MIDI was a revolution. You could even network more than one Atari using it, as well as all your instruments of course. The STE was gradually treated to its fair share of upgrades – 4MB ram and a 100MB (SCSI, I think) hard disk, a “StereoBlaster” cartridge even gave it DSP capabilities for sampling. Awesome. I’m surprised it didn’t burn out from all the games my brothers and I played. I do remember wrecking *many* joysticks.

Like so many others I learned more assembler, 68000 this time, as I'd done with the BBC, by typing out pages and pages of code from books and magazines, spending weeks trying to find the bugs I'd introduced, checking and re-checking code until deciding the book had typos, but GFA Basic was our workhorse. My father had also started programming in GFA, and continued to do so until about 10 years ago when the Atari was retired.

Then University. First term, first few weeks of first term. I blew my entire student grant, £1400 back then, on my first PC. Pentium 75, 8MB RAM, a 1GB disk and, very important back then, a CD-ROM drive. A Multimedia PC!
It came with Windows 3.11 for Workgroups but with about 6 weeks of work was dual boot with my first Linux install. Slackware.

That one process, installing Slackware Linux with only one book “Que: Introduction to UNIX” probably taught me more about the practicalities of modern operating systems than my entire 3-year BSc in Computer Science (though to be fair, almost no theory of course). I remember shuttling hundreds of floppy disks between my room in halls and the department and/or university computer centre. I also remember the roughly 5% corruption rate and having to figure out the differences between my lack of understanding and buggered files. To be perfectly honest things haven’t changed a huge amount since then. It’s still a daily battle between understanding and buggered files. At least packaging has improved (apt; rpm remains a backwards step but that’s another story) but basically everything’s grown faster. At least these days the urge to stencil-spray-paint my PC case is weaker.

So – how many computers have helped me learn my trade? Well since about 1992 there have been five of significant import. The BBC Micro; the Acorn Archimedes A3000; the Atari ST(E); the Pentium 75 and my first Apple Mac G4 powerbook. And I salute all of them. If only computers today were designed and built with such love and craft. *sniff*.

Required Viewing:

  • Micro Men
  • The Pirates of Silicon Valley

Exa-, Peta-, Tera-scale Informatics: Are *YOU* in the cloud yet?

http://www.flickr.com/photos/pagedooley/2511369048/

One of the aspects of my job over the last few years, both at Sanger and now at Oxford Nanopore Technologies has been the management of tera-, verging on peta- scale data on a daily basis.

Various methods of handling filesystems this large have been around for a while now and I won’t go into them here. Building these filesystems is actually fairly straightforward as most of them are implemented as regular, repeatable units – great for horizontal scale-out.

No, what makes this a difficult problem isn’t the sheer volume of data, it’s the amount of churn. Churn can be defined as the rate at which new files are added and old files are removed.

To illustrate – when I left Sanger, if memory serves, we were generally recording around a terabyte of new data a day. The staging area there was around 0.5 Petabytes (using the Lustre filesystem) but didn’t balance correctly across the many disks. This meant we had to keep the utilised space below around 90% for fear of filling up an individual storage unit (and leading to unexpected errors). Ok, so that’s 450TB. That left 45 days of storage – one and a half months assuming no slack.

Fair enough. Sort of. Collect the data onto the staging area, analyse it there and shift it off. Well, that's easier said than done – you can shift it off onto slower, cheaper storage, but that's generally archival space so ideally you only keep raw data there. If the raw data are too big then you keep the primary analysis and ditch the raw. But there's a problem with that:

  • Lots of clever people want to squeeze as much interesting stuff out of the raw data as possible using new algorithms.
  • They also keep finding things wrong with the primary analyses and so want to go back and reanalyse.
  • Added to that, there are often problems with the primary analysis pipeline (bleeding-edge software bugs etc.).
  • And that's not mentioning the fact that nobody ever wants to delete anything.

As there's little or no slack in the system, very often people are too busy to look at their own data as soon as it's analysed, so it might sit there broken for a week or four. What happens then is there's a scrum for compute resources so they can analyse everything before the remaining two weeks of staging storage are up. Then, even if problems are found, it can be too late to go back and reanalyse because there's a shortage of space for new runs, and stopping the instruments because you're out of space is a definite no-no!

What the heck? Organisationally this isn’t cool at all. Situations like this are only going to worsen! The technologies are improving all the time – run-times are increasing, read-lengths are increasing, base-quality is increasing, analysis is becoming better and more instruments are becoming available to more people who are using them for more things. That’s a many, many-fold increase in storage requirements.

So how to fix it? Well, I can think of at least one pretty good way. Don't invest in on-site long-term staging or scratch storage. If you're worried, by all means sort out an awesome backup system, but nearline it or offline it to a decent tape archive or something and absolutely do not allow user access. Instead of long-term staging storage, buy your company the fattest Internet pipe it can handle. Invest in connectivity, then simply invest in cloud storage. There are enough providers out there now to make this a competitive and interesting marketplace with opportunities for economies of scale.

What does this give you? Well, many benefits – here are a few:

  • virtually unlimited storage
  • only pay for what you use
  • accountable costs – know exactly how much each project needs to invest
  • managed by storage experts
  • flexible computing attached to storage on-demand
  • no additional power overheads
  • no additional space overheads

Most of those I more-or-less take for granted these days. The one I find interesting at the moment is the costing issue. It can be pretty hard to hold one centralised storage area accountable for different groups – they’ll often pitch in for proportion of the whole based on their estimated use compared to everyone else. With accountable storage offered by the cloud each group can manage and pay for their own space. The costs are transparent to them and the responsibility has been delegated away from central management. I think that’s an extremely attractive prospect!

The biggest argument I hear against cloud storage and computing is that your top-secret, private data is in someone else's hands. Aside from my general dislike of secret data, these days I still don't believe this is a good argument. There are enough methods for handling encryption and private networking that this pretty much becomes a non-issue. Encrypt the data on-site, store the keys in your own internal database, ship the data to the cloud, and when you need to run analysis fetch the appropriate keys over an encrypted link, decode the data on demand, re-encrypt the results and ship them back. Sure, the encryption overheads add expense to the operation, but I think the costs are far outweighed.
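
To make that concrete, here's a minimal sketch of the encrypt-before-upload half of that workflow. It's illustrative only – the library choices (cryptography's Fernet and boto3 for S3), the bucket name and the local key store are my assumptions, not a prescription:

#!/usr/bin/env python3
# Sketch only: encrypt on-site, keep the key locally, ship only ciphertext to cloud storage.
# Library choices (cryptography, boto3) and names (bucket, key store) are assumptions.
import json
from pathlib import Path

import boto3
from cryptography.fernet import Fernet

KEY_STORE = Path("keystore.json")   # stand-in for "your own internal database"
BUCKET = "my-archive-bucket"        # hypothetical bucket name

def encrypt_and_upload(path: Path) -> None:
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(path.read_bytes())

    # Record the per-file key locally; only the ciphertext ever leaves the site.
    keys = json.loads(KEY_STORE.read_text()) if KEY_STORE.exists() else {}
    keys[path.name] = key.decode()
    KEY_STORE.write_text(json.dumps(keys))

    boto3.client("s3").put_object(Bucket=BUCKET, Key=path.name + ".enc", Body=ciphertext)

if __name__ == "__main__":
    encrypt_and_upload(Path("run0001.fastq"))   # hypothetical raw data file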