Remote Power Management using Arduino


2016-03-07 Update: Git Repo available

Recently I’ve been involved with building a hardware device consisting of a cluster of low-power PC servers. The boards chosen for this particular project aren’t enterprise- or embedded-style boards with specialist features like out-of-band power management (think Dell’s iDRAC or Intel’s AMT), so I started thinking about how to approximate something similar.

It’s also a little reminiscent of STONITH (Shoot The Other Node In The Head), used for fencing in Linux-HA (High Availability) clusters.

I dug around in a box of goodies and found a couple of handy parts:

  1. Arduino Duemilanove
  2. Seeedstudio Arduino Relay Shield v3

The relays are rated for switching up to 35V at 8A – easily handling the 19V @ 2A drawn by the mini server boards I’m managing remotely.

The other handy thing to notice is that the Arduino is serial-over-USB by nature, meaning you can control it very simply from the management system over a USB connection without needing any more shields or adapters.

Lastly it’s worth mentioning that the relays are effectively SPDT switches, so each has both normally-open (NO) and normally-closed (NC) contacts as well as the common (COM) terminal. In my case this is useful: by running the server power through the NC contacts, the relays stay de-energised most of the time, saving power and prolonging the life of the relay.

The example Arduino sketch below opens a serial port and collects characters into a string until it sees a carriage return (0x0D), then acts on the commands “on”, “off” and “reset”. Once a command has been handled, the code clears the command buffer and flips the digital pins controlling the relays. Works a treat – all I need to do now is splice the power cables for the cluster compute units and run them through the right terminals on the relay board. With the draw of the cluster nodes being well within the specs of the relays it might even be possible to run two nodes happily through each relay. A minimal host-side example follows the sketch.

There’s no reason why this sort of thing couldn’t be used for many other purposes too – home automation or other types of remote management – and it could obviously be driven over ethernet, wifi or bluetooth instead of serial.

// digital pins driving relay channels 1-4 on the relay shield
int MotorControl1 = 4;
int MotorControl2 = 5;
int MotorControl3 = 6;
int MotorControl4 = 7;
int incomingByte = 0; // for incoming serial data
String input = ""; // for command message

void action (String cmd) {
  if(cmd == "off") {
    digitalWrite(MotorControl1, HIGH); // NO1 + COM1
    digitalWrite(MotorControl2, HIGH); // NO2 + COM2
    digitalWrite(MotorControl3, HIGH); // NO3 + COM3
    digitalWrite(MotorControl4, HIGH); // NO4 + COM4
    return;
  }

  if(cmd == "on") {
    digitalWrite(MotorControl1, LOW); // NC1 + COM1
    digitalWrite(MotorControl2, LOW); // NC2 + COM2
    digitalWrite(MotorControl3, LOW); // NC3 + COM3
    digitalWrite(MotorControl4, LOW); // NC4 + COM4
    return;
  }

  if(cmd == "reset") {
    action("off");
    delay(1000);
    action("on");
    return;
  }

  Serial.println("unknown action");
}

// the setup routine runs once when you press reset:
void setup() {
  pinMode(MotorControl1, OUTPUT);
  pinMode(MotorControl2, OUTPUT);
  pinMode(MotorControl3, OUTPUT);
  pinMode(MotorControl4, OUTPUT);
  Serial.begin(9600); // opens serial port, sets data rate to 9600 bps
  Serial.println("relay controller v0.1 rmp@psyphi.net actions are on|off|reset");
  input = "";
} 

// the loop routine runs over and over again forever:
void loop() {
  if (Serial.available() > 0) {
    incomingByte = Serial.read();

    if(incomingByte == 0x0D) {
      Serial.println("action:" + input);
      action(input);
      input = "";
    } else {
      input.concat(char(incomingByte));
    }
  } else {
    delay(1000); // no need to go crazy
  }
}
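
For completeness, here’s a rough sketch of the host side in Python, assuming pyserial is installed and the board enumerates as /dev/ttyUSB0 (both of which will vary with your setup):

#!/usr/bin/env python
# Minimal host-side driver for the relay controller sketch above.
# Assumptions: pyserial is installed and the Arduino shows up as /dev/ttyUSB0.
import sys
import time

import serial

def send_command(cmd, port="/dev/ttyUSB0", baud=9600):
    with serial.Serial(port, baud, timeout=2) as ser:
        time.sleep(2)                    # the Duemilanove auto-resets when the port opens
        ser.reset_input_buffer()         # discard the banner printed by setup()
        ser.write(cmd.encode() + b"\r")  # the sketch acts on a carriage return (0x0D)
        print(ser.readline().decode(errors="replace").strip())

if __name__ == "__main__":
    send_command(sys.argv[1] if len(sys.argv) > 1 else "reset")

Saved as, say, relayctl.py, that gives you “python relayctl.py off” and friends from the management box.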



Amazon Prime on Kodi for Slice

I’m lucky enough to have both a Raspberry Pi “Slice” media player and an Amazon Prime account, but Prime isn’t supported right out of the box. Here’s how I was able to set it up today.

Requirements:

  1. A Slice
  2. An Amazon Prime account

First, make sure your Slice is correctly networked. Configuration is under Setup => OpenElec Settings.

Next you need to add a third-party add-on repository to Kodi. Download the XLordKX Repo zip into a folder on the Slice. I did this from another computer and copied it into a network share served from the Slice.

Now we can install the add-on: Setup => Add-on manager => Install from zip file, then navigate to the file you downloaded and install it. Then: Setup => Get Add-ons => XLordKX Repo => Video Add-ons => Amazon Prime Instant Video => Install.

Now to configure Amazon Prime. Setup => Add-ons => Video Add-ons => Amazon Prime Instant Video.

I set mine to Website Version: UK and left everything else as defaults. Feed it your Amazon username & password and off you go.

The navigation is a little flakey which is a common Kodi/XBMC problem but the streaming seems fully functional – no problems on anything I’ve tried so far. I also see no reason why this wouldn’t work on raspbmc or openelec on a plain old Raspberry Pi. Happy streaming!


HT https://seo-michael.co.uk/tutorial-how-to-install-amazon-prime-instant-video-xbmc-kodi/ where I found instructions for Kodi in general.

Using the iPod Nano 6th gen with Ubuntu

Today I spent 3 hours wrestling with a secondhand iPod Nano, 6th gen (the “6” is the killer) for a friend, trying to make it work happily with Ubuntu.

Having never actually owned an iPod myself, only iPhone and iPad, it was a vaguely educational experience too. I found nearly no useful information on dozens of fora – all of them only reporting either “it works” without checking the generation, or “it doesn’t work” with no resolution, or “it should work” with no evidence. Yay Linux!

There were two issues to address – firstly making the iPod block storage device visible to Linux and secondly finding something to manage the unconventional media database on the iPod itself.

It turned out that most iPods, certainly early generations, work well with Linux but this one happened not to. Most iPods are supported via libgpod, whether you’re using Banshee, Rhythmbox, even Amarok (I think) and others. I had no luck with Rhythmbox, Banshee, gtkpod, or simple block storage access for synchronising music.

It also turns out that Spotify, one of my other favourite music players, doesn’t use libgpod, which looked very promising.

So the procedure I used to get this one to work went something like this:

  1. Restore and/or initialise the iPod using the standard procedure with iTunes (I used iTunes v10 and the latest iPod firmware, 1.2) on a Windows PC. Do not use iTunes on OSX: using OSX results in the iPod being formatted with a not-well-supported filesystem (hfsplus with journalling), while using Windows results in a FAT filesystem (mounted as vfat under Linux). Having said that, I did have some success making the OSX-initialised device visible to Linux, but it required editing fstab and adding:
    /dev/sdb2 /media/ipod hfsplus user,rw,noauto,force 0 0

    which is pretty stinky. FAT-based filesystems have been well supported for a long time – best to stick with that. Rhythmbox, the player I was trying at the time, also didn’t support the new media database: it appeared to copy files on but failed every time, complaining about unsupported/invalid database checksums. According to various fora the hashes need reverse engineering.

  2. Install the Ubuntu Spotify Preview using the Ubuntu deb (not the Wine version). I used the instructions here.
  3. I have a free Spotify account, which I’ve had for ages and which it may no longer be possible to create. I was worried that not having a premium or unlimited account wouldn’t let me use the iPod sync, but in the end it worked fine. The iPod was seen and available in Spotify straight away and allowed synchronisation of specific playlists or all “Local Files”. In the end, as long as Spotify was running and the iPod connected, I could just copy files directly into my ~/Music/ folder and Spotify would sync them onto the iPod immediately.

Superb, job done! (I didn’t try syncing any pictures)


Thoughts on the WDTV Live Streaming Multimedia Player

A couple of weeks ago I had some Amazon credit to use and I picked up a Western Digital TV Live. I’ve been using it on and off since then and figured I’d jot down some thoughts.

Looks

Well how does it look? It’s small for starters – smaller than a double-CD case, if you can remember those, and around an inch deep. Probably a little larger than the Cyclone players, although I don’t have any of those to compare with. It’s also very light indeed – not having a hard disk or power supply built in means the player itself can’t contain much more than a motherboard. I imagine the heaviest component is probably a power regulator heatsink or the case itself. It doesn’t sound like it has any fans in it either, which means there’s no audible running noise – I’ve got wall-wart power bricks which make more running noise than this unit.

Mounting is performed using a couple of recesses on the back. I put a single screw into the VESA mount on the back of the kitchen TV and hung the WDTV from that. The infrared receiver seems pretty receptive even sitting just behind the top of the TV facing upwards, and the heaviest thing to worry about is the HDMI or component AV cable – not a big deal at all.

Interface

The on-screen interface is pleasant and usable once you work your way around the icons and menus. The main screens – Music/Video/Services/Settings – are easy enough, but the functionality of the coloured menus isn’t too clear until you’ve either played around with them enough or read the manual (haha). Associating to wifi is a bit of a pain if you have a long WPA key, as the soft keyboard isn’t too great. I did wonder if it’s possible to attach a USB keyboard just to enter passwords etc., but I didn’t try that out.

Connecting to NFS and SMB/CIFS shared drives is relatively easy. It helps if the shares are already configured to allow guest access, or have a dedicated account for media players, for example. The WDTV Live really wants read-write access to any shares you’re going to use permanently so it can generate its own indices. I like navigating folders and files rather than special device-specific libraries, so I’m not particularly keen on this, but if it improves the multimedia experience so be it. I’ve enough multimedia devices in the house now, each with its own method of indexing, that remembering which index folders from device A need to be ignored by device B is becoming a bit of a nuisance. I haven’t had more than the usual set of problems sending remote audio to the WDTV Live from a bunch of different Android devices, or using it as a Media Renderer from the DiskStation Audio Station app.

The remote control feels solid, with positive button actions and a responsive receiver. It’s laid out logically I guess, by which I mean it’s laid out in roughly the same way as most other video & multimedia remote controls I’ve used.

Firmware Updates

So normally I expect to buy some sort of gadget like this, use it for a couple of months, find a handful of bugs and never receive any firmware updates for it ever again. However I’ve been pleasantly surprised. In the two weeks I’ve had the WDTV I’ve had two firmware updates: one during the initial installation and the most recent in the last couple of days to address, amongst other things, slow frontend performance when background tasks are running (read “multimedia indexing on network shares” here). I briefly had a scan around the web to see if there was an XBMC port and there didn’t appear to be, although there were some requests. I haven’t looked to see what CPU the WDTV has inside, but it’s probably a low-power ARM or Broadcom SoC or similar, so it would take some effort to port XBMC to (from memory I seem to recall there is an ARM port in the works though). The regular firmware is downloadable and hackable, however, and there’s at least one unofficial version around.

Performance

Video playback has been smooth on everything I’ve tried. The videos I’ve played back have been in a variety of formats, container formats and resolutions, all streamed over 802.11g wifi or ethernet. I didn’t have any trouble with either type of networking, so I haven’t checked whether the wired port is 100Mbps or 1GbE. I haven’t tried USB playback, and there’s no SD card slot, which you might have expected.

Audio playback is smooth although the interface took a little getting used to. I’ve been used to the XBMC and Synology DSAudio style of Queue/Play but this device always seems to queue+play which is actually what you want a lot of the time. I don’t have a digital audio receiver so I haven’t tried the SPDIF out.

Picture playback is acceptable but I found the transitions pretty jumpy, at least with 12 and 14 megapixel images over wifi.

Conclusions

Overall I’m pretty happy with this device. It’s cheap, small, quiet and unobtrusive but packs a fair punch in terms of features. My biggest gripe is that it’s really slow doing its indexing. I thought the reason might have been that it was running over wifi, but even after attaching it to a wired network it has taken three solid days to scan our family snaps and home videos (a mix of still-camera video captures, miniDV transfers and HD camcorder footage). It doesn’t give you an idea of how far it’s progressed or how much is left to go, so the only option seems to be to leave it and let it run. I did also have an initial problem where the WDTV didn’t detect it had HDMI plugged in, preferring to use the composite video out. Unscientifically, I reversed the cable at the same time as I updated the firmware, so I don’t know quite what fixed it, but it seems to have been fine since.

If I had to give an overall score for the WDTV Live, I’d probably say somewhere around 8/10.


Technostalgia

BBC Micro

Ahhhhh, Technostalgia. This evening I pulled out a box from the attic. It contained an instance of the first computer I ever used. A trusty BBC B+ Micro and a whole pile of mods to go with it. What a fabulous piece of kit. Robust workhorse, Econet local-area-networking built-in (but no modem, how forward-thinking!), and a plethora of expansion ports. My admiration of this hardware is difficult to quantify but I wasted years of my life learning how to hack about with it, both hardware and software.

The BBC Micro taught me in- and out- of the classroom. My primary school had one in each classroom and, though those might have been the ‘A’ or ‘B’ models, I distinctly remember one BBC Master somewhere in the school. Those weren’t networked but I remember spraining a thumb in the fourth year of primary school and being off sports for a few weeks. That’s when things really started happening. I taught myself procedural programming using LOGO. I was 10 – a late starter compared to some. I remember one open-day the school borrowed (or dusted off) a turtle

BBC Buggy (Turtle)

Brilliant fun, drawing ridiculous spirograph-style patterns on vast sheets of paper.

When I moved up to secondary school my eyes were opened properly. The computer lab was pretty good too. Networked computers. Fancy that! A network printer and a network fileserver the size of a… not sure what to compare it with – it was a pretty unique form-factor – about a metre long, 3/4 metre wide and about 20cm deep from memory (but I was small back then). Weighed a tonne. A couple of 10- or 20MB Winchesters in it from what I recall. I still have the master key for it somewhere! My school was in Cambridge and had a couple of part-time IT teacher/administrators who seemed to be on loan from SJ Research. Our school was very lucky in that regard – we were used as a test-bed for a bunch of network things from SJ Research, as far as I know a relative of Acorn. Fantastic kit only occasionally let down by the single, core network cable slung overhead between two buildings.

My first experience of Email was using the BBC. We had an internal mail system *POST which was retired after a while, roughly when ARBS left the school I think. I wrote my own MTA back then too, but in BASIC – I must have been about 15 at the time. For internet mail the school had signed up to use something called Interspan which I later realised must have been some sort of bridge to Fidonet or similar.

Teletext Adapter

We even had a networked teletext server which, when working, downloaded teletext pages to the LAN and was able to serve them to anyone who requested them. The OWUKWW – One-way-UK-wide-web! The Music department had a Music 5000 Synth which ran a language called Ample. Goodness knows how many times we played Axel-F on that. Software/computer-programmable keyboard synth – amazing.

Around the same time I started coding in 6502 and wrote some blisteringly fast conversions of simple games I’d earlier written in BASIC. I used to spend days drawing out custom characters on 8×8 squared exercise books. I probably still have them somewhere, in another box in the attic.

6502 coprocessor

Up until this point I’d been without a computer at home. My parents invested in our first home computer. The Atari ST. GEM was quite a leap from the BBC but I’d seen similar things using (I think) the additional co-processors – either the Z80- or the 6502 co-pro allowed you to run a sort of GEM desktop on the Beeb.

My memory is a bit hazy because then the school started throwing out the BBCs and bringing in the first Acorn Archimedes machines. Things of beauty! White, elegant, fast, hot, with a (still!) underappreciated operating system, high colour graphics, decent built-in audio and all sorts of other goodies. We had a Meteosat receiver hooked up to one in the geography department, pulling down WEFAX transmissions. I *still* haven’t got around to doing that at home, and I *still* want to!

Acorn A3000 Publicity Photo
Atari STE Turbo Pack

The ST failed pretty quickly and was replaced under warranty with an STE. Oh the horror – it was already incompatible with several games, but it had a Blitter chip ready to compete with those bloody Amiga zealots. Oh Babylon 5 was rendered on an Amiga. Sure, sure. But how many thousands of hit records had been written using Cubase or Steinberg on the Atari? MIDI – there was a thing. Most people now know MIDI as those annoying, never-quite-sounding-right music files which autoplay, unwarranted, on web pages where you can’t find the ‘mute’ button. Even that view is pretty dated.

Back then MIDI was a revolution. You could even network more than one Atari using it, as well as all your instruments of course. The STE was gradually treated to its fair share of upgrades – 4MB of RAM and a 100MB (SCSI, I think) hard disk; a “StereoBlaster” cartridge even gave it DSP capabilities for sampling. Awesome. I’m surprised it didn’t burn out from all the games my brothers and I played. I do remember wrecking *many* joysticks.

Like so many others I learned more assembler, 68000 this time, as I’d done with the BBC: by typing out pages and pages of code from books and magazines, spending weeks trying to find the bugs I’d introduced, checking and re-checking code until deciding the book had typos. GFA Basic was our workhorse though. My father had also started programming in GFA, and carried on until about 10 years ago when the Atari was retired.

Then University. First term, first few weeks of first term. I blew my entire student grant, £1400 back then, on my first PC. Pentium 75, 8MB RAM, a 1GB disk and, very important back then, a CD-ROM drive. A Multimedia PC!
It came with Windows for Workgroups 3.11, but after about 6 weeks of work it was dual-booting with my first Linux install: Slackware.

That one process, installing Slackware Linux with only one book “Que: Introduction to UNIX” probably taught me more about the practicalities of modern operating systems than my entire 3-year BSc in Computer Science (though to be fair, almost no theory of course). I remember shuttling hundreds of floppy disks between my room in halls and the department and/or university computer centre. I also remember the roughly 5% corruption rate and having to figure out the differences between my lack of understanding and buggered files. To be perfectly honest things haven’t changed a huge amount since then. It’s still a daily battle between understanding and buggered files. At least packaging has improved (apt; rpm remains a backwards step but that’s another story) but basically everything’s grown faster. At least these days the urge to stencil-spray-paint my PC case is weaker.

So – how many computers have helped me learn my trade? Well since about 1992 there have been five of significant import. The BBC Micro; the Acorn Archimedes A3000; the Atari ST(E); the Pentium 75 and my first Apple Mac G4 powerbook. And I salute all of them. If only computers today were designed and built with such love and craft. *sniff*.

Required Viewing:

  • Micro Men
  • The Pirates of Silicon Valley

Exa-, Peta-, Tera-scale Informatics: Are *YOU* in the cloud yet?

(Image: http://www.flickr.com/photos/pagedooley/2511369048/)

One of the aspects of my job over the last few years, both at Sanger and now at Oxford Nanopore Technologies, has been the management of tera- (verging on peta-) scale data on a daily basis.

Various methods of handling filesystems this large have been around for a while now and I won’t go into them here. Building these filesystems is actually fairly straightforward as most of them are implemented as regular, repeatable units – great for horizontal scale-out.

No, what makes this a difficult problem isn’t the sheer volume of data, it’s the amount of churn. Churn can be defined as the rate at which new files are added and old files are removed.

To illustrate: when I left Sanger, if memory serves, we were generally recording around a terabyte of new data a day. The staging area there was around 0.5 petabytes (using the Lustre filesystem) but didn’t balance correctly across the many disks, which meant we had to keep the utilised space below around 90% for fear of filling up an individual storage unit (and hitting unexpected errors). OK, so that’s 450TB usable. That left 45 days of storage – one and a half months, assuming no slack.
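
For what it’s worth, the same back-of-the-envelope sum in a few lines of Python (the figures are the rounded ones above, not exact numbers):

# Rough headroom sums - the figures are the approximate ones quoted above.
capacity_tb = 500        # ~0.5 PB staging area
usable_fraction = 0.9    # keep below ~90% to avoid filling an individual unit
daily_ingest_tb = 1.0    # ~1 TB of new data per day

usable_tb = capacity_tb * usable_fraction        # 450 TB
days_of_headroom = usable_tb / daily_ingest_tb   # 45 days, assuming no slack
print("%.0f TB usable, %.0f days of headroom" % (usable_tb, days_of_headroom))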

Fair enough. Sort of. Collect the data onto the staging area, analyse it there and shift it off. Well, that’s easier said than done – you can shift it off onto slower, cheaper storage, but that’s generally archival space so ideally you only keep raw data there. If the raw data are too big then you keep the primary analysis and ditch the raw. But there are problems with that:

  • Lots of clever people want to squeeze as much interesting stuff out of the raw data as possible using new algorithms.
  • They also keep finding things wrong with the primary analyses and so want to go back and reanalyse.
  • Added to that, there are often problems with the primary analysis pipeline (bleeding-edge software bugs etc.).
  • And that’s not mentioning the fact that nobody ever wants to delete anything.

As there’s little or no slack in the system, very often people are too busy to look at their own data as soon as it’s analysed, so it might sit there broken for a week or four. What happens then is a scrum for compute resources so they can analyse everything before the remaining two weeks of staging storage is used up. Then, even if problems are found, it can be too late to go back and reanalyse because there’s a shortage of space for new runs – and stopping the instruments because you’re out of space is a definite no-no!

What the heck? Organisationally this isn’t cool at all. Situations like this are only going to worsen! The technologies are improving all the time – run-times are increasing, read-lengths are increasing, base-quality is increasing, analysis is becoming better and more instruments are becoming available to more people who are using them for more things. That’s a many, many-fold increase in storage requirements.

So how to fix it? Well, I can think of at least one pretty good way. Don’t invest in on-site long-term staging or scratch storage. If you’re worried, by all means sort out an awesome backup system, but keep it nearline or offline on a decent tape archive or similar and absolutely do not allow user access. Instead of long-term staging storage, buy your company the fattest Internet pipe it can handle. Invest in connectivity, then simply invest in cloud storage. There are enough providers out there now to make this a competitive and interesting marketplace, with opportunities for economies of scale.

What does this give you? Well, many benefits – here are a few:

  • virtually unlimited storage
  • only pay for what you use
  • accountable costs – know exactly how much each project needs to invest
  • managed by storage experts
  • flexible computing attached to storage on-demand
  • no additional power overheads
  • no additional space overheads

Most of those I more-or-less take for granted these days. The one I find interesting at the moment is the costing issue. It can be pretty hard to hold one centralised storage area accountable for different groups – they’ll often pitch in for a proportion of the whole based on their estimated use compared to everyone else. With accountable storage offered by the cloud, each group can manage and pay for its own space. The costs are transparent to them and the responsibility has been delegated away from central management. I think that’s an extremely attractive prospect!

The biggest argument I hear against cloud storage & computing is that your top-secret, private data is in someone else’s hands. Aside from my general dislike of secret data, I still don’t believe this is a good argument these days. There are enough methods for handling encryption and private networking that this pretty much becomes a non-issue. Encrypt the data on-site, store the keys in your own internal database, ship the data to the cloud, and when you need to run analysis fetch the appropriate keys over an encrypted link, decode the data on demand, re-encrypt the results and ship them back. Sure, the encryption overheads add expense to the operation, but I think the costs are far outweighed by the benefits.
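
As a sketch of that workflow, here’s roughly what it could look like in Python, using the cryptography package’s Fernet recipe for the symmetric part – the “cloud” directory and key_store below are stand-ins for whichever provider and key database you actually use:

# Sketch of the encrypt-locally / ship-to-cloud / decrypt-on-demand idea.
# Uses the 'cryptography' package's Fernet recipe; the "cloud" is a local
# directory standing in for your provider's object store, and key_store
# stands in for the on-site key database.
import os

from cryptography.fernet import Fernet

CLOUD = "/tmp/fake-cloud"   # stand-in for remote object storage
key_store = {}              # stand-in for the internal key database

def ship(name, data):
    key = Fernet.generate_key()
    key_store[name] = key                       # the key never leaves the site
    os.makedirs(CLOUD, exist_ok=True)
    with open(os.path.join(CLOUD, name), "wb") as fh:
        fh.write(Fernet(key).encrypt(data))     # only ciphertext goes over the wire

def fetch(name):
    with open(os.path.join(CLOUD, name), "rb") as fh:
        return Fernet(key_store[name]).decrypt(fh.read())

if __name__ == "__main__":
    ship("run42.tar", b"raw instrument data")
    assert fetch("run42.tar") == b"raw instrument data"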

Infrared Pen MkI

So, this evening, not wanting to spend more time on the computer (having been on it all day for day 2 of DB’s Rails course) I spent my time honing my long-unused soldering skills and constructing the first revision of my infrared marker pen for the JCL-special Wiimote Whiteboard.

The raw materials
Close-up of the LEDs I’m removing
The finished article
Close-up of the switch detail
Activated under the IR-sensitive digital camera

I must say it’s turned out ok. I didn’t have any spare small switches so went for a bit of wire with enough springiness in it. On the opposite side of the makeshift switch is a retaining screw for holding the batteries in. I’m using two old AAA batteries (actually running about 2.4V according to the meter) and no resistor in series. The LED hasn’t burnt out yet!
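
For the record, the sums for a proper series resistor look something like this – the forward voltage and target current below are assumptions for a typical 5mm IR emitter, not figures from this particular part:

# Series-resistor sums for the IR LED - the forward voltage and target current
# are assumptions for a typical 5mm IR emitter; check the datasheet of the
# part you actually recover.
supply_v = 3.0    # two fresh AAA cells
forward_v = 1.3   # typical IR LED forward voltage
target_i = 0.05   # 50 mA continuous - a common conservative rating

r = (supply_v - forward_v) / target_i
print("series resistor: about %.0f ohms, dissipating %.0f mW" % (r, target_i ** 2 * r * 1000))
# ~34 ohms, i.e. a standard 33 or 39 ohm part. With two tired cells at ~2.4 V
# there's far less excess voltage to drop and the cells' internal resistance
# soaks up most of it, which is probably why the bare LED has survived so far.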

To stop the pen switching on when not in use I slip a bit of electrical tape between the contacts. Obviously you can’t tell when it’s on unless you add another, perhaps miniature, visible indicator LED.

It all fits together quite nicely though the retaining screw is too close for the batteries and has forced the back end out a bit – that’s easy to fix.

As I’m of course after multitouch I’ll be building the MkII pen soon with the other recovered LED!