That’s not my weather station…! Exploring 433MHz

Since I first played around with an ADC on the BBC Micro in the early ’90s, I’ve always had a bit of a thing for data logging of one sort or another – when building data visualisations or just playing around with datasets it’s usually more fun working with data you’ve collected yourself.

So a few years ago I bought a little weather station to stick up in the garden, mostly for my wife, who’s the gardener, to keep an eye on the temperature, wind, humidity, etc. It has a remote sensor array in the garden running off a few AA batteries and transmitting wirelessly to a base station, with a display, which we keep in the kitchen. The base station also displays indoor temperature & pressure.

I discovered, more recently, that the sensor array transmits its data on 433MHz, which is a license-free band for use by low-power devices. At around the same time I also discovered the cheap RTL-SDR (a repurposed DVB-T USB stick) and eventually found my way over to the very convenient rtl_433 project.

Eventually I wanted to build a datalogger for the weather station. Rather than periodically plugging the base station into something to offload the data it already captures, or leaving something ugly plugged into the base station in the kitchen, I figured I’d configure a spare Raspberry Pi with rtl_433 and run it somewhere out of the way, so I duly went ahead and did that. It works really well, and I’ve added a basic web UI which mimics the original base station display and combines it with data from elsewhere (like moon phases). My intention is eventually to fold in all sorts of other stuff, like APT weather satellite imagery and maybe even sunspot activity, for my other radio-based interests.
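For anyone wanting to replicate the capture side, the heart of it is a single rtl_433 invocation. Something like the following sketch should be close – the date-stamped path is just how my logger happens to arrange its files, not anything rtl_433 mandates:

# tune to rtl_433's 433.92MHz default and append one JSON object per
# decoded packet to a file named for today's date
rtl_433 -F json >> "/data/wx/$(date +%F).json"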

Even though the capture has been running permanently for at least a year now, I’ve never really gone back to look at the data being logged – which was one of my original plans (temperature trend plots). Having a poke around the data this morning reminded me that there are actually lots of other things broadcasting on the same frequency, and I wanted to share them here.

My logger writes JSON files, one per date, which makes it quite easy to munge the data with jq. The folder looks something like this:

[Screenshot: weather station datalogger folder listing]

The contents of the files are nicely formatted and mostly look like this:

[Screenshot: weather station logged data]

Meaning I can batter my little Raspberry Pi and run a bit of jq over all the files:

cat 2020*json | \
jq --slurp 'group_by(.model)|map({model:.[0].model,count:length})'

That is to say: dump all of the 2020 JSON files, slurp them all into a big array in jq, group them into separate arrays by the “model” field, then transform those multiple arrays into just the name from the first element and a count of how many elements were in each array. Neat!
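If the group_by/map dance looks opaque, a toy example makes it clearer (the three records here are made up):

echo '{"model":"Ford"} {"model":"Renault"} {"model":"Ford"}' | \
jq --slurp 'group_by(.model)|map({model:.[0].model,count:length})'
# => [{"model":"Ford","count":2},{"model":"Renault","count":1}] (pretty-printed)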

My poor little Pi didn’t like it very much. Each of those files has up to about 3,000 records in it, and slurping the whole lot into memory leads to sad times.

OK, so running one of the files alone ends up with a clean result:

pi@wx:/data/wx $ jq --slurp 'group_by(.model)|map({model:.[0].model,count:length})' 2020-06-05.json
[
 {
  "model": "Ambient Weather F007TH Thermo-Hygrometer",
  "count": 1
 },
 {
  "model": "Citroen",
  "count": 1
 },
 {
  "model": "Elro-DB286A",
  "count": 2
 },
 {
  "model": "Fine Offset Electronics WH1080/WH3080 Weather Station",
  "count": 1956
 },
 {
  "model": "Ford",
  "count": 3
 },
 {
  "model": "Oregon Scientific SL109H",
  "count": 1
 },
 {
  "model": "Renault",
  "count": 3
 },
 {
  "model": "Schrader Electronics EG53MA4",
  "count": 1
 },
 {
  "model": "Smoke detector GS 558",
  "count": 2
 }
]

So, mostly weather station data from (presumably!) my WH1080/WH3080 sensor array. The Oregon Scientific SL109H also looks like a weather station – I didn’t think my base station transmitted indoor temperatures, but I could be mistaken – I’ll have to have a look. Someone else nearby is also running an F007TH Thermo-Hygrometer doing something similar. Citroen, Ford, Renault and Schrader are all tyre pressure sensors belonging to neighbours and/or passing traffic. The Elro-DB286A is a neighbour’s wireless doorbell – that could be fun to spoof – and the GS 558 is obviously a smoke detector, which would be a lot less fun to spoof.

So, I can build tallies for each dated file like so:

for i in 2020*json; do
  jq --slurp 'group_by(.model)|map({model:.[0].model,count:length})|.[]' "$i"
done > /tmp/2020-tallies.json

Then sum the tallies like so:

cat /tmp/2020-tallies.json | \
jq -s 'group_by(.model)|map({model:.[0].model,sum:map(.count)|add})'
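For what it’s worth, a jq new enough to have inputs can also do the whole job in one streaming pass, with no slurping and no intermediate tally file – a sketch, untested on the Pi:

cat 2020*json | \
jq -n 'reduce inputs as $r ({}; .[$r.model] += 1)
       | to_entries | map({model:.key,sum:.value})'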

The data includes a few more devices now, as one might expect. The tally looks like this (alphabetical rather than by count):

[
 {
  "model": "Acurite 609TXC Sensor",
  "sum": 1
 },
 {
  "model": "Acurite 986 Sensor",
  "sum": 4
 },
 {
  "model": "Akhan 100F14 remote keyless entry",
  "sum": 9
 },
 {
  "model": "Ambient Weather F007TH Thermo-Hygrometer",
  "sum": 13
 },
 {
  "model": "Cardin S466",
  "sum": 12
 },
 {
  "model": "Citroen",
  "sum": 1450
 },
 {
  "model": "Efergy e2 CT",
  "sum": 35
 },
 {
  "model": "Elro-DB286A",
  "sum": 134
 },
 {
  "model": "Fine Offset Electronics WH1080/WH3080 Weather Station",
  "sum": 375066
 },
 {
  "model": "Ford",
  "sum": 4979
 },
 {
  "model": "Ford Car Remote",
  "sum": 31
 },
 {
  "model": "Generic Remote",
  "sum": 28
 },
 {
  "model": "Honda Remote",
  "sum": 55
 },
 {
  "model": "Interlogix",
  "sum": 26
 },
 {
  "model": "LaCrosse TX141-Bv2 sensor",
  "sum": 47
 },
 {
  "model": "Oregon Scientific SL109H",
  "sum": 229
 },
 {
  "model": "Renault",
  "sum": 2334
 },
 {
  "model": "Schrader",
  "sum": 1566
 },
 {
  "model": "Schrader Electronics EG53MA4",
  "sum": 435
 },
 {
  "model": "Smoke detector GS 558",
  "sum": 155
 },
 {
  "model": "Springfield Temperature & Moisture",
  "sum": 2
 },
 {
  "model": "Thermopro TP11 Thermometer",
  "sum": 1
 },
 {
  "model": "Toyota",
  "sum": 474
 },
 {
  "model": "Waveman Switch Transmitter",
  "sum": 2
 }
]

Or like this as CSV:

pi@wx:/data/wx $ cat /tmp/2020-tallies.json | jq -rs '.|group_by(.model)|map({model:.[0].model,sum:map(.count)|add})|.[]|[.model,.sum]|@csv'
"Acurite 609TXC Sensor",1
"Acurite 986 Sensor",4
"Akhan 100F14 remote keyless entry",9
"Ambient Weather F007TH Thermo-Hygrometer",13
"Cardin S466",12
"Citroen",1450
"Efergy e2 CT",35
"Elro-DB286A",134
"Fine Offset Electronics WH1080/WH3080 Weather Station",375066
"Ford",4979
"Ford Car Remote",31
"Generic Remote",28
"Honda Remote",55
"Interlogix",26
"LaCrosse TX141-Bv2 sensor",47
"Oregon Scientific SL109H",229
"Renault",2334
"Schrader",1566
"Schrader Electronics EG53MA4",435
"Smoke detector GS 558",155
"Springfield Temperature & Moisture",2
"Thermopro TP11 Thermometer",1
"Toyota",474
"Waveman Switch Transmitter",2

Removing my weather station from the set, as it dwarfs everything else, the results look like this:

[Chart: rtl_433 device-type frequency in a rural neighbourhood]

So aside from Ford having the most cars with tyre pressure monitors, it looks like there are a few other interesting devices to explore. Those car remotes don’t feel very secure to me, that’s for sure.

I’ve only just scratched the surface here, so if you’ve found anything interesting yourself with rtl_433, or want me to dig a bit deeper into some of the data I’ve captured here please let me know in the comments.

Signing MacOSX apps with Linux

Do you, like me, develop desktop applications for MacOSX? Do you, like me, do it on Linux because it makes for a much cheaper and easier-to-manage GitLab CI/CD build farm? Do you still sign your apps using a MacOSX machine, or worse (yes, like me), not sign them at all, leaving your users with ugly “unidentified developer” popups?

With the impending trustpocalypse next month, a lot of third-party (non-app-store) apps for MacOSX are going to start having deeper trust issues than they’ve had previously, no doubt meaning more and uglier popups – or worse, apps not being able to run at all.

I suspect this trust-tightening, whilst arguably a good thing in the war against malware, will adversely affect a huge number of open-source Mac applications where the developers wish to provide Mac support for their users but may not wish to pay the annual Apple Developer tax (even though it’s still relatively light), or may not even own any Apple hardware (though who knows how they do their integration testing?). In particular this is likely to affect very many applications built with Electron or NWJS – a group into which this post falls.

Well, this week I’ve been looking into this issue for one of the apps I look after, and I’m pleased to say it’s at a stage where I’m comfortable writing something about it. The limitation is that you don’t sidestep paying the Apple Developer tax – you do still need valid certs with the Apple trust root – but you can sidestep paying for more Apple hardware than you need, i.e. nothing is needed in the build farm.

First I should say all of the directions I used came from a 2016 article, here. Thanks very much to Allin Cottrell.

Below is the (slightly-edited) script now forming part of the build pipeline for my app. Hopefully the comments make it fairly self-explanatory. Before you say so, yes I’ve been lazy and haven’t parameterised directory and package names yet.

#!/bin/bash

#########
# This is a nwjs (node) project so fish the version out of package.json
#
VERSION=$(jq -r .version package.json)

#########
# set up the private key for signing, if present
#
rm -f key.pem
if [ "$APPLE_PRIVATE_KEY" != "" ]; then
    echo "$APPLE_PRIVATE_KEY" > key.pem
fi

#########
# temporary build folder/s for package construction
#
rm -rf build
mkdir build && cd build
mkdir -p flat/base.pkg flat/Resources/en.lproj
mkdir -p root/Applications;

#########
# stage the unsigned application into the build folder
#
cp -pR "../dist/EPI2MEAgent/osx64/EPI2MEAgent.app" root/Applications/

#########
# fix a permissions issue which only manifests after the following cpio stage
# nw.app seems to be built owner-read-only - no good when packaging as root
#
chmod go+r "root/Applications/EPI2MEAgent.app/Contents/Resources/app.nw"

#########
# pack the application payload
#
( cd root && find . | cpio -o --format odc --owner 0:80 | gzip -c ) > flat/base.pkg/Payload

#########
# calculate a few attributes
#
files=$(find root | wc -l)
bytes=$(du -b -s root | awk '{print $1}')
kbytes=$(( $bytes / 1000 ))

#########
# template the Installer PackageInfo
#
cat <<EOT > flat/base.pkg/PackageInfo
<pkg-info format-version="2" identifier="com.metrichor.agent.base.pkg" version="$VERSION" install-location="/" auth="root">
  <payload installKBytes="$kbytes" numberOfFiles="$files"/>
  <scripts>
    <postinstall file="./postinstall"/>
  </scripts>
  <bundle-version>
    <bundle id="com.metrichor.agent" CFBundleIdentifier="com.nw-builder.epimeagent" path="./Applications/EPI2MEAgent.app" CFBundleVersion="$VERSION"/>
  </bundle-version>
</pkg-info>
EOT

#########
# configure the optional post-install script with a popup dialog
#
mkdir -p scripts
cat <<EOT > scripts/postinstall
#!/bin/bash

osascript -e 'tell app "Finder" to activate'
osascript -e 'tell app "Finder" to display dialog "To get the most of EPI2ME please also explore the Nanopore Community https://community.nanoporetech.com/ ."'
EOT

chmod +x scripts/postinstall

#########
# pack the postinstall payload
#
( cd scripts && find . | cpio -o --format odc --owner 0:80 | gzip -c ) > flat/base.pkg/Scripts
mkbom -u 0 -g 80 root flat/base.pkg/Bom

#########
# Template the flat-package Distribution file together with a MacOS version check
#
cat <<EOT > flat/Distribution
<?xml version="1.0" encoding="utf-8"?>
<installer-script minSpecVersion="1.000000" authoringTool="com.apple.PackageMaker" authoringToolVersion="3.0.3" authoringToolBuild="174">
    <title>EPI2MEAgent $VERSION</title>
    <options customize="never" allow-external-scripts="no"/>
    <domains enable_anywhere="true"/>
    <installation-check script="pm_install_check();"/>
    <script>
function pm_install_check() {
  if(!(system.compareVersions(system.version.ProductVersion,'10.12') >= 0)) {
    my.result.title = 'Failure';
    my.result.message = 'You need at least Mac OS X 10.12 to install EPI2MEAgent.';
    my.result.type = 'Fatal';
    return false;
  }
  return true;
}
    </script>
    <choices-outline>
        <line choice="choice1"/>
    </choices-outline>
    <choice id="choice1" title="base">
        <pkg-ref id="com.metrichor.agent.base.pkg"/>
    </choice>
    <pkg-ref id="com.metrichor.agent.base.pkg" installKBytes="$kbytes" version="$VERSION" auth="Root">#base.pkg</pkg-ref>
</installer-script>
EOT

#########
# pack the Installer
#
( cd flat && xar --compression none -cf "../EPI2MEAgent $VERSION Installer.pkg" * )

#########
# check if we have a key for signing
#
if [ ! -f ../key.pem ]; then
    echo "not signing"
    exit
fi

#########
# calculate the size in bytes of a signature made with our key,
# by signing empty input and counting the output
#
: | openssl dgst -sign ../key.pem -binary | wc -c > siglen.txt

#########
# embed the cert chain in the Installer package, reserve signature space
# and emit the digest to be signed
#
xar --sign -f "EPI2MEAgent $VERSION Installer.pkg" \
    --digestinfo-to-sign digestinfo.dat --sig-size $(cat siglen.txt) \
    --cert-loc ../dist/tools/mac/certs/cert00 --cert-loc ../dist/tools/mac/certs/cert01 --cert-loc ../dist/tools/mac/certs/cert02

#########
# construct the signature
#
openssl rsautl -sign -inkey ../key.pem -in digestinfo.dat \
        -out signature.dat

#########
# add the signature to the installer
#
xar --inject-sig signature.dat -f "EPI2MEAgent $VERSION Installer.pkg"

#########
# clean up
#
rm -f signature.dat digestinfo.dat siglen.txt key.pem

With all that you still need a few assets. I built and published (internally) corresponding debs for xar v1.6.1 and bomutils 0.2. You might want to compile & install those from source – they’re pretty straightforward builds.
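For reference, the from-source builds go something like this – a sketch, where the Ubuntu package names and repository locations are my best guesses rather than gospel:

# build dependencies
apt-get install -y build-essential autoconf libxml2-dev libssl-dev zlib1g-dev

# bomutils provides the mkbom tool used above
git clone https://github.com/hogliux/bomutils
( cd bomutils && make && make install )

# xar provides the archiver and its --sign machinery
git clone https://github.com/mackyle/xar
( cd xar/xar && ./autogen.sh && ./configure && make && make install )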

Next, you need a signing identity. I used Xcode (Preferences => Accounts => Apple ID => Manage Certificates) to add a new Mac Installer Distribution certificate, then used that to sign my installer package once on MacOS in order to fish out the Apple cert chain (there are probably better ways to do this):

productsign --sign LJXXXXXX58 \
        build/EPI2MEAgent\ 2020.1.14\ Installer.pkg \
        EPI2MEAgent\ 2020.1.14\ Installer.pkg

Then fish out the certs:

xar -f EPI2MEAgent\ 2020.1.14\ Installer.pkg \
        --extract-certs certs
mac:~/agent rmp$ ls -l certs/
total 24
-rw-r--r--  1 rmp  Users  1494 15 Jan 12:06 cert00
-rw-r--r--  1 rmp  Users  1062 15 Jan 12:06 cert01
-rw-r--r--  1 rmp  Users  1215 15 Jan 12:06 cert02

Next use Keychain to export the .p12 private key for the “3rd Party Mac Developer Installer” key, then openssl it a bit to convert it to a PEM:

openssl pkcs12 -in certs.p12 -nodes | openssl rsa -out key.pem
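Before wiring the key into CI it’s worth checking that it actually matches the leaf certificate extracted earlier. Assuming cert00 is the DER-encoded leaf, the two moduli should agree:

# both digests should be identical
openssl rsa -noout -modulus -in key.pem | openssl md5
openssl x509 -inform der -noout -modulus -in certs/cert00 | openssl md5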

I set the contents of key.pem up as a GitLab CI/CD environment variable, APPLE_PRIVATE_KEY, so it’s never committed to the project source tree.

Once all that’s in place it should be possible to run the script (paths permitting – obviously yours will be different) and end up with a properly signed installer. Look for the closed padlock in the top-right of the installer window, and the fully validated chain of certificate trust.

In conclusion: the cross-platform nwjs application builds (Mac, Windows, Linux) all run using nw-builder on ubuntu:18.04, and the Mac signing (and the Windows signing, using osslsigncode – maybe more on that later) also runs on ubuntu:18.04, meaning one Docker image for the Linux-based GitLab CI/CD build farm. Nice!

Proxy testing with IP Namespaces and GitLab CI/CD

[Header image: CC-BY-NC https://www.flickr.com/photos/thomashawk/106559730]

At work, I have a CLI tool I’ve been working on. It talks to the web and is used by customers all over the planet, some of them on networks with tighter restrictions than my own. Often those customers have an HTTP proxy of some sort and that means the CLI application needs to negotiate with it differently than it would directly with a web server.

So I need to test it somehow with a proxy environment. Installing a proxy service like Squid doesn’t sound like too big a deal, but it needs to run in several configurations, at a very minimum these three:

  • no-proxy
  • authenticating HTTP proxy
  • non-authenticating HTTP proxy

I’m going to ignore HTTPS proxy for now as it’s not actually a common configuration for customers but I reckon it’s possible to do with mkcert or LetsEncrypt without too much work.

There are two other useful pieces of information to cover. Firstly, I use GitLab CI to run the CI/CD test jobs for the three proxy configurations in parallel. Secondly, and this is important, I must make sure that, once the test Squid proxy service is running, the web requests in the test only pass through the proxy and do not leak out of the GitLab runner. I can do this by using a really neat Linux feature called IP namespaces.

IP namespaces allow me to set up different network environments on the same machine, similar to IP subnets or AWS security groups. Then I can launch specific processes in those namespaces and network access from those processes will be limited by the configuration of the network namespace. That is to say, the Squid proxy can have full access but the test process can only talk to the proxy. Cool, right?
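If you’ve not played with them before, the basic lifecycle only takes a few commands to explore – a quick sketch (run as root):

ip netns add demo
ip netns exec demo ip addr    # nothing but a downed loopback interface
ip netns exec demo ping -c1 192.168.254.1 || echo "isolated: no routes yet"
ip netns del demo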

The GitLab CI/CD YAML looks like this (edited to protect the innocent):

stages:
  - integration

.integration_common: &integration_common |
  apt-get update
  apt-get install -y iproute2

.network_ns: &network_ns |
  ip netns add $namespace
  ip link add v-eth1 type veth peer name v-peer1
  ip link set v-peer1 netns $namespace
  ip addr add 192.168.254.1/30 dev v-eth1
  ip link set v-eth1 up
  ip netns exec $namespace ip addr add 192.168.254.2/30 dev v-peer1
  ip netns exec $namespace ip link set v-peer1 up
  ip netns exec $namespace ip link set lo up
  ip netns exec $namespace ip route add default via 192.168.254.1

noproxynoauth-cli:
  image: ubuntu:18.04
  stage: integration
  script:
    - *integration_common
    - test/end2end/cli

proxyauth-cli:
  image: ubuntu:18.04
  stage: integration
  script:
    - *integration_common
    - apt-get install -y squid apache2-utils
    - mkdir -p /etc/squid3
    - htpasswd -cb /etc/squid3/passwords testuser testpass
    - *network_ns
    - squid3 -f test/end2end/conf/squid.conf.auth && sleep 1 || tail -20 /var/log/syslog | grep squid
    - http_proxy=http://testuser:testpass@192.168.254.1:3128/ https_proxy=http://testuser:testpass@192.168.254.1:3128/ ip netns exec $namespace test/end2end/cli
    - ip netns del $namespace || true
  variables:
    namespace: proxyauth

proxynoauth-cli:
  image: ubuntu:18.04
  stage: integration
  script:
    - *integration_common
    - apt-get install -y squid
    - *network_ns
    - squid3 -f test/end2end/conf/squid.conf.noauth && sleep 1 || tail -20 /var/log/syslog | grep squid
    - http_proxy=http://192.168.254.1:3128/ https_proxy=http://192.168.254.1:3128/ ip netns exec $namespace test/end2end/cli
    - ip netns del $namespace || true
  variables:
    namespace: proxynoauth

So there are five blocks here – two common script blocks and three test jobs sharing one integration stage. The first common script block installs iproute2, which gives us the ip command.

The second script block is where the magic happens. It configures a virtual, routed subnet in the parameterised $namespace.

Following that we have the three test jobs corresponding to the three proxy (or not) configurations I listed earlier. Two of them install Squid, and one of those also creates a test user for authenticating with the proxy. They all run the test script, which in this case is test/end2end/cli. Modularised out like this, with the common network-namespace script, it gives a good deal of clarity to the test maintainer. I like it a lot.
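It’s also easy to convince yourself the containment works: run something from inside the namespace without the proxy variables and it should fail, because the namespace’s only route leads to a host which isn’t forwarding or NATing for it. A sketch, assuming curl is available in the image:

# direct access from inside the namespace should be blocked...
ip netns exec $namespace curl -s --max-time 5 http://example.com \
  && echo "LEAK: direct access worked" || echo "ok: blocked"

# ...while access via the Squid proxy should succeed
ip netns exec $namespace curl -s --max-time 5 \
  -x http://192.168.254.1:3128/ http://example.com > /dev/null \
  && echo "ok: proxied access works"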

So then the last remaining things are the respective Squid configurations, proxyauth and proxynoauth. There’s a little more junk in these than there needs to be, as they’re taken from the stock examples, but they look something like this:

visible_hostname proxynoauth
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128

and for authentication:

visible_hostname proxyauth
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager

auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid3/passwords
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED

http_access allow authenticated
http_access deny all
http_port 3128

And there you have it – network-restricted proxy testing with different proxy configurations. It’s the first time I’ve used ip netns without it being wrapped up in Docker, LXC, containerd or some other libvirt thing, but the feeling of power from my new-found network-god skills is quite something :)

Be aware that you might need to choose different subnet ranges if your regular LAN conflicts. Please let me know in the comments if you find this useful or if you had to modify things to work in your environment.

Remote Power Management using Arduino


2016-03-07 Update: Git Repo available

Recently I’ve been involved with building a hardware device consisting of a cluster of low-power PC servers. The boards chosen for this particular project aren’t enterprise- or embedded-style boards with specialist features like out-of-band (power) management (such as Dell’s iDRAC or Intel’s AMT), so I started thinking about how to approximate something similar.

It’s also a little reminiscent of STONITH (Shoot The Other Node In The Head), used for aspects of the Linux-HA (High Availability) services.

I dug around in a box of goodies and found a couple of handy parts:

  1. Arduino Duemilanove
  2. Seeedstudio Arduino Relay Shield v3

The relays are rated for switching up to 35V at 8A – easily handling the 19V @ 2A for the mini server boards I’m remote managing.

The other handy thing to notice is that the Arduino by its nature is serial-enabled, meaning you can control it very simply using a USB connection to the management system without needing any more shields or adapters.

Lastly it’s worth mentioning that the relays are effectively SPDT switches, so they have both normally-open and normally-closed connections. In my case this is useful, as most of the time I don’t want the relays to be energised – saving power and prolonging the life of the relay.

The example Arduino code below opens a serial port and collects characters in a string variable until a carriage-return (0x0D) before acting, accepting commands “on”, “off” and “reset”. When a command is completed, the code clears the command buffer and flips voltages on the digital pins controlling the relays. Works a treat – all I need to do now is splice the power cables for the cluster compute units and run them through the right connectors on the relay boards. With the draw the cluster nodes pull being well within the specs of the relays it might even be possible to happily run two nodes through each relay.

There’s no reason why this sort of thing couldn’t be used for many other purposes too – home automation or other types of remote management – and it could obviously be activated over ethernet, wifi or bluetooth instead of serial (which goes without saying for a relay board – duh!).

int MotorControl1 = 4;
int MotorControl2 = 5;
int MotorControl3 = 6;
int MotorControl4 = 7;
int incomingByte = 0; // for incoming serial data
String input = ""; // for command message

void action (String cmd) {
  if(cmd == "off") {
    digitalWrite(MotorControl1, HIGH); // NO1 + COM1
    digitalWrite(MotorControl2, HIGH); // NO2 + COM2
    digitalWrite(MotorControl3, HIGH); // NO3 + COM3
    digitalWrite(MotorControl4, HIGH); // NO4 + COM4
    return;
  }

  if(cmd == "on") {
    digitalWrite(MotorControl1, LOW); // NC1 + COM1
    digitalWrite(MotorControl2, LOW); // NC2 + COM2
    digitalWrite(MotorControl3, LOW); // NC3 + COM3
    digitalWrite(MotorControl4, LOW); // NC4 + COM4
    return;
  }

  if(cmd == "reset") {
    action("off");
    delay(1000);
    action("on");
    return;
  }

  Serial.println("unknown action");
}

// the setup routine runs once when you press reset:
void setup() {
  pinMode(MotorControl1, OUTPUT);
  pinMode(MotorControl2, OUTPUT);
  pinMode(MotorControl3, OUTPUT);
  pinMode(MotorControl4, OUTPUT);
  Serial.begin(9600); // opens serial port, sets data rate to 9600 bps
  Serial.println("relay controller v0.1 rmp@psyphi.net actions are on|off|reset");
  input = "";
} 

// the loop routine runs over and over again forever:
void loop() {
  if (Serial.available() > 0) {
    incomingByte = Serial.read();

    if(incomingByte == 0x0D) {
      Serial.println("action:" + input);
      action(input);
      input = "";
    } else {
      input.concat(char(incomingByte));
    }
  } else {
    delay(1000); // no need to go crazy
  }
}
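Driving the relays from the management host is then just a matter of writing to the serial device. A minimal sketch – the device path is whatever your Arduino enumerates as, often /dev/ttyUSB0 or /dev/ttyACM0:

PORT=/dev/ttyUSB0

# match the sketch: 9600 baud, raw mode, no local echo
stty -F "$PORT" 9600 raw -echo

# commands are terminated with a carriage return (0x0D)
printf 'reset\r' > "$PORT"

# optionally watch for the "action:reset" acknowledgement
timeout 2 cat "$PORT"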



Amazon Prime on Kodi for Slice

I’m lucky enough to have both a Raspberry Pi “Slice” media player and an Amazon Prime account, but Prime isn’t supported right out of the box. Here’s how I was able to set it up today.

Requirements:

  1. A Slice
  2. An Amazon Prime account

First, make sure your Slice is correctly networked. Configuration is under Setup => OpenElec Settings.

Next you need to download a third-party add-on repository for Kodi. Download the XLordKX Repo zip into a folder on the Slice. I did this from another computer and copied it into a network share served from the Slice.

Now we can install the add-on: Setup => Add-on manager => Install from zip file, then navigate to the file you downloaded and install it. Next: Setup => Get Add-ons => XLordKX Repo => Video Add-ons => Amazon Prime Instant Video => Install.

Now to configure Amazon Prime. Setup => Add-ons => Video Add-ons => Amazon Prime Instant Video.

I set mine to Website Version: UK and left everything else as defaults. Feed it your Amazon username & password and off you go.

The navigation is a little flakey which is a common Kodi/XBMC problem but the streaming seems fully functional – no problems on anything I’ve tried so far. I also see no reason why this wouldn’t work on raspbmc or openelec on a plain old Raspberry Pi. Happy streaming!


HT https://seo-michael.co.uk/tutorial-how-to-install-amazon-prime-instant-video-xbmc-kodi/ where I found instructions for Kodi in general.

Apple Watch Adventures

Recently we’ve been exploring our customers’ user journeys, mapping out their touchpoints and reevaluating how we engage in the user experience of everything we do, both digitally and in the physical world. Part of that requires the use of personas – model customers who in theory fulfil various different criteria in order to test the functionality and experiences of those digital touchpoints. I couldn’t help thinking about that and wondering which persona I might fit into in some nutjob’s head at Apple. Here are my first 12 hours’ experience with the much talked-about Apple Watch.

As a bit of background: I’ve used work-owned Mac laptops with OSX for 11 of the last 13 years, but I once made the mistake of spending my own money on an iPhone 3G – my first smartphone – which I hated more than I liked, and I swore never to buy another Apple device.

0900 Arrive at the office, coffee, email.

1000 Done responding to email for now. Time to look at what’s in the new box on my desk this morning. Ooh an Apple Watch. Great!

It’s heavy. Really heavy. The box is heavy, the plastic case is heavy, the magnetic charger is heavy. I think someone told Apple Heavy = Quality or something. I requested the smaller 38mm watch as my wrists are pretty thin and I didn’t want it to look ridiculous. The watch is small in width and height but it’s heavy. And fat – it looks like a toy of some sort. Heavy, clunky, ugly. The leather strap feels like it’s made of the same foam as my children’s play mats. The buckle is flabby and horrible. The face is already covered in fingerprints – I thought these things were supposed to be oleophobic.

At least it came charged though, mostly because my IT Support team wanted a laugh and took it out of the box to play with earlier.

1030 meetings until 1200.

1200 Turn it on. It asks me to choose English (UK) as my preferred language fifteen times for some unknown reason.

1210 Meetings until 1400.

1400 I know it’s an Apple device and they’re very well known for being “open”. Not. Is there any way in the known universe to make it pair with my S6 Edge?
Read some webpages.

1420 Anticipate pairing compatibility answer. Scrounge an iPhone 5S from my IT Support team.

1430 meetings until 1800

1800 IT Support team managed to locate previous iPhone owner to deactivate account locks & device security so it can be reused.

1820 catch a lift home

2010 get home, cold dinner.

2030 No it doesn’t pair, but found a video of a guy who managed to make it run OS 7. Neat, I wonder if it could run Android. Read stupid Mashable articles for a bit.

2045 Try and set up the iPhone. Needs Wifi. Try to type in my long WPA code using the soft keyboard. Three attempts before typing it right – keyboard is noticeably less responsive than the S6 as well as being much smaller and non-Swype (yeah yeah, non-security-compromised, haha).

2050 Past the wifi setup screen. Yes!

Won’t proceed without a SIM. Full Fiscal Shambles! I can register a Galaxy without a SIM. Why must I have one for an iPhone? What happens if I use a SIM from something else? Is it locked somehow?

2055 No idea. Let’s try. Extract the SIM from my old S3. It’s a mini SIM. Too big. Don’t really want to cut it down as it’s already cut down from a fullsize one and I won’t be able to put it back in my S3.

2058 I wonder if I have something other than my S6 that takes a micro SIM. Losing the will to live.

2100 Look for iPad everywhere in the house.

2110 Realise child has pinched iPad to play Clash of Clans and hidden it somewhere. Look for child.

2115 Location-aware child has left the house. Look for SIM extraction tool for S6. Find paperclip. Extract SIM.

2120 Insert SIM, complete iPhone setup.

2122 Complete watch setup. Manage to zoom in on the app that tells the time. Can’t unzoom it. Didn’t read the instructions two seconds ago about how to unzoom. Can’t figure out what the magic pinch-press-zoom-standonhead combination is. I guess I need to hold Apple-Meta-Cmd or something.

2123 Press all the buttons at once and repeatedly in various combinations. Discover scrollwheel strafes across the display. Wow that’s a really horrible interaction.

2130 Have no content on iPhone to drive Watch applications.

2140 Get bored. Throw it in the bin. What a PoC.

2150 Realise we’re supposed to be writing Metrichor apps for it. Fish it out of the bin ready to give to the developers. Should set the project back a couple of months.

Update 2015-06-26
How could I forget? There is one thing I like – the UK plug with the retractable pins – finally! Sorry Samsung, only retracting one out of three pins doesn’t cut it.
