Tag: storage
Bookmarks for November 6th through November 26th
These are my links for November 6th through November 26th:
- The Hawksmoor Inheritance – Schools' crypto challenges
- WNDW Downloads – Wireless Networking in the Developing World, eBook
- GiantShark –
- ROSALIND | Problems – codeeval for bioinformatics
- s3backer – FUSE-based single file backing store via Amazon S3 – Google Project Hosting – S3-backed block devices
Bookmarks for June 1st through November 3rd
These are my links for June 1st through November 3rd:
- Meraki –
- Cambridge Wood Works – Recycling and Reusing Waste Wood In Cambridge UK –
- web-sorrow – a versatile security scanner for the information disclosure and fingerprinting phases of pentesting. written in perl – Google Project Hosting –
- Shades | Software | Charcoal Design –
- Welcome – Fritzing –
- PLOTS Spectral Workbench: index –
- Cubify – Express Yourself in 3D – basic reprap with a bit more polish
- LPMT – Little Projection-Mapping Tool | "Projection-Mapping for the masses" –
- CircuitLab – online schematic editor & circuit simulator –
- Nava Whiteford – SGenomics Ltd –
- Scalable Flash Memory Array – Violin Memory –
- All The Cheat Sheets That A Web Developer Needs | Top Design Magazine – Web Design and Digital Content – via @jitsukerr. Missing bits of Perl, and Apache other than rewrite
- hashcat – advanced password recovery –
Bookmarks for June 28th through July 19th
These are my links for June 28th through July 19th:
- OpenSignalMaps – Cell Phone Tower and Signal Heat Maps –
- vodafone – THC Wiki –
- d3.js –
- iSCSI Enterprise Target – how did I miss this?
- Your PasswordCard – 63,461 printed so far! – via @zestuart
Bookmarks for October 14th through October 27th
These are my links for October 14th through October 27th:
- http://codebutler.github.com/firesheep/ –
- Volpin Props: Budget Build Mini Vacuum-Former –
- IRODS: Data Grids, Digital Libraries, Persistent Archives, and Real-time Data Systems – IRODS –
- INSEAD – Global Executive MBA Programmes – GEMBA at a glance –
- http://markup.io/ – Awesome. I've been looking for this for a looong time!
- Enterprise Samba: samba-enterprise –
Bookmarks for October 7th through October 13th
These are my links for October 7th through October 13th:
- Sector/Sphere: High Performance Distributed Data Storage and Processing –
- pdf417_encode | freshmeat.net –
- London Perl Workshop –
- pwauth – Project Hosting on Google Code – use with mod_authnz_external for pam authentication e.g. svn+https+dav+authnz_external+pwauth+pam+winbind+active directory
- mod-auth-external – Project Hosting on Google Code –
Bookmarks for March 9th through March 17th
These are my links for March 9th through March 17th:
- OpenCL Hello World Example –
- Introductory Tutorial to OpenCL™ –
- Mac Dev Center: OpenCL Programming Guide for Mac OS X: Basic Programming Sample –
- Mac Dev Center: OpenCL Programming Guide for Mac OS X: OpenCL on the Mac Platform –
- OpenInkpot – Replacement firmware for some ebook readers
- Quake-Catcher Network –
- wmarow’s disk & disk array calculator –
- UX London –
Exa-, Peta-, Tera-scale Informatics: Are *YOU* in the cloud yet?
One of the aspects of my job over the last few years, both at Sanger and now at Oxford Nanopore Technologies, has been the management of tera-, verging on peta-, scale data on a daily basis.
Various methods of handling filesystems this large have been around for a while now and I won’t go into them here. Building these filesystems is actually fairly straightforward as most of them are implemented as regular, repeatable units – great for horizontal scale-out.
No, what makes this a difficult problem isn’t the sheer volume of data, it’s the amount of churn. Churn can be defined as the rate at which new files are added and old files are removed.
To illustrate – when I left Sanger, if memory serves, we were generally recording around ten terabytes of new data a day. The staging area there was around 0.5 petabytes (using the Lustre filesystem) but didn’t balance correctly across its many disks. This meant we had to keep the utilised space below around 90% for fear of filling up an individual storage unit (and triggering unexpected errors). OK, so that’s 450TB usable. At ten terabytes a day that left 45 days of storage – one and a half months, assuming no slack.
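To make those back-of-the-envelope numbers explicit, here is a minimal sketch; the figures are the illustrative ones from above, not measured values:

```python
# Back-of-the-envelope staging headroom, using the illustrative figures above:
# ~0.5 PB of Lustre staging space, kept below ~90% full, ~10 TB of new data a day.

staging_capacity_tb = 500    # ~0.5 PB staging area
usable_fraction = 0.90       # stay below ~90% to avoid filling any one storage unit
ingest_tb_per_day = 10       # approximate daily churn of new data

usable_tb = staging_capacity_tb * usable_fraction
days_of_headroom = usable_tb / ingest_tb_per_day

print(f"Usable space: {usable_tb:.0f} TB")
print(f"Headroom    : {days_of_headroom:.0f} days (~{days_of_headroom / 30:.1f} months)")
```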
Fair enough. Sort of. Collect the data onto the staging area, analyse it there and shift it off. Well, that’s easier said than done – you can shift it off onto slower, cheaper storage, but that’s generally archival space so ideally you only keep raw data there. If the raw data are too big then you keep the primary analysis and ditch the raw. But there are a few problems with that:
- Lots of clever people want to squeeze as much interesting stuff out of the raw data as possible using new algorithms.
- They also keep finding things wrong with the primary analyses and so want to go back and reanalyse.
- Added to that there are often problems with the primary analysis pipeline (bleeding-edge software bugs etc.).
- That’s not mentioning the fact that nobody ever wants to delete anything.
As there’s little or no slack in the system, and people are very often too busy to look at their own data as soon as it’s analysed, a run might sit there, broken, for a week or four. What happens then is a scrum for compute resources so that everything can be analysed before the remaining two weeks of staging storage are used up. Even if problems are found at that point it can be too late to go back and reanalyse, because there’s a shortage of space for new runs – and stopping the instruments because you’re out of space is a definite no-no!
What the heck? Organisationally this isn’t cool at all. Situations like this are only going to worsen! The technologies are improving all the time – run-times are increasing, read-lengths are increasing, base-quality is increasing, analysis is becoming better and more instruments are becoming available to more people who are using them for more things. That’s a many, many-fold increase in storage requirements.
So how to fix it? Well, I can think of at least one pretty good way: don’t invest in on-site long-term staging or scratch storage. If you’re worried, by all means sort out an awesome backup system, but keep it nearline or offline on a decent tape archive or something, and absolutely do not allow user access. Instead of long-term staging storage, buy your company the fattest Internet pipe it can handle. Invest in connectivity, then simply invest in cloud storage. There are enough providers out there now to make this a competitive and interesting marketplace, with opportunities for economies of scale.
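To put a rough number on “fattest Internet pipe”, here is a quick sketch; the daily volume is the illustrative figure from earlier and the link speeds are just examples:

```python
# Sustained bandwidth needed to ship ~10 TB of new data to the cloud each day,
# ignoring protocol overheads and encryption (illustrative figures only).

daily_volume_tb = 10
seconds_per_day = 24 * 60 * 60

bits_per_day = daily_volume_tb * 1e12 * 8             # decimal terabytes -> bits
required_gbit_s = bits_per_day / seconds_per_day / 1e9
print(f"Sustained rate needed: {required_gbit_s:.2f} Gbit/s")

# Time to move one day's data over a couple of example links:
for name, gbit_s in [("1 Gbit/s", 1), ("10 Gbit/s", 10)]:
    hours = bits_per_day / (gbit_s * 1e9) / 3600
    print(f"{name:>9} link: {hours:.1f} hours per day of data")
```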
What does this give you? Well, many benefits – here are a few:
- virtually unlimited storage
- only pay for what you use
- accountable costs – know exactly how much each project needs to invest
- managed by storage experts
- flexible computing attached to storage on-demand
- no additional power overheads
- no additional space overheads
Most of those I more or less take for granted these days. The one I find interesting at the moment is the costing issue. It can be pretty hard to apportion the cost of one centralised storage area across different groups – they’ll often pitch in for a proportion of the whole based on their estimated use compared to everyone else. With accountable storage offered by the cloud, each group can manage and pay for its own space. The costs are transparent to them and the responsibility is delegated away from central management. I think that’s an extremely attractive prospect!
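As an illustration of that per-group accountability, here is a minimal sketch; the project names, usage figures and the price per GB-month are invented for the example rather than taken from any real tariff:

```python
# Illustrative pay-for-what-you-use charging per group.
# The price and usage figures below are made up for the example.

PRICE_PER_GB_MONTH = 0.03      # assumed object-storage price, USD per GB-month

usage_gb_months = {            # hypothetical projects and their monthly usage
    "cancer-genomes": 120_000,
    "pathogen-seq":    45_000,
    "method-dev":       8_000,
}

for project, gb_months in usage_gb_months.items():
    cost = gb_months * PRICE_PER_GB_MONTH
    print(f"{project:>15}: {gb_months:>7} GB-months -> ${cost:,.2f}")

total = sum(usage_gb_months.values()) * PRICE_PER_GB_MONTH
print(f"{'total':>15}: ${total:,.2f}")
```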
The biggest argument I hear against cloud storage & computing is that your top-secret, private data is in someone else’s hands. Aside from my general dislike of secret data, I still don’t believe this is a good argument these days. There are enough methods for handling encryption and private networking that it pretty much becomes a non-issue. Encrypt the data on-site, store the keys in your own internal database, ship the data to the cloud, and when you need to run analysis fetch the appropriate keys over an encrypted link, decode the data on demand, re-encrypt the results and ship them back. Sure, the encryption overheads add expense to the operation, but I think those costs are far outweighed by the benefits.
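As a sketch of that encrypt-on-site, decrypt-on-demand flow, here is a minimal example using symmetric (Fernet) encryption with S3-style object storage; the bucket name, paths and key handling are all made up for illustration, and data at this scale would of course be streamed in chunks rather than read into memory:

```python
# Minimal sketch of the encrypt-then-ship workflow described above.
# Assumes the `boto3` and `cryptography` packages; the bucket name is hypothetical.

import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3")
BUCKET = "example-research-archive"    # hypothetical cloud bucket


def encrypt_and_ship(local_path: str, object_key: str) -> bytes:
    """Encrypt a file on-site, upload only the ciphertext, and return the key
    so it can be stored in your own internal key database."""
    key = Fernet.generate_key()
    with open(local_path, "rb") as fh:
        ciphertext = Fernet(key).encrypt(fh.read())
    s3.put_object(Bucket=BUCKET, Key=object_key, Body=ciphertext)
    return key                          # keep this internally, never with the data


def fetch_and_decrypt(object_key: str, key: bytes) -> bytes:
    """Pull the ciphertext back and decrypt it on demand for analysis."""
    ciphertext = s3.get_object(Bucket=BUCKET, Key=object_key)["Body"].read()
    return Fernet(key).decrypt(ciphertext)
```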