BenjaminBenBen @benjaminbenben

In With the Old 09 September 2015

After more than a year of no posts - I’m going to try harder to blog again

It’s been a really busy year; a lot of my spare time has been spent hacking on, or preparing for, talks. In the last year I’ve spoken at:

MK Geek Night, FFconf, Breaking Borders, All Your Base, Oxford Geek Night, jQueryUK, MK Geek Night, talk web design, Upfront, Full Stack Fest and Reasons to be creative.

photo: All Your Base, by Garrettc

It’s been pretty exhausting, though I’ve got a lot from it. I’ve been able to explore ideas around combining the capabilities of lots of devices, how we can use IoT to inspire the way that we build applications, and recently I’ve been looking into our relationship with the data that we produce.

I’ve also been helping out with JSOxford quite a lot; it’s so awesome to see the community grow. This year we had our second Summer of Hacks, which had FIVE events (totally Ryan’s fault).

Next up

I’ve got a few blog post ideas logged as github issues, so I’m going to try and start working through those. A bit of a warning: I’m going to focus on getting stuff written rather than making it super-great quality. If you see typos, mistakes, or bad grammar, that’s totally why, ok. (Also, it would be ace if you could correct them.)

(most things I’m working on are in github issues: talks, projects and other stuff)

And also

Although I’ve not been posting much online; other people have been putting stuff online with my face on it:

Thanks, guys.

Open Inbox 09 July 2014

I’m pretty rubbish at responding to emails - I’ve tried inbox zero a few times but never stuck with it. I thought being public about the state of my inbox might encourage me to keep on top of things.



(key: orange - unread, yellow - read)

It’s updated every 30 minutes by a short python script that uses the new gmail api to fetch the inbox counts, then posts them to a heroku app. The heroku app stores the data in Redis. The server is slightly hacked together, though the source is on github.

I guess you could say that it’s a slight invasion of my privacy - though I think the benefits of having it open far outweigh that. It forces me to be honest about my inbox, and helps other people see if I’m making progress, or spot patterns and help me.

If you are doing some more heavy duty time series monitoring, I’d check out InfluxDB - it seems like a really powerful tool.

I couldn’t use it for this data, but redis-timeseries is a nice node library for storing event counts in redis (like website hits or other events).
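For a sense of how that kind of library works, here’s a rough sketch of the idea of bucketing event counts by time granularity - a plain Map stands in for redis hashes here, and the names and granularities are illustrative rather than redis-timeseries’ actual API:

```javascript
// Rough sketch of time-series counting: each event increments a
// counter in one bucket per granularity (this would be HINCRBY
// against redis hashes in the real library).
function makeCounter(granularities) {
  var store = new Map(); // key -> count

  return {
    record: function (event, timestamp) {
      granularities.forEach(function (seconds) {
        // round the timestamp down to the start of its bucket
        var bucket = Math.floor(timestamp / seconds) * seconds;
        var key = event + ':' + seconds + ':' + bucket;
        store.set(key, (store.get(key) || 0) + 1);
      });
    },
    count: function (event, seconds, timestamp) {
      var bucket = Math.floor(timestamp / seconds) * seconds;
      return store.get(event + ':' + seconds + ':' + bucket) || 0;
    }
  };
}
```

Calling `record('hit', now)` bumps a counter in every granularity’s bucket, so you can later query hits-per-minute or hits-per-hour from the same stream of events.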

LastFM to CSV 29 May 2014

I made lastfm to csv - a page for downloading lastfm listening data as a csv file. API requests are made directly from the browser, avoiding the need for any server-side code.

give it a shot now (my username is benjaminf)

My friend Andy wanted some lastfm listening data for a visualisation project so I wrote him a ruby script that uses the user.getrecenttracks method to get the track history for a user. This worked fine, though if someone else wanted to use it they would need to install ruby and have some familiarity with the command line.

I figured it might be easier for people to do this in the browser - so I put together lastfm-to-csv, which lets you enter a username and download a csv file containing all the tracks for a user.


Using the browser

Generating a csv of all the tracks requires many requests to the api; these need to be processed and combined together. I decided to do this all from the browser, using xhr to make requests to the api and storing the processed csv data in a javascript Blob.

A few libraries that I found useful:

  • reqwest - for making xhr calls, nice and lightweight.
  • async - for organising the calls to the api and processing the results.
  • Filesaver.js - lets you download the resulting Blob objects as a file.
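Putting those pieces together, the pagination-and-csv flow looks roughly like this - a hedged sketch where `fetchPage` stands in for the last.fm user.getrecenttracks call and the field names are illustrative:

```javascript
// Escape a value for csv: quote it if it contains a comma, quote, or
// newline, doubling any embedded quotes.
function escapeCsv(value) {
  value = String(value);
  return /[",\n]/.test(value)
    ? '"' + value.replace(/"/g, '""') + '"'
    : value;
}

// Walk the paginated api (fetchPage(n) returns an array of tracks,
// or [] when there are no more pages) and build up the csv text.
function tracksToCsv(fetchPage) {
  var rows = ['artist,track,date'];
  var page = 1, batch;
  while ((batch = fetchPage(page)).length) {
    batch.forEach(function (t) {
      rows.push([t.artist, t.name, t.date].map(escapeCsv).join(','));
    });
    page++;
  }
  return rows.join('\n');
}
```

In the browser the resulting string would then be wrapped in a Blob and handed to FileSaver.js for download.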

A few things I noticed by doing it this way:

  • people have access to their data instantly - no waiting around for a file to be generated on the server before downloading it.
  • no servers to keep running - all code runs in the browser
  • harder to avoid api limiting - if someone gathers data in several windows, they could get rate limited.
  • potential access issues. Last fm gives you access to a nice cors endpoint, though that’s not always the case with other apis.

Andy blogged about his awesome last fm visualisations, you should check them out - turns out that I’m quite obsessive.

Unknown Pleasures 21 October 2013

The artwork for Joy Division's Unknown Pleasures album is based on a graph of radio waves from the first identified pulsar - this is a graph of radio waves from the same pulsar, but recorded in 2012.

Origins of the Joy Division artwork

Stephen Morris (the Joy Division drummer) found this image in the Cambridge Encyclopedia of Astronomy. It’s a graph showing the regularity of the pulses of CP 1919 (PSR B1919+21) - the first radio pulsar discovered (1967).

Peter Saville took this image and inverted it for the cover of Unknown Pleasures; there’s more information in the video below.

This blog post gives some information about when the graph appeared.


Disclaimer: I know almost nothing about pulsars

The data for the graph above comes from Pulsar Group CSIRO Astronomy and Space Science.

Recording of the first-discovered pulsar CP1919 (PSR B1919+21) made at the Parkes radio telescope in April 2012. The observing frequency was 732 MHz and bandwidth 64 MHz. The audio is the detected and dedispersed signal modulating white noise.

Credit: R. N. Manchester, G. Hobbs and J. Khoo, CSIRO Astronomy and Space Science

You can listen to it here - you’re listening to a star which is pretty mental.

I took that wav file and had a look at the data with sox (I eventually pulled out numbers with canvas_waveform).

Spectrogram of CP1919 (PSR B1919+21)

There was a lot of blank space in the data. I’m not sure if this is because of the way that I extracted it, or the way that it had been pre-processed, or maybe that equipment has changed over the last 40 years.

When I managed to plot it, it’s nowhere near as pretty as the original. I’ve just been talking to @olorton about why this might be the case - we think it might be because I’m plotting amplitude rather than frequency; he’s going to have a hack tonight to see if he can get something better. If you have any ideas, please do give me a shout.


Looking at the original plot, I thought that the peaks obscuring the preceding ones above meant a loss of information. Though when I plotted it with transparency it became a lot harder to read - you don’t know whether lines were going up or down, and it looks a bit messy too. Though I could see that some lines were totally flat, which is cool and I don’t know why.

Another thing that I realised when I came to plot the graph was that there is a period of silence between each of the peaks (the pulsar has a period of 1.33730s and a 0.04s pulse width) - I’ve added a toggle so that you can see the pulses along with the silence.

Both of these points support the original visualisation, whose purpose (I think) is to show how periodic the pulsar is.

Also, this shows 80 cycles of the pulsar - so it covers about 1m47s (I was slightly sad that none of the songs on the album are this length).
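A quick check of that arithmetic:

```javascript
// 80 cycles at the pulsar's period of 1.33730s
var period = 1.33730;                 // seconds per cycle
var cycles = 80;
var total = cycles * period;          // ≈ 106.98 seconds
var minutes = Math.floor(total / 60); // 1
var seconds = Math.round(total % 60); // 47 → "1m47s"
```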

Other things

If you liked listening to the stars, check out this list of unexplained sounds on Wikipedia, they are mostly things in the sea and pretty awesome. I like how the 52-hertz whale is described as being “just higher than the lowest note on a tuba”.

Context Require 01 October 2013

This is how I organise JavaScript assets on this site.

view code on github


When I started this blog, I knew that I wanted to include specific and varied scripts on each post.

I didn’t want to serve all my js files together in one blob with each page load:

  • The built js file would get bigger with each post I add
  • I wanted the flexibility to use any new library that I came across (didn’t want to think “I’ve already used X, so I’ll just use that”)


I use require.js to modularise my code. If I want to make part of the document fancy - then I define it in a file called fancy.js like this:

define(['jquery'], function($){
  return function(element){
    // … make `element` fancy
  };
});

… the module defines a function that can be applied to a particular dom node.

Then, in the markup - I specify which module I want to apply to a particular piece of markup:

<div class="cr" data-cr="fancy">

I then stitch this together with another require.js module which looks through the page, loads any modules and applies them appropriately. It looks something like this (I’ve used jQuery here for succinctness):

$('.cr').each(function(){
  var self = this, requirement = $(self).data('cr');

  require([requirement], function(module){
    module(self);
  });
});
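The same wiring can be sketched without jQuery - a minimal stand-in, not the site’s actual code; `contextRequire` and the `load` callback are names I’ve made up for illustration:

```javascript
// Scan elements for a data-cr module name, load that module, and
// apply it to the element. `load` stands in for require.js
// (name -> callback(module)).
function contextRequire(elements, load) {
  elements.forEach(function (el) {
    var name = el.dataset.cr;
    if (!name) return;
    load(name, function (module) {
      module(el); // each module is a function taking a dom node
    });
  });
}
```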


Now I can load only the necessary scripts to display a page; which vary across pages on my blog:

  • lllocal - only loads jQuery and a plugin to thumb between images
  • tweet-globe - will load in a datafile and a vector manipulation library
  • wtcss - won’t load any extra libraries


I’ve enjoyed using this approach - it’s made it really easy to add new posts. I’ve felt like I’ve been writing code rather than tweaking and maintaining it.

It also feels like a good separation of concerns - by starting with the html/dom I’ve focussed on what I’m trying to enhance with javascript.

Only loading scripts for on-screen elements

This approach kind of sucked for my homepage though - I’ve got all my posts in full, so every single script would be loaded.

So I rewrote my script to defer the loading of a module until the related element is on-screen. It looks something like this (again, jQuery here for brevity):

// using jquery.inview
$('.cr-defer').one('inview', function(){
  var self = this, requirement = $(self).data('cr');

  require([requirement], function(module){
    module(self);
  });
});

I’ve written a way to display the modules as they are loaded, which you can turn on with the button below (if the module loaded okay!)

This should reload the page with a panel to the left which will display:

  • cr - the script which loads in the modules for the page
  • cr-debug - the module that displays the panel on the left
  • ko - knockout, which is used to update the panel

As you scroll down the page, you should see more modules loading in as you go past the posts.

Limitations / solutions

I can use this approach because I’ve got independent bits of content. Creating larger scale interconnected sites requires a lot more thought and planning. Addy Osmani did a great talk on building large scale JS applications at last year’s jQueryUK. He also has an online book - Learning JavaScript Design Patterns - which is worth a read.

The other limitation of this approach is performance. Require.js has a great build tool which lets you compile your components into a single file - though this would defeat the purpose of what I was trying to do in the first place.

The issue isn’t the size of the download, but the waterfall effect that happens as each dependency is loaded (a module must be loaded before its dependencies can be found). This (and a solution to it) is described brilliantly in a presentation at last year’s JSConfEU - A novel, efficient approach to JavaScript loading.

Also, if you’re interested in this kind of stuff - have a read of Alex Sexton’s blog post about deploying javascript applications.

I have a feeling that this is the kind of problem that people have dealt with or have had ideas about before. I’d really love to hear what you think - ping me on twitter or comment on hacker news.

The other side of responsive 20 September 2013

Yesterday I gave a talk "The other side of responsive" which was about how responsive web development gives us a great platform for creating interfaces that combine multiple devices. This post explains some of the tech/approaches that I used for it.

I'm writing this in a car, with the limit of a half charged laptop, so apologies for any mistakes or over-wordiness. Also - for context - my mother (who is driving) is playing Mozart Clarinet Concerto in A really loud, which is awesome.

The setup

My laptop has a node.js server which does two things:

  1. Serve the static content of the presentation (written with reveal.js)
  2. Host a binary.js server which publishes anything sent to it to any other connected browsers

My phone has a 3g connection, and acts as a hotspot for my laptop (I would have used the wifi, but it was a bit shaky on my phone - this worked a lot better).

There’s a hosted page which has the markup for each of the phone slides, and some JavaScript to link it up to my talk. I took some effort to make this as performant as possible (from accessing the web server with your phone, the “hello” is able to display within the first network roundtrip!); so I was glad to hear Drew talk about web performance - it’s such an important aspect of working with the web.

I used PubNub to communicate with the devices in the room. I had two channels, one to give the status of the slides and another for devices to publish information about themselves and forward touch events when we “went collaborative”. The publish / subscribe style worked brilliantly for this - all devices would publish and the slide deck would be the only subscriber, and the other way round for the slide states. PubNub has a few features which were really useful for this:

  1. multiplexing - this meant that your device only needed one connection for both of the channels.
  2. windowing - this option let my slides receive messages in 500ms batches, which fixes the number of requests that my laptop would make, regardless of how many people connected.
  3. backfill - if you were to refresh a device, all the ‘hot’ messages would be sent down, so the browser would be able to replay them all and catch up with all the other devices; this also allows people to join in half way through.

The talk

My first slide was the short url for the page, with a counter below it. When someone loads the page, there is a script that:

  1. Generates and locks down a uuid for the user, so that reloading the page won’t create more devices
  2. Uses modernizr to find the capabilities of the device
  3. Subscribes to the slide deck messages
  4. Publishes a ‘hello’ message
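Step 1 can be sketched like this - a hedged example where `storage` stands in for localStorage and the uuid format is purely illustrative:

```javascript
// Generate a uuid once and pin it in storage, so that reloading the
// page reuses the same id rather than creating a "new" device.
function getDeviceId(storage) {
  var id = storage.getItem('device-uuid');
  if (!id) {
    id = 'xxxx-xxxx-xxxx'.replace(/x/g, function () {
      return Math.floor(Math.random() * 16).toString(16);
    });
    storage.setItem('device-uuid', id);
  }
  return id;
}
```

In the browser you’d pass `localStorage` straight in; taking the storage as a parameter just keeps the sketch testable.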

The hello message looks something like this:

	uuid: 'some-long-random-id',
	type: 'hello',
	features: 'appcache webgl webrtc ...',
	pixels: 1234567,
	innerC: 'red', // random colours
	outerC: 'blue' // for the circles

The counter on the slide deck increments when it gets one of these messages, the features and colours are stored - so from this point I know that I can display the capabilities chart (which is nice).

I then continue to the title slide and wave my hands about a bit. I’ve got the slides open on my phone too; when I go to the next slide, I use binary.js to broadcast a message to all other browsers, which proceed to that slide.

The next slide is a file input field, which looks like this:

<input type="file" id="photo" accept="image/*" capture="camera"/>

The capture attribute means that it fires up the camera on my phone rather than asking where I want to get my file from.

When I take and accept the picture of the geek night, it is streamed with binary.js to all other connected slide decks - it’s based on this binary.js example.


Once the image is in the slide deck, this sequence of things happens:

  1. It’s displayed on the presentation
  2. It starts uploading to s3
  3. The s3 url is published to all devices
  4. It’s base64 encoded and sent to twitter (using codebird, which gives you a proxy to the twitter api for client side apps)
  5. The twitter embed html is requested
  6. The twitter embed html is appended to the presentation and published to all devices
  7. The twitter widget script is added to render the tweet (this also happens on the devices)

So, at this point - the last slide is rendered (on the devices as well). Also, the devices are displaying the picture on screen (I forgot to say that).

The next slide is the interactive slide of circles representing each device; this is an svg generated with d3. There is a basic animation loop which applies and dampens the speed of each circle and repels it from any nearby circles; this was running for the last couple of slides - so they’ve kind of organised themselves into a nice pattern.
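The animation step can be sketched roughly like this (not the talk’s actual code - the repulsion radius and force here are made-up numbers):

```javascript
// One tick of the loop: repel each circle from nearby circles,
// dampen its speed, then move it.
function step(circles, damping) {
  circles.forEach(function (a) {
    circles.forEach(function (b) {
      if (a === b) return;
      var dx = a.x - b.x, dy = a.y - b.y;
      var d2 = dx * dx + dy * dy || 1;
      if (d2 < 400) {        // only repel circles within ~20px
        a.vx += dx / d2;
        a.vy += dy / d2;
      }
    });
    a.vx *= damping;         // dampen the speed
    a.vy *= damping;
    a.x += a.vx;
    a.y += a.vy;
  });
}
```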

D3 is fantastic, the enter/exit/transition approach is so intuitive for dynamic data - if someone joined at this point, a circle would pop onto the page and everything would just carry on.

By this point a coinciding circle was displayed on each device; when anyone pressed their circle, the x/y coordinates were published with PubNub. When received by the presentation, the x/y speed on the underlying data is incremented accordingly - d3 does the rest.

Moving to the table of capabilities is just stopping the animation loop, then transitioning the elements to new positions and appending new elements for each capability. I <3 D3.

I then used 3 of Brad Frost’s slides from his blog post “this is the web”. Each slide sends a message out to all the devices, which keep in sync. The middle slide (This is the web) just displayed the word “web” on the devices rather than the image, to show that your device is part of that.

The last slide was a quote by Igor Stravinsky about the freedom of constraints. The slide only showed part of the quote (highlighted below) - the full quote was displayed on each of the devices.

My freedom thus consists in my moving about within the narrow frame that I have assigned to myself for each one of my undertakings. I shall go even further: my freedom will be so much the greater and more meaningful the more narrowly I limit my field of action and the more I surround myself with obstacles. Whatever diminishes constraint diminishes strength. The more constraints one imposes, the more one frees oneself of the claims that shackle the spirit.

I like that quote.

Graphing links 08 August 2013

This is an example of displaying content pulled from a PhantomJS webservice

I used this example when I talked about “serving websites to websites with PhantomJS” at this month’s Oxford Geek Night

With phantomjs you are able to access more than just the HTML/DOM of a page - you can also see how the page is eventually rendered in a browser. In this example we can pull out all the links of a page and find out what area (in pixels) they consume.
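The area calculation itself is simple once you have each link’s bounding box (which phantomjs can read with getBoundingClientRect); this sketch uses illustrative shapes rather than the script’s real output:

```javascript
// Given each link's bounding box, sum the pixel area per href.
function linkAreas(links) {
  var areas = {};
  links.forEach(function (l) {
    var area = l.width * l.height;
    areas[l.href] = (areas[l.href] || 0) + area;
  });
  return areas;
}
```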

link areas on oxford geek nights

Using this script we can get a map of the links to the element areas, which looks something like this:


Graphing the data

We can now pull in this data with AJAX and render it on the page using d3 (the force directed graph layout)

Initially populated with this blog post; double click to check one of the urls.


Using PhantomJS WebServer 28 July 2013

The PhantomJS WebServer module lets you create self contained web applications that are easy to deploy to heroku using the PhantomJS build pack.

I’ll be talking about this at Oxford geek nights on the 7th of August - come along if you’re in the area.

(tl;dr - deployed example here & more involved app here)

Let’s start with a base PhantomJS script - this loads the Oxfordshire lanyrd page and outputs the names of any upcoming events:

var page = new WebPage();
page.open("", function(){ // the Oxfordshire lanyrd page
  var events = page.evaluate(function(){
    return $('.vevent .summary').map(function(){
      return '* ' + this.innerText;
    }).get().join('\n');
  });

  console.log('Upcoming Events in Oxfordshire:');
  console.log(events);
  phantom.exit();
});

This script can be run with phantomjs example.js and it will print the names of all upcoming events in the terminal - something like this:

Upcoming Events in Oxfordshire:
* Oxford Geek Night 32
* WitneyMeets
* XML Summer School 2013
* Sterling Geo Intergraph ERDAS UK UGM
* All Your Base Conference 2013
* jQuery UK 2014
* World Humanist Congress 2014

…super cool. Have a look at the quick start guide on the PhantomJS wiki to find out how this works and what other things are possible.

Using the webserver module

To expose this script with the webserver module, you have to add a few things:

// import the webserver module, and create a server
var server = require('webserver').create();

// start a server on port 8080 and register a request listener
server.listen(8080, function(request, response) {

  var page = new WebPage();
  page.open("", function(){ // the Oxfordshire lanyrd page
    var events = page.evaluate(function(){
      return $('.vevent .summary').map(function(){
        return '* ' + this.innerText;
      }).get().join('\n');
    });

    // Rather than console logging, write the data back as a
    // response to the user
    // console.log('Upcoming Events in Oxfordshire:');
    // console.log(events);

    response.statusCode = 200;
    response.write('Upcoming Events in Oxfordshire:\n');
    response.write(events);
    response.close();

    // We want to keep phantom open for more requests, so we
    // don't exit the process. Instead we close the page to
    // free the associated memory heap
    // phantom.exit();
    page.close();
  });
});



This can be run in the same way as the previous script - phantomjs example.js - then when you visit localhost:8080, you should see the list of events in your browser.

localhost:8080 - list of events from lanyrd

With phantomjs, you’re not limited to sending plain text back to the client - you can render images of the webpage and send that back (either by reading the file back with the File System Module, or using base 64 to send back an embeddable data-uri).


There is a PhantomJS Buildpack for heroku which makes deploying lovely.

To get your app ready for deployment you have to do a few things:

Set the port based on environment variable PORT

var port = require('system').env.PORT || 8080; // default back to 8080
server.listen(port, function(request, response) {

Add a file named Procfile containing the command to spin it up:

web: phantomjs example.js

Commit your files to git then create a heroku app with the build pack

heroku create --stack cedar --buildpack
Creating quiet-lowlands-5118... done, stack is cedar
BUILDPACK_URL=
Git remote heroku added

Push your code up to heroku

git push heroku master
Counting objects: 10, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (10/10), 1.34 KiB, done.
Total 10 (delta 2), reused 0 (delta 0)
-----> Fetching custom git buildpack... done
-----> PhantomJS app detected
-----> Fetching PhantomJS 1.9.0 binaries
-----> Extracting PhantomJS 1.9.0 binaries to /tmp/build_2idj4c8tadrpx/vendor/phantomjs
-----> Discovering process types
       Procfile declares types -> web
       Default types for PhantomJS -> console
-----> Compiled slug size: 15.5MB
-----> Launching... done, v5 deployed to Heroku
To
 * [new branch] master -> master

The example app should now be available on the reported url.

View Deployed Example Site GitHub Source

A more involved example

I’ve put together a more complex version of this style of app - it allows you to specify any webpage, renders a screenshot and returns some information about the page (a list of links).

It also serves a static page with a form to submit the requests to the app. It’s deployed to heroku and the source code is on github.

I’ve tried to make it an easy project to modify for your own use - so fork away and have a hack!

screenshot of example code

View Demo GitHub Source

A few issues / gotchas

  • Mongoose (the embedded server) doesn’t parse the POST parameters with the default jQuery contentType header ‘application/x-www-form-urlencoded; charset=UTF-8’; I had to drop the charset and it seemed to work okay.
  • Firefox has trouble parsing large data-uri strings in json objects, so I’ve split the image and json on separate lines and decode them when the request comes back (unfortunately Firefox fails to add the xhr header that fixes the mongoose error)
  • GET parameters aren’t parsed. I’d much sooner use a GET request for this app, as there’s not any state change and it would allow the responses to be cached. In wtcss I fudged this parsing.
  • Sometimes the page render returns a blank image, especially when on heroku and under stress. This is a known issue - a workaround is to wrap the .render in a setTimeout.

Finding which jQuery modules you need 01 July 2013

For a couple of releases now it's been possible to build a customised version of jQuery. I felt that one of the barriers to using your own version was finding which modules your site actually uses - so I started working on a tool to do just that: it provides an instrumented version of the library which tracks which functions you use.


Once the script has tracked which modules have been used, the data is shared with the main page where the list of your dependencies can be updated. Eventually we’ll add a means of sending events between devices (so you can test on a phone, and see the results on a development machine).

Your custom domain is able to share data by embedding a hidden iframe which can pass messages to the main window using storage events. See my blog post for more on those.

Early days

There’s a lot I want to do with this tool. I started with the goal of reducing the number of jQuery Mobile plugins I was using on a page (jQuery Mobile has a great download builder), though as I’ve been hacking about with the idea, it feels like it could do a lot more.


I’ve recently been working on a project with the Open Data Institute. One of the (many) things I’ve enjoyed is the open development; working in public does seem awkward/hard at first, though I found myself writing better code, getting useful feedback, and being able to collaborate much more easily.

With this project I’m going to make an effort to make development as transparent and open as possible. My hope is that people will be able to get involved with discussions on issues & pull requests. My dream is that people can get involved with improving the tool and adding functionality that I didn’t think of.

So far I’ve been trying to make the code as friendly and accessible as possible (no obscure templating languages, no fancy asset processing). Nodejitsu have provided the hosting under their “free for open source” plan.

White October

One of the things I’m most excited about is that White October (my employer) will be putting some time toward this project as part of an ongoing crusade of being totally awesome.

This week some of us will be looking at the project from a development, ux and design point of view. We’ll be throwing around a lot of ideas and feeding them into the site, so do keep an eye on the project and feel free to jump in on discussions or pull requests.

You can find the project on github under benfoxall/shaker.

Visualising CSS selector matches 09 May 2013

I was working with a large css codebase and wanted to see if our rules were becoming more specific as the css source grew, so I built wtcss to look at how css rules are applied to a page.

Active rules

This shows how many of the selectors are being used on a page, you can toggle to show only the active ones.


The ‘-‘ link on the bottom right scales the rules so that they fit the height of the window. This is to show how the impact changes as rules are added to the css.

An example (with notes) of the jsOxford site is below:

How it works

The main part of this is a PhantomJS script which:

  1. loads the page
  2. extracts all stylesheet rules
  3. finds matching elements for each rule and gets their positions
  4. takes a screenshot

All this is sent back to the client in a json object (including the image as a data-uri).

The source code is now online.

Example pages

  • - some styled elements are offscreen
  • hacker news - only 31 css selectors!
  • facebook - only 5% of rules match on landing page
  • - yup, you can do that
  • white october - we used a custom bootstrap build, though you can see the gaps in the scaffolding sizes we didn’t use
  • jsoxford - you can see the rules at the bottom that we added to target specific elements

Cross window communication part 1 24 April 2013

I was part of the "Rising Stars" track at the jQuery UK conference this year where I talked about sending messages between browser windows. This post covers the first half of my talk - sending events between local windows.

My slides are now online, though they are more prompts for me to talk than being full of information. The demos wouldn’t really work with it being publicly accessible, so I’m going to cover each of the techniques I mentioned on this blog.

The websockets/binaryJS/webRTC things are on the way - just working on getting the server side part hosted nicely.


When you have a window that you can reference with js - either by getting an iframe from the DOM, or one returned by window.open - you can use postMessage to communicate with that window (crucially, even if that window has a different origin).

// Send messages from parent window
var win = window.open('','','width=200');
document.onselectionchange = function(e){
	win.postMessage(document.getSelection().toString(), '*');
};

// (on the target window) listen for messages
window.addEventListener('message', function(e){
	echo.textContent = e.data;
});

demo - opens a window and sends it the text selection from this page

For more information about postMessage, check out the entry on MDN and John Resig’s blog post about it.

Storage Events

When you aren’t able to access a window directly, but it shares the same origin - you can use storage events to synchronise data between windows.

A storage event is fired when another window changes the localStorage for that page. By listening to these events - you can keep objects in sync across windows.

// listen for changes from other windows
window.addEventListener("storage", function(e){
	if(e.key == 'example') $('#el').css(JSON.parse(e.newValue));
}, false);

// update a local element and notify other windows of the change
var style = { color: 'red' };
$('#el').css(style);
localStorage.setItem('example', JSON.stringify(style));

A nice side effect of this is that you have the state of an element persisted in localStorage, so you could render that on page load. See this gist for a general way of doing this.

This approach can become particularly interesting when the data being synced is displayed in different ways in different windows - in my talk I showed how the reveal.js slide deck could be viewed in both overview and normal views at the same time (see this gist to see how that can be implemented).

demo - move your mouse over the area below; any other windows open on this page will update

Reading QR codes from getUserMedia with web workers 14 April 2013

tl;dr - examples (currently requires chrome):

with web worker (should be smoother)

without web worker

Web workers let you take JavaScript execution off the main UI thread - which can be really useful if you are doing complex things with video

I came across a javascript qr-code reader a few days ago. When I started using it to scan from a getUserMedia stream it worked fine, but the extra processing was blocking the ui, which was particularly noticeable when displaying the video.

I thought it was a pretty good candidate for taking the processing off to a web worker; which turned out pretty well.

Scanning QR code with getUserMedia

Once you’ve got the imageData from your canvas, you can run it through jsqrcode by setting attributes of the qrcode object, then calling .process():

qrcode.imagedata = imagedata;
qrcode.width = imagedata.width;
qrcode.height = imagedata.height;

var content = qrcode.process();

It was pretty straightforward to pull the code into a web worker, though I spent a bit of time before I realised that console.log calls were making it fall over. Here’s the interface for responding to messages with the worker:

self.onmessage = function(event) {
    var imagedata = event.data;
    qrcode.imagedata = imagedata;
    qrcode.width = imagedata.width;
    qrcode.height = imagedata.height;

    var resp;
    try {
        resp = qrcode.process();
    } catch(e){
        resp = ''; // *mostly* "no code found"
    }
    self.postMessage(resp);
};
Back in the original page, you can create the worker and deliver messages to it using the .postMessage function. You can optionally list Transferable objects to efficiently move them to the web worker.

var worker = new Worker("jsqrcode/worker.js");
worker.onmessage = function(event) {
    console.log("qr code is: " + event.data);
};

// imagedata = ctx.getImageData(…)
worker.postMessage(imagedata, [imagedata.data.buffer]);
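For completeness, the frame-grabbing side might look something like the sketch below. `scanSize` is a hypothetical helper (smaller frames are cheaper for the worker to scan), and the wiring assumes a `<video>` element that's already playing the getUserMedia stream:

```javascript
// Pick a frame size for scanning, scaling wide frames down to maxWidth
// while keeping the aspect ratio.
function scanSize(videoWidth, videoHeight, maxWidth) {
  if (videoWidth <= maxWidth) {
    return { width: videoWidth, height: videoHeight };
  }
  return {
    width: maxWidth,
    height: Math.round(videoHeight * maxWidth / videoWidth)
  };
}

if (typeof document !== 'undefined') {
  var worker = new Worker('jsqrcode/worker.js');
  var video = document.querySelector('video');
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');

  // Grab a frame a few times a second, draw it to an offscreen canvas,
  // and ship the pixels to the worker.
  setInterval(function () {
    var size = scanSize(video.videoWidth, video.videoHeight, 640);
    canvas.width = size.width;
    canvas.height = size.height;
    ctx.drawImage(video, 0, 0, size.width, size.height);
    var imagedata = ctx.getImageData(0, 0, size.width, size.height);
    worker.postMessage(imagedata, [imagedata.data.buffer]);
  }, 250);
}
```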

Jsqrcode is on github, as is my fork with the starts of the worker interface. You can either view source on the examples above, or view them on github.

Tweet Globe 07 April 2013

Plotting geocoded tweets on a globe with canvas

Requires Canvas Support

I gathered a few hours of geocoded tweets from the twitter streaming api (using the maptime code as a base). This was to explore some ideas that we’d been talking about at White October.

Drawing the globe is relatively straightforward. The lat/long pairs are converted into position vectors, which are then transformed based on the mouse position. The Sylvester library was pretty handy for transforming the points (Pete talked about Sylvester at jsoxford recently).
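As a sketch of that projection (in the actual code Sylvester does the matrix work; `latLngToVector` and `rotateY` here are hypothetical names):

```javascript
// Convert a lat/long pair (in degrees) to a 3D position on a unit sphere.
function latLngToVector(lat, lng) {
  var phi = lat * Math.PI / 180;
  var lambda = lng * Math.PI / 180;
  return [
    Math.cos(phi) * Math.cos(lambda),
    Math.sin(phi),
    Math.cos(phi) * Math.sin(lambda)
  ];
}

// Rotate a vector around the y axis - e.g. driven by the mouse x position.
function rotateY(v, theta) {
  var c = Math.cos(theta), s = Math.sin(theta);
  return [c * v[0] + s * v[2], v[1], -s * v[0] + c * v[2]];
}

// To draw: a simple orthographic projection keeps only points facing the
// viewer (positive z, say) and uses x/y directly as screen coordinates.
```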

The original plan was to animate this over a period of time, though it looked quite random/noisy so I went for this static view instead.

Gareth pointed me to a post about the effects in tron legacy which makes me want to make this a lot more awesome!

lllocal 05 April 2013

I built lllocal - which lets you find and listen to bands that will be playing in your area soon.

I’ve been thinking of this idea for a long time and I put up a public version a couple of months ago. It’s brilliant to get feedback from people and I’ve also got tickets to two events while testing it out for myself (Keaton Henson was great and we’re off to see Daughter in a couple of weeks).

My motivation for lllocal came from:

  • my personalised last.fm events near Oxford are almost all in London. Granted, these bands are ones who I’d really like to see and Oxford is pretty close to London - though I wanted to see more live music without spending evenings on a bus.
  • WeGotTickets send out Spotify playlists to their mailing lists. I love this style of suggesting music to go and listen to.

Lllocal inspiration

The gig listings come from the WeGotTickets api, and the Spotify web apis are used to find and play the bands on Spotify.

There is still a huge amount to do - though I’m really happy to have something online. If you have any feedback or suggestions, I’d love it if you got in touch or left a message on the feedback page.

Lllocal is online at

Maptime 05 December 2012

In time for our first JS Oxford meet - I put together a small node app which reads geocoded tweets from the twitter streaming api and pushes them to the browser to display on a map.

This is a stripped down version of a project that I worked on at White October this summer. This version is not at all for production use (your browser will grind to a halt if you leave it running for a while!), though I hope it’s a good/interesting example of linking up server and client js.

The code is all up at the jsoxford github account; I’ll go over a few bits of it:


This is the main node.js file; it brings in some external packages:

  • ntwitter - for accessing the twitter streaming api
  • express - to serve some static files
  • faye - for sending messages between the server and the browser

Once these have been brought in, you can connect to the streaming api using ntwitter. This gives you access to a stream object, to which you can add listeners for new tweets using the stream.on() function (see eventEmitter docs for more details).

twit.stream('statuses/filter', filterParams, function(stream) {
  // stream.on('data', yayFn)
});

We then want to serve some static files for our client side pages/scripts, you can use express to do this (express can do a whole lot more - if you want to have a look, I’d recommend using the executable to generate a basic app).

We also want to send data to the browser using faye, this has a really nice pubsub api based on the bayeux protocol. Attaching this to the http server will listen for websocket/ajax long-polling requests and serve a client js wrapper at /faye.js.

var app = express();
app.use(express.static(__dirname + '/public'));

var bayeux = new faye.NodeAdapter({mount: '/faye'});

var server = http.createServer(app);
bayeux.attach(server);
server.listen(8000);

Now, to link it all together - you can listen for events on the twitter stream, then publish them to a faye channel with the following code.

stream.on('data', function(data){
  bayeux.getClient()
      .publish('/tweet', {
        geo: data.geo,
        text: data.text
      });
});

Moving clientside (this code is in ./public and will be served to the browser), we first want to connect to the faye pubsub. To do this, we include the faye client library and connect to the endpoint that we mounted faye at on the server using Faye.Client.

<script type="text/javascript" src="/faye.js"></script>
<script type="text/javascript">
var client = new Faye.Client('/faye');

We’re using the google maps api to display the map and place the markers. The majority of the code for this is straight from the simple-markers example. (To get more of an introduction - have a look at the tutorial).

To get the tweet data from Faye, you use the client.subscribe function to listen to a channel - in this case we broadcast them over ‘/tweet’ from node.

var mapOptions = {
	// ...
};
var map = new google.maps.Map(document.getElementById("map"), mapOptions);

client.subscribe('/tweet', function(message) {
  if(message.geo && message.geo.coordinates){
    placeMarker(message.geo.coordinates);
  }
});

function placeMarker(coords){
  var latlng = new google.maps.LatLng(coords[0], coords[1]);
  new google.maps.Marker({
    position: latlng,
    map: map
  });
}

And that’s it! Have a look at the code on github (I’ve missed out a little bit of the surrounding bumf above) and have a play with it.

Also, if you are based around Oxford - come along to our next JSOxford meet on the 17th of January.

Lastfm Canvas Streamgraph 04 November 2012

A browser based streamgraph using canvas.

This is based on Lee Byron’s listening histories project. I love this project - it’s a really interesting and engaging visualisation, and the data makes it really personal (I can’t think of many other services that give as much personalised data as last.fm).

There are services that let you download a pdf streamgraph: & playground (if you’re a subscriber).

My version is different as all the api requesting and graph drawing are done in the browser - this lets you see your graph as soon as any data is ready.

I originally started by creating a large svg for the whole chart, though this became quite slow, so I used separate canvas elements for each week of data. This is slightly limiting - I couldn’t sort or colour the artists based on when they appear in your history (as the original does).
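The stacking for each week boils down to a small function - this sketch uses the simple symmetric ("ThemeRiver") baseline rather than the wiggle-minimising one from Byron and Wattenberg's paper, and `stackWeek` is a hypothetical name:

```javascript
// Stack one week's play counts symmetrically around zero: each artist
// occupies a band [y0, y1], and the whole stack is centred vertically.
function stackWeek(counts) {
  var total = counts.reduce(function (a, b) { return a + b; }, 0);
  var y = -total / 2; // symmetric baseline
  return counts.map(function (c) {
    var band = { y0: y, y1: y + c };
    y += c;
    return band;
  });
}
```

Drawing a week to its own canvas then just means scaling these bands to pixel heights and joining them to the previous week's bands.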

Out With The Old 25 September 2012

After more than a year of no posts - I've left my old blog behind.

This new one is built with Jekyll and put online with GitHub Pages. The source is on github.

It seems that a lot of Jekyll sites start with a post about the interesting way that they have been deployed. So, for the record, I kept it simple and went for the github jekyll generator.