BenjaminBenBen @benjaminbenben

Unknown Pleasures 21 October 2013

The artwork for Joy Division's Unknown Pleasures album is based on a graph of radio waves from the first identified pulsar - this is a graph of radio waves from the same pulsar, but recorded in 2012.

Origins of the Joy Division artwork

Stephen Morris (the Joy Division drummer) found this image in the Cambridge Encyclopedia of Astronomy. It’s a graph showing the regularity of the pulses of CP 1919 (PSR B1919+21) - the first radio pulsar discovered (1967).

Peter Saville took this image and inverted it for the cover of Unknown Pleasures - there’s more information in the video below.

This blog post gives some information about when the graph appeared.

Data

Disclaimer: I know almost nothing about pulsars

The data for the graph above comes from the Pulsar Group at CSIRO Astronomy and Space Science.

Recording of the first-discovered pulsar CP1919 (PSR B1919+21) made at the Parkes radio telescope in April 2012. The observing frequency was 732 MHz and bandwidth 64 MHz. The audio is the detected and dedispersed signal modulating white noise.

Credit: R. N. Manchester, G. Hobbs and J. Khoo, CSIRO Astronomy and Space Science

You can listen to it here - you’re listening to a star which is pretty mental.

I took that wav file and had a look at the data with sox (I eventually pulled out numbers with canvas_waveform).

Spectrogram of CP1919 (PSR B1919+21)

There was a lot of blank space in the data. I’m not sure if this is because of the way that I extracted it, or the way that it had been pre-processed, or maybe that equipment has changed over the last 40 years.

When I managed to plot it, it’s nowhere near as pretty as the original one. I’ve just been talking to @olorton about why this might be the case - we think it might be because I’m plotting amplitude rather than frequency, and he’s going to have a hack tonight to see if he can get something better. If you have any ideas, please do give me a shout.

Visualisation

Looking at the original plot, I thought that the peaks obscuring the preceding lines above them meant a loss of information. But when I plotted it with transparency it became a lot harder to read - you can’t tell whether lines are going up or down, and it looks a bit messy too. I could see that some lines were totally flat, though, which is cool and I don’t know why.

Another thing that I realised when I came to plot the graph was that there is a period of silence between each of the peaks (the pulsar has a period of 1.33730s and a 0.04s pulse width) - I’ve added a toggle so that you can see the pulses along with the silence.

Both of these points are good cases for the original visualisation, whose purpose (I think) is to show how periodic the pulsar is.

Also, this shows 80 cycles of the pulsar - so it covers about 1m47s (I was slightly sad that none of the songs on the album are this length).
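
A quick back-of-the-envelope check of those figures (the numbers are just the ones quoted above):

var period = 1.33730,  // seconds per cycle
    pulseWidth = 0.04, // seconds of pulse per cycle
    cycles = 80;

console.log((100 * pulseWidth / period).toFixed(1) + '% of each cycle is pulse'); // ~3.0%
console.log((period * cycles).toFixed(2) + ' seconds for 80 cycles');             // 106.98s ≈ 1m47s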

Other things

If you liked listening to the stars, check out this list of unexplained sounds on Wikipedia, they are mostly things in the sea and pretty awesome. I like how the 52-hertz whale is described as being “just higher than the lowest note on a tuba”.


Context Require 01 October 2013

This is how I organise JavaScript assets on this site.

view code on github

Motivation

When I started this blog, I knew that I wanted to include specific and varied scripts on each post.

I didn’t want to serve all my js files together in one blob with each page load:

  • The built js file would get bigger with each post I add
  • I wanted the flexibility to use any new library that I came across (didn’t want to think “I’ve already used X, so I’ll just use that”)

Approach

I use require.js to modularise my code. If I want to make part of the document fancy - then I define it in a file called fancy.js like this:

define(['jquery'], function($){
  return function(element){
    $(element).doFancyStuff()
  } 
})

… the module defines a function that can be applied to a particular DOM node.

Then, in the markup - I specify which module I want to apply to a particular piece of markup:

<div class="cr" data-cr="fancy">
	<h1>RAINBOWS</h1>
	<p>UNICORNS</p>
</div>

I then stitch this together with another require.js module which looks through the page, loads any modules and applies them appropriately. It looks something like this (I’ve used jQuery here for succinctness):

$('.cr').each(function(){
  var self = this, requirement = $(self).data('cr');

  require([requirement], function(module){
    module(self);
  })
})
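
For completeness, here’s a sketch of how that loader could itself be wrapped up as a require.js module - the shape is illustrative rather than the actual cr.js source:

// cr.js (sketch) - find every element tagged with a module and apply it
define(['jquery'], function($){
  return function(){
    $('.cr').each(function(){
      var self = this, requirement = $(self).data('cr');

      require([requirement], function(module){
        module(self);
      });
    });
  };
});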

Profit

Now I can load only the scripts necessary to display a page, and these vary across pages on my blog:

  • lllocal - only loads jQuery and a plugin to thumb between images
  • tweet-globe - will load in a datafile and a vector manipulation library
  • wtcss - won’t load any extra libraries

Dogfooding

I’ve enjoyed using this approach - it’s made it really easy to add new posts. I’ve felt like I’ve been writing code rather than tweaking and maintaining it.

It also feels like a good separation of concerns - by starting with the html/dom I’ve focussed on what I’m trying to enhance with javascript.

Only loading scripts for on-screen elements

This approach kind of sucked for my homepage though - I’ve got all my posts in full, so every single script would be loaded.

So I rewrote my script to defer the loading of a module until the related element is on-screen. It looks something like this (again, jQuery here for brevity):

// using jquery.inview
$('.cr-defer').one('inview', function(){
  var self = this, requirement = $(self).data('cr');

  require([requirement], function(module){
    module(self);
  })
})

I’ve written a way to display the modules as they are loaded, which you can turn on with the button below (if the module loaded okay!).

This should reload the page with a panel to the left which will display:

  • cr - the script which loads in the modules for the page
  • cr-debug - the module that displays the panel on the left
  • ko - knockout, which is used to update the panel

As you scroll down the page, you should see more modules loading in as you go past the posts.

Limitations / solutions

I can use this approach because I’ve got independent bits of content. Creating larger scale interconnected sites requires a lot more thought and planning. Addy Osmani did a great talk on building large scale JS applications at last year’s jQuery UK. He also has an online book - Learning JavaScript Design Patterns - which is worth a read.

The other limitation of this approach is performance. Require.js has a great build tool which lets you compile your components into a single file - though this would defeat the purpose of what I was trying to do in the first place.

The issue isn’t with the size of the download, but the waterfall effect that happens when each dependency is loaded (as a module must be loaded before its dependencies can be found). This (and a solution) is described brilliantly in a presentation at last year’s JSConf EU - A novel, efficient approach to JavaScript loading.

Also, if you’re interested in this kind of stuff - have a read of Alex Sexton’s blog post about deploying javascript applications.

I have a feeling that this is the kind of problem that people have dealt with or have had ideas about before. I’d really love to hear what you think - ping me on twitter or comment on hacker news.


The other side of responsive 20 September 2013

Yesterday I gave a talk "The other side of responsive" which was about how responsive web development gives us a great platform for creating interfaces that combine multiple devices. This post explains some of the tech/approaches that I used for it.

I'm writing this in a car, with the limit of a half charged laptop, so apologies for any mistakes or over-wordiness. Also - for context - my mother (who is driving) is playing Mozart Clarinet Concerto in A really loud, which is awesome.

The setup

My laptop has a node.js server which does two things:

  1. Serve the static content of the presentation (written with reveal.js)
  2. Host a binary.js server which publishes anything sent to it to any other connected browsers (there’s a rough sketch just below)
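
The binary.js part isn’t much code - a minimal sketch of that broadcast behaviour (the port and stream handling here are assumptions, based on the binary.js broadcast example rather than my exact server):

var BinaryServer = require('binaryjs').BinaryServer;
var server = new BinaryServer({ port: 9000 });

server.on('connection', function(client){
  // when one browser sends something, pipe it out to every other connected browser
  client.on('stream', function(stream, meta){
    for (var id in server.clients) {
      if (server.clients.hasOwnProperty(id) && server.clients[id] !== client) {
        stream.pipe(server.clients[id].createStream(meta));
      }
    }
  });
});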

My phone has a 3g connection, and acts as a hotspot for my laptop (I would have used the wifi, but it was a bit shaky on my phone - this worked a lot better).

There’s a page hosted at benjaminbenben.com/party which has the markup for each of the phone slides, and some JavaScript to link it up to my talk. I put some effort into making this as performant as possible (when you access the page from your phone, the “hello” displays within the first network round trip!), so I was glad to hear Drew talk about web performance - it’s such an important aspect of working with the web.

I used PubNub to communicate with the devices in the room. I had two channels, one to give the status of the slides and another for devices to publish information about themselves and forward touch events when we “went collaborative”. The publish / subscribe style worked brilliantly for this - all devices would publish and the slide deck would be the only subscriber, and the other way round for the slide states. PubNub has a few features which were really useful for this:

  1. multiplexing - this meant that your device only needed one connection for both of the channels.
  2. windowing - this option lets my slides receive messages in 500ms batches, which keeps the number of requests my laptop makes fixed, regardless of how many people connect.
  3. backfill - if you were to refresh a device, all the ‘hot’ messages would be sent down, so the browser would be able to replay them all and catch up with all the other devices; this also allows people to join in halfway through.
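
To make that concrete, here’s roughly what the PubNub calls look like with the v3 JavaScript SDK - the keys, channel names and handler are placeholders, and I’ve left the windowing/backfill options out:

var pubnub = PUBNUB.init({
  publish_key: 'pub-xxxx',
  subscribe_key: 'sub-xxxx'
});

// devices listen for slide state (a comma-separated channel list gives you multiplexing)
pubnub.subscribe({
  channel: 'slide-state',
  message: function(msg){
    goToSlide(msg.slide); // hypothetical handler on the device page
  }
});

// a device announcing itself to the slide deck
pubnub.publish({
  channel: 'devices',
  message: { uuid: 'some-long-random-id', type: 'hello' }
});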

The talk

My first slide was the short url for benjaminbenben.com/party with a counter below it. When someone loads the page, there is a script that:

  1. Generates and locks down a uuid for the user, so that reloading the page won’t create more devices
  2. Uses modernizr to find the capabilities of the device
  3. Subscribes to the slide deck messages
  4. Publishes a ‘hello’ message

The hello message looks something like this:

{
	uuid: 'some-long-random-id',
	type: 'hello',
	features: 'appcache webgl webrtc ...',
	pixels: 1234567,
	innerC: 'red', // random colours
	outerC: 'blue' // for the circles
}

The counter on the slide deck increments when it gets one of these messages, and the features and colours are stored - so from this point I know that I can display the capabilities chart (which is nice).

I then continue to the title slide and wave my hands about a bit. I’ve got the slides open on my phone too, when I go to the next slide I use binary.js to broadcast a message to all other browsers, which proceed to that slide.

The next slide is a file input field, which looks like this:

<input type="file" id="photo" accept="image/*" capture="camera"/>

The capture attribute means that it fires up the camera on my phone rather than asking where I want to get my file from.

When I take and accept the picture of the geek night, it is streamed with binary.js to all other connected slide decks - it’s based on this binary.js example.
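
The client side of that streaming is pleasantly small - something along these lines, based on the binary.js file-sending pattern rather than my exact code (it assumes the websocket connection has already opened):

var client = new BinaryClient('ws://localhost:9000');

document.getElementById('photo').addEventListener('change', function(){
  var file = this.files[0];
  if (!file) return;

  // send the File object straight down the websocket - the server
  // pipes it back out to every connected slide deck
  client.send(file, { name: file.name, type: 'photo' });
});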

MKGN

Once the image is in the slide deck, this sequence of things happens:

  1. It’s displayed on the presentation
  2. It starts uploading to s3
  3. The s3 url is published to all devices
  4. It’s base64 encoded and sent to twitter (using codebird, which gives you a proxy to the twitter api for client side apps)
  5. The twitter embed html is requested
  6. The twitter embed html is appended to the presentation and published to all devices
  7. The twitter widget script is added to render the tweet (this also happens on the devices)

So, at this point - the last slide is rendered (on the devices as well). Also, the devices are displaying the picture on screen (I forgot to say that).

The next slide is the interactive slide of circles representing each device - an svg generated with d3. There is a basic animation loop which applies and dampens the speed of each circle and repels it from any nearby circles; this was running for the last couple of slides, so they’ve kind of organised themselves into a nice pattern.

D3 is fantastic, the enter/exit/transition approach is so intuitive for dynamic data - if someone joined at this point, a circle would pop onto the page and everything would just carry on.
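
The device circles follow the standard d3 (v3) data-join pattern - something like this, keyed on the device uuid so circles persist as people join and leave (svg here stands for a d3 selection of the slide’s svg element; the sizes are illustrative):

function updateCircles(devices){
  var circles = svg.selectAll('circle')
    .data(devices, function(d){ return d.uuid; });

  // new devices pop in
  circles.enter().append('circle')
    .attr('r', 0)
    .attr('fill', function(d){ return d.innerC; })
    .attr('stroke', function(d){ return d.outerC; });

  // existing circles follow the animation loop's x/y
  circles
    .attr('cx', function(d){ return d.x; })
    .attr('cy', function(d){ return d.y; })
    .transition().attr('r', 20);

  // devices that disappear shrink away
  circles.exit().transition().attr('r', 0).remove();
}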

By this point a corresponding circle was displayed on each device; when anyone pressed their circle, the x/y coordinates were published with PubNub. When received by the presentation, the x/y speed on the underlying data is incremented accordingly - d3 does the rest.

Moving to the table of capabilities is just stopping the animation loop, then transitioning the elements to new positions and appending new elements for each capability. I <3 D3.

I then used 3 of Brad Frost’s slides from his blog post “this is the web”. Each slide sends a message out to all the devices, which stay in sync. The middle slide (This is the web) just displayed the word “web” on the devices rather than the image, to show that your device is part of that.

The last slide was a quote by Igor Stravinsky about the freedom of constraints. The slide only showed part of the quote (highlighted below) - the full quote was displayed on each of the devices.

My freedom thus consists in my moving about within the narrow frame that I have assigned to myself for each one of my undertakings. I shall go even further: my freedom will be so much the greater and more meaningful the more narrowly I limit my field of action and the more I surround myself with obstacles. Whatever diminishes constraint diminishes strength. The more constraints one imposes, the more one frees oneself of the claims that shackle the spirit.

I like that quote.


Graphing links 08 August 2013

This is an example of displaying content pulled from a PhantomJS webservice

I used this example when I talked about “serving websites to websites with PhantomJS” at this month’s Oxford Geek Nights

With phantomjs you can access more than just the HTML/DOM of a page - you can also get at how the page is eventually rendered in a browser. In this example, we pull out all the links on a page and find out what area (in pixels) they consume.

link areas on oxford geek nights

Using this script we can get a map of the links to the element areas, which looks something like this:

{
  "http://benjaminbenben.com/":6534,
  "http://bit.ly/Pesy75":10640,
  "http://lanyrd.com/cqfdw":18012,
  "http://mynameismartin.com/":8646,
  "http://oxford.geeknights.net/":84000,
  "http://oxford.geeknights.net/ogn29":52052,
  "http://oxford.geeknights.net/ogn30":52052,
  "http://oxford.geeknights.net/ogn31":52052,
  "http://oxford.geeknights.net/volunteer.html":29520,
  "http://torchbox.com/":4760,
  "http://twitter.com/oxfordgeeks":20112,
  "http://www.marianamota.com/":33128,
  "http://www.torchbox.com/":3360,
  "https://github.com/LuRsT":8580
}
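
The linked script isn’t reproduced here, but the core of it is a page.evaluate that sums up each link’s on-screen rectangle - roughly like this (a sketch, not the exact script; the target url is just an example):

var page = new WebPage();

page.open('http://oxford.geeknights.net/', function(){
  var areas = page.evaluate(function(){
    var result = {},
        links = document.querySelectorAll('a[href]');

    for (var i = 0; i < links.length; i++) {
      var rect = links[i].getBoundingClientRect();
      result[links[i].href] = (result[links[i].href] || 0) +
                              Math.round(rect.width * rect.height);
    }
    return result;
  });

  console.log(JSON.stringify(areas, null, 2));
  phantom.exit();
});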

Graphing the data

We can now pull in this data with AJAX and render it on the page using d3 (the force-directed graph layout).
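
A rough idea of the d3 (v3) side, assuming the area map above has been fetched into an `areas` object - one node per url, all linked back to the source page (sizes and layout parameters are illustrative):

var width = 600, height = 400;

var nodes = [{ name: 'source page' }],
    links = [];

Object.keys(areas).forEach(function(url){
  nodes.push({ name: url, area: areas[url] });
  links.push({ source: 0, target: nodes.length - 1 });
});

var svg = d3.select('body').append('svg')
  .attr('width', width).attr('height', height);

var force = d3.layout.force()
  .nodes(nodes).links(links)
  .charge(-200).linkDistance(80)
  .size([width, height])
  .start();

var circle = svg.selectAll('circle').data(nodes).enter().append('circle')
  .attr('r', function(d){ return d.area ? Math.sqrt(d.area) / 20 : 5; });

force.on('tick', function(){
  circle.attr('cx', function(d){ return d.x; })
        .attr('cy', function(d){ return d.y; });
});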

initially populated with this blog post, double click to check one of the urls.

examples


Using PhantomJS WebServer 28 July 2013

The PhantomJS WebServer module lets you create self contained web applications that are easy to deploy to heroku using the PhantomJS build pack.

I’ll be talking about this at Oxford geek nights on the 7th of August - come along if you’re in the area.

(tl;dr - deployed example here & more involved app here)

Let’s start with a base PhantomJS script - this loads the Oxfordshire lanyrd page and outputs the names of any upcoming events:

var page = new WebPage();

page.open("http://lanyrd.com/places/oxfordshire/", function(){
  var events = page.evaluate(function(){
    return $('.vevent .summary').map(function(e){ 
      return '* ' + this.innerText
    }).toArray().join('\n');
  });

  console.log('Upcoming Events in Oxfordshire:');
  console.log(events);

  phantom.exit();
});

This script can be run with phantomjs example.js and it will print the names of all upcoming events in the terminal - something like this:

Upcoming Events in Oxfordshire:
* Oxford Geek Night 32
* WitneyMeets
* XML Summer School 2013
* Sterling Geo Intergraph ERDAS UK UGM
* All Your Base Conference 2013
* jQuery UK 2014
* World Humanist Congress 2014

…super cool. Have a look at the quick start guide on the PhantomJS wiki to find out how this works and what other things are possible.

Using the webserver module

To expose this script with the webserver module, you have to add a few things:

// import the webserver module, and create a server
var server = require('webserver').create();

// start a server on port 8080 and register a request listener
server.listen(8080, function(request, response) {

  var page = new WebPage();

  page.open("http://lanyrd.com/places/oxfordshire/", function(){
    var events = page.evaluate(function(){
      return $('.vevent .summary').map(function(e){ 
        return '* ' + this.innerText
      }).toArray().join('\n');
    });

    // Rather than console logging, write the data back as a 
    // response to the user
    //
    // console.log('Upcoming Events in Oxfordshire:');
    // console.log(events);

    response.statusCode = 200;
    response.write('Upcoming Events in Oxfordshire:\n');
    response.write(events);
    response.close();

    // We want to keep phantom open for more requests, so we
    // don't exit the process. Instead we close the page to
    // free the associated memory heap
    //
    // phantom.exit();

    page.close();

  });
});

This can be run in the same way as the previous script - phantomjs example.js - then when you visit localhost:8080, you should see the list of events in your browser.

localhost:8080 - list of events from lanyrd

With phantomjs, you’re not limited to sending plain text back to the client - you can render images of the webpage and send that back (either by reading the file back with the File System Module, or using base 64 to send back an embeddable data-uri).
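For example, page.renderBase64 gives you a base64-encoded screenshot that can be dropped straight into a data-uri - inside the page.open callback that might look something like this (a sketch, swapping out the plain-text response above):

// send an embedded screenshot instead of plain text
var imageData = page.renderBase64('PNG');

response.statusCode = 200;
response.headers = { 'Content-Type': 'text/html' };
response.write('<img src="data:image/png;base64,' + imageData + '">');
response.close();

page.close();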

Deploying

There is a PhantomJS Buildpack for heroku which makes deploying lovely.

To get your app ready for deployment you have to do a few things:

Set the port based on environment variable PORT

var port = require('system').env.PORT || 8080; // default back to 8080
server.listen(port, function(request, response) {

Add a file named Procfile containing the command to spin it up:

web: phantomjs example.js

Commit your files to git then create a heroku app with the build pack

heroku create --stack cedar --buildpack http://github.com/stomita/heroku-buildpack-phantomjs.git
Creating quiet-lowlands-5118... done, stack is cedar
BUILDPACK_URL=http://github.com/stomita/heroku-buildpack-phantomjs.git
http://quiet-lowlands-5118.herokuapp.com/ | git@heroku.com:quiet-lowlands-5118.git
Git remote heroku added

Push your code up to heroku

git push heroku master
Counting objects: 10, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (10/10), 1.34 KiB, done.
Total 10 (delta 2), reused 0 (delta 0)

-----> Fetching custom git buildpack... done
-----> PhantomJS app detected
-----> Fetching PhantomJS 1.9.0 binaries at http://stomita-buildpack-phantomjs.s3.amazonaws.com/buildpack-phantomjs-1.9.0.tar.gz
-----> Extracting PhantomJS 1.9.0 binaries to /tmp/build_2idj4c8tadrpx/vendor/phantomjs
-----> Discovering process types
       Procfile declares types -> web
       Default types for PhantomJS -> console
-----> Compiled slug size: 15.5MB
-----> Launching... done, v5
       http://quiet-lowlands-5118.herokuapp.com deployed to Heroku

To git@heroku.com:quiet-lowlands-5118.git
 * [new branch]      master -> master

The example app should now be available on the reported url (in this case: http://quiet-lowlands-5118.herokuapp.com).

View Deployed Example Site · GitHub Source


A more involved example

I’ve put together a more complex version of this style of app - it allows you to specify any webpage, renders a screenshot and returns some information about the page (a list of links).

It also serves a static page with a form to submit the requests to the app. It’s deployed to phantomjs-webserver-example.herokuapp.com and the source code is on github.

I’ve tried to make it an easy project to modify for your own use - so fork away and have a hack!

screenshot of example code

View Demo · GitHub Source

A few issues / gotchas

  • Mongoose (the embedded server) doesn’t parse the POST parameters with the default jQuery contentType header ‘application/x-www-form-urlencoded; charset=UTF-8’; I had to drop the charset and it seemed to work okay.
  • Firefox has trouble parsing large data-uri strings in json objects, so I’ve split the image and json on separate lines and decode them when the request comes back (unfortunately Firefox fails to add the xhr header that fixes the mongoose error)
  • GET parameters aren’t parsed. I’d much sooner use a GET request for this app, as there’s not any state change and it would allow the responses to be cached. In wtcss I fudged this parsing.
  • Sometimes the page render returns a blank image, especially when on heroku and under stress. This is a known issue - a workaround is to wrap the .render in a setTimeout.

Finding which jQuery modules you need with shaker.io 01 July 2013

For a couple of releases now - it's been possible to build a customised version of jQuery. I felt that one of the barriers to using your own version was finding which modules your site actually uses - so I started working on shaker.io

shaker.io

Shaker.io is a tool for finding which modules of jQuery you’re using - it does this by providing an instrumented version of the library which tracks which functions you use.
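
The instrumentation idea is roughly this (an illustration of the technique, not shaker.io’s actual code): wrap every jQuery prototype function so that calling it records the name.

var used = {};

Object.keys(jQuery.fn).forEach(function(name){
  var original = jQuery.fn[name];
  if (typeof original !== 'function') return;
  if (name === 'init' || name === 'constructor') return; // leave construction alone

  jQuery.fn[name] = function(){
    used[name] = true;                       // record that this method ran
    return original.apply(this, arguments);  // then behave exactly as before
  };
});

// later, Object.keys(used) tells you which methods (and so which modules) the page relies on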

Transport

Once the script has tracked which modules have been used, the data is shared with the main page where the list of your dependencies can be updated. Eventually we’ll add a means of sending events between devices (so you can test on a phone, and see the results on a development machine).

Your custom domain is able to share data with shaker.io by embedding a hidden iframe which can share messages with the main window using storage events. See my blog post for more on those.

Early days

There’s a lot I want to do with this tool. I started with the goal of reducing the number of jQuery Mobile plugins I was using on a page (jQuery Mobile has a great download builder), though as I’ve been hacking about with the idea, it feels like it could do a lot more.

Openness

I’ve recently been working on a project with the Open Data Institute. One of the (many) things I’ve enjoyed is the open development; working in public does seem awkward/hard at first, though I found myself writing better code, getting useful feedback, and being able to collaborate much more easily.

With shaker.io - I’m going to make an effort to make development as transparent and open as possible. My hope is that people will be able to get involved with discussions on issues & pull requests. My dream is that people can get involved with improving the tool and adding functionality that I didn’t think of.

So far I’ve been trying to make the code as friendly and accessible as possible (no obscure templating languages, no fancy asset processing). Nodejitsu have provided the hosting under their “free for open source” plan.

White October

One of the things I’m most excited about is that White October (my employer) will be putting some time toward this project as part of an ongoing crusade of being totally awesome.

This week some of us will be looking at the project from a development, ux and design point of view. We’ll be throwing around a lot of ideas and feeding them into the site, so do keep an eye on the project and feel free to jump in on discussions or pull requests.

You can find the project at http://shaker.io and on github under benfoxall/shaker.


Visualising CSS selector matches 09 May 2013

I was working with a large css codebase and wanted to see if our rules were becoming more specific as the css source grew, so I built css.benjaminbenben.com to look at how css rules are applied to a page.

Active rules

This shows how many of the selectors are being used on a page; you can toggle to show only the active ones.

Overview

The ‘-’ link on the bottom right scales the rules so that they fit the height of the window. This is to show how the impact changes as rules are added to the css.

An example (with notes) of the jsOxford site is below:

How it works

The main part of this is a PhantomJS script which:

  1. loads the page
  2. extracts all stylesheet rules
  3. finds matching elements for each rule and gets the positions of them
  4. takes a screenshot

All this is sent back to the client in a json object (including the image as a data-uri).
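
In outline, the script looks something like this - a simplified sketch rather than the real wtcss source (the target url is just an example):

var page = new WebPage(),
    url = 'http://jsoxford.com/';

page.open(url, function(){
  var rules = page.evaluate(function(){
    var matches = [];

    for (var i = 0; i < document.styleSheets.length; i++) {
      var cssRules = document.styleSheets[i].cssRules || [];

      for (var j = 0; j < cssRules.length; j++) {
        var selector = cssRules[j].selectorText;
        if (!selector) continue;

        // find every element this rule matches and record where it sits
        var positions = [], els = [];
        try { els = document.querySelectorAll(selector); } catch(e) {}

        for (var k = 0; k < els.length; k++) {
          var r = els[k].getBoundingClientRect();
          positions.push({ top: r.top, left: r.left, width: r.width, height: r.height });
        }

        matches.push({ selector: selector, positions: positions });
      }
    }
    return matches;
  });

  // bundle the rule matches and a screenshot into one json payload
  var payload = {
    rules: rules,
    image: 'data:image/png;base64,' + page.renderBase64('PNG')
  };

  console.log(JSON.stringify(payload)); // in the real app this is the webserver response
  phantom.exit();
});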

The source code is now online at github.com/benfoxall/wtcss

Example pages

  • google.com - some styled elements are offscreen
  • hacker news - only 31 css selectors!
  • facebook - only 5% of rules match on landing page
  • css.benjaminbenben.com - yup, you can do that
  • white october - we used a custom bootstrap build, though you can see the gaps in the scaffolding sizes we didn’t use
  • jsoxford - you can see the rules at the bottom that we added to target specific elements

Cross window communication part 1 24 April 2013

I was part of the "Rising Stars" track at the jQuery UK conference this year where I talked about sending messages between browser windows. This post covers the first half of my talk - sending events between local windows.

My slides are now online, though they are prompts for me to talk rather than being full of information. The demos wouldn’t really work with it being publicly accessible, so I’m going to cover each of the techniques I mentioned on this blog.

The websockets/binaryJS/webRTC things are on the way - just working on getting the server side part hosted nicely.

postMessage

When you have a window that you can reference from js - either by getting an iframe from the DOM, or as returned by window.open() - you can use postMessage to communicate with that window (crucially, even if that window has a different origin).

// Send messages from parent window
var win = window.open('http://benjaminbenben.com/pink.html','','width=200');
document.onselectionchange = function(e){
	win.postMessage(document.getSelection().toString(), '*' );
}

// (on the target window) listen for messages 
window.addEventListener('message', function(e){
	echo.textContent = e.data;
});

demo - opens a window and sends it the text selection from this page

For more information about postMessage - check out the entry on MDN and John Resig’s blog post about it.


Storage Events

When you aren’t able to access a window directly, but it shares the same origin - you can use storage events to synchronise data between windows.

A storage event is fired when another window changes the localStorage for that page. By listening to these events - you can keep objects in sync across windows.

// listen for changes from other windows
window.addEventListener("storage", function(e){
	if(e.key == 'example') $('#el').css(JSON.parse(e.newValue));
}, false);

// update a local element and notify other windows of the change
$('#el').css({color:"red"});
localStorage.setItem('example', JSON.stringify({color:"red"}));

A nice side effect of this is that you have the state of an element persisted in localStorage, so you could render that on page load. See this gist for a general way of doing this.

This approach can become particularly interesting when the data being synced is displayed in different ways in different windows - in my talk I showed how the reveal.js slide deck could be viewed in both overview and normal views at the same time (see this gist to see how that can be implemented).

demo - move your mouse over the area below; any other windows open on this page will update


Reading QR codes from getUserMedia with web workers 14 April 2013

tl;dr - examples (currently requires chrome):

with web worker (should be smoother)

without web worker


Web workers let you take JavaScript execution off the main UI thread - which can be really useful if you are doing complex things with video.

I came across a javascript qr-code reader a few days ago. When I started using it to scan from a getUserMedia stream - it worked fine, but the extra processing was blocking the ui, which was particularly noticeable when you’re displaying the video.

I thought it was a pretty good candidate for taking the processing off to a web worker; which turned out pretty well.

Scanning QR code with getUserMedia

Once you’ve got the imageData from your canvas, you can run it through jsqrcode by setting attributes of the qrcode object, then calling .process():

qrcode.imagedata = imagedata;
qrcode.width = imagedata.width;
qrcode.height = imagedata.height;

var content = qrcode.process();
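
For context, grabbing that imageData from a getUserMedia-backed video looks roughly like this (the element lookup and sizes are illustrative, not the code from the examples):

var video = document.querySelector('video'),
    canvas = document.createElement('canvas'),
    ctx = canvas.getContext('2d');

canvas.width = video.videoWidth;
canvas.height = video.videoHeight;

// copy the current video frame onto the canvas, then read the pixels back
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
var imagedata = ctx.getImageData(0, 0, canvas.width, canvas.height);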

It was pretty straightforward to pull the code into a web worker, though I spent a bit of time before I realised that console.logs were making it fall over. Here’s the interface for responding to messages with the worker:

self.onmessage = function(event) {
    var imagedata = event.data;
    qrcode.imagedata = imagedata;
    qrcode.width = imagedata.width;
    qrcode.height = imagedata.height;

    var resp;
    try{
        resp = qrcode.process();
    } catch(e){
        resp = ''; // *mostly* "no code found"
    }
    postMessage(resp);
};

Back in the original page, you can create the worker and deliver messages to it using the .postMessage function. You can optionally list Transferable objects to efficiently move them to the web worker.

var worker = new Worker("jsqrcode/worker.js");
worker.onmessage = function(event) {
    console.log("qr code is:" + event.data);
}

// imagedata = ctx.getImageData(…)
worker.postMessage(imagedata, [imagedata.data.buffer]);

Jsqrcode is on github, as is my fork with the starts of the worker interface. You can either view source on the examples above, or view them on github.


Tweet Globe 07 April 2013

Plotting geocoded tweets on a globe with canvas


I gathered a few hours of geocoded tweets from the twitter streaming api (using the maptime code as a base). This was to explore some ideas that we’d been talking about at White October.

Drawing the globe is relatively straightforward. The lat/long pairs are converted into position vectors, which are then transformed based on the mouse position. The Sylvester library was pretty handy for transforming the points (Pete talked about Sylvester at jsoxford recently).
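
The lat/long conversion is the usual spherical-to-cartesian step - something like this (not the exact globe code):

// degrees in, a 3D position vector on a sphere of radius r out
function toVector(lat, lon, r){
  var phi = lat * Math.PI / 180,
      lambda = lon * Math.PI / 180;

  return [
    r * Math.cos(phi) * Math.cos(lambda),
    r * Math.sin(phi),
    r * Math.cos(phi) * Math.sin(lambda)
  ];
}

// e.g. toVector(51.75, -1.26, 200) gives a point for Oxford on a 200px globe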

The original plan was to animate this over a period of time, though it looked quite random/noisy so I went for this static view instead.

Gareth pointed me to a post about the effects in tron legacy which makes me want to make this a lot more awesome!


lllocal 05 April 2013

I built lllocal - which lets you find and listen to bands that will be playing in your area soon.

I’ve been thinking of this idea for a long time and I put up a public version a couple of months ago. It’s brilliant to get feedback from people and I’ve also got tickets to two events while testing it out for myself (Keaton Henson was great and we’re off to see Daughter in a couple of weeks).

My motivation for lllocal came from:

  • my personalised last.fm ‘events near Oxford’ are almost all in London. Granted, these bands are ones who I’d really like to see and Oxford is pretty close to London - though I wanted to see more live music without spending evenings on a bus.
  • WeGotTickets send out Spotify playlists to their mailing lists. I love this style of suggesting music for people to go and listen to.

Lllocal inspiration

The gig listings come from the last.fm api and the Spotify web apis are used to find and play the bands on spotify.

There is still a huge amount to do - though I’m really happy to have something online. If you have any feedback or suggestions, I’d love it if you got in touch or left a message on the feedback page.

Lllocal is online at lllocal.com.


Maptime 05 December 2012

In time for our first JS Oxford meet - I put together a small node app which reads geocoded tweets from the twitter streaming api and pushes them to the browser to display on a map.

This is a stripped down version of a project that I worked on at White October this summer. This version is not at all for production use (your browser will grind to a halt if you leave it running for a while!), though I hope it’s a good/interesting example of linking up server and client js.

The code is all up at the jsoxford github account; I’ll go over a few bits of it:

app.js

This is the main node.js file; it brings in some external packages:

  • ntwitter - for accessing the twitter streaming api
  • express - to serve some static files
  • faye - for sending messages between the server and the browser

Once these have been brought in, you can connect to the streaming api using ntwitter. This gives you access to a stream object, to which you can add listeners for new tweets using the stream.on() function (see the eventEmitter docs for more details).

twit.stream('statuses/filter', filterParams, function(stream) {
  // stream.on('data', yayFn)
});
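
For reference, twit and filterParams come from something like this - the credentials and the bounding-box filter here are illustrative rather than the project’s actual config:

var twitter = require('ntwitter');

var twit = new twitter({
  consumer_key: process.env.TWITTER_CONSUMER_KEY,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
  access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
  access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET
});

// a world-spanning bounding box means we only receive tweets that carry a location
var filterParams = { locations: '-180,-90,180,90' };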

We then want to serve some static files for our client side pages/scripts; you can use express to do this (express can do a whole lot more - if you want to have a look, I’d recommend using the executable to generate a basic app).

We also want to send data to the browser using faye, which has a really nice pubsub api based on the bayeux protocol. Attaching this to the http server will listen for websocket/ajax long-polling requests and serve a client js wrapper at /faye.js.

var express = require('express'),
    faye = require('faye'),
    http = require('http');

var app = express();
app.use(express.static(__dirname + '/public'));

var bayeux = new faye.NodeAdapter({mount: '/faye'});

var server = http.createServer(app);
bayeux.attach(server);
server.listen(3000);

Now, to link it all together - you can listen for events on the twitter stream, then publish them to a faye channel with the following code.

stream.on('data', function(data){
  if(data.geo)
    bayeux.getClient()
      .publish('/tweet', {
        geo: data.geo,
        text: data.text
      });
});

markers.html

Moving clientside (this code is in ./public and will be served to the browser), we first want to connect to the faye pubsub. To do this, we include the faye client library and connect to the endpoint that we mounted faye at on the server using Faye.Client.

<script type="text/javascript" src="/faye.js"></script>
<script type="text/javascript">
var client = new Faye.Client('/faye');
</script>

We’re using the google maps api to display the map and place the markers. The majority of the code for this is straight from the simple-markers example. (To get more of an introduction - have a look at the tutorial).

To get the tweet data from Faye, you use the client.subscribe function to listen to a channel - in this case we broadcast them over ‘/tweet’ from node.

var mapOptions = {
	// ...
};
var map = new google.maps.Map(document.getElementById("map"),mapOptions);

client.subscribe('/tweet', function(message) {
  if(message.geo && message.geo.coordinates){
    placeMarker(message.geo.coordinates);
  }
});

function placeMarker(coords){
  var latlng = new google.maps.LatLng(coords[0],coords[1]);
  new google.maps.Marker({
    position: latlng,
    map: map,
    title:""
  });
}

And that’s it! Have a look at the code on github (I’ve missed out a little bit of the surrounding bumf above) and have a play with it.

Also, if you are based around Oxford - come along to our next JSOxford meet on the 17th of January.


Lastfm Canvas Streamgraph 04 November 2012

A browser based last.fm streamgraph using canvas.

This is based on Lee Byron’s listening histories project. I love this project - it’s a really interesting and engaging visualisation, and the last.fm data makes it really personal (I can’t think of any other services that give as much personalised data as last.fm).

There are services that let you download a pdf streamgraph: lastgraph.aeracode.org & last.fm playground (if you’re a subscriber).

My version is different as all the api requesting and graph drawing are done in the browser - this lets you see your graph as soon as any data is ready.

I originally started by creating a large svg for the whole chart, though this became quite slow, so I used separate canvas elements for each week of data. This is slightly limiting - I couldn’t sort or colour the artists based on when they appear in your history (as the original does).


Out With The Old 25 September 2012

After more than a year of no posts - I've left my old blog behind.

This new one is built with Jekyll and published with GitHub Pages. The source is on github.

It seems that a lot of Jekyll sites start with a post about the interesting way that they have been deployed. So, for the record, I kept it simple and went for the github jekyll generator.