Journal tags: mashup


Pace layers and design principles

I think it was Jason who once told me that if you want to make someone’s life a misery, teach them about typography. After that they’ll be doomed to notice all the terrible type choices and kerning out there in the world. They won’t be able to unsee it. It’s like trying to unsee the arrow in the FedEx logo.

I think that Stewart Brand’s pace layers model is a similar kind of mind virus, albeit milder. Once you’ve been exposed to it, you start seeing it in all kinds of systems.

Each layer is functionally different from the others and operates somewhat independently, but each layer influences and responds to the layers closest to it in a way that makes the whole system resilient.

Last month I sent out an edition of the Clearleft newsletter that was all about pace layers. I gathered together examples of people who have been infected with the pace-layer mindworm and who are applying the same layered thinking to other areas.

My own little mash-up is applying pace layers to the World Wide Web. Tom even brought it to life as an animation.

See the Pen Web Layers Of Pace by Tom (@webrocker) on CodePen.

Recently I had another flare-up of the pace-layer pattern-matching infection.

I was talking to some visiting Austrian students on the weekend about design principles. I explained that my mild obsession with design principles stems from the fact that they sit between “purpose” (or values) and “patterns” (the actual outputs):

Purpose » Principles » Patterns

Your purpose is “why?”

That then influences your principles, “how?”

Those principles inform your patterns, “what?”

Hey, wait a minute! If you put that list in reverse order it looks an awful lot like the pace-layers model with the slowest moving layer at the bottom and the fastest moving layer at the top. Perhaps there’s even room for an additional layer when patterns go into production:

  • Production
  • Patterns
  • Principles
  • Purpose

Your purpose should rarely—if ever—change. Your principles can change, but not too frequently. Your patterns need to change quite often. And what you’re actually putting out into production should be constantly updated.

As you travel from the most abstract layer—“purpose”—to the most concrete layer—“production”—the pace of change increases.

I can’t tell if I’m onto something here or if I’m just being an apopheniac. Again.

Plumbing

On Monday, I linked to Tom’s latest video. It uses a clever trick whereby the title of the video is updated to match the number of views the video has had. But there’s a lot more to the video than that. Stick around and you’ll be treated to a meditation on the changing nature of APIs, from a shared open lake to a closed commercial drybed.
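
If you’re curious about the mechanics of the title trick, here’s a rough sketch of the general idea in PHP. This isn’t Tom’s actual code, and the video ID, API key and OAuth token are placeholders: read the view count with the YouTube Data API’s videos.list method, then write it back into the title with videos.update.

```php
<?php
// A minimal sketch of the view-count-in-the-title trick, assuming you already
// have a YouTube Data API key and an OAuth token with the youtube scope.
// $videoId, $apiKey and $oauthToken are placeholders, not real credentials.

$videoId    = 'YOUR_VIDEO_ID';
$apiKey     = 'YOUR_API_KEY';
$oauthToken = 'YOUR_OAUTH_TOKEN';

// 1. Read the current view count.
$stats = json_decode(file_get_contents(
    "https://www.googleapis.com/youtube/v3/videos?part=statistics&id={$videoId}&key={$apiKey}"
), true);
$views = $stats['items'][0]['statistics']['viewCount'];

// 2. Write the view count back into the title.
// (videos.update needs the snippet's categoryId as well as the title.)
$body = json_encode([
    'id'      => $videoId,
    'snippet' => [
        'title'      => "This video has {$views} views",
        'categoryId' => '22', // e.g. People & Blogs
    ],
]);

$ch = curl_init('https://www.googleapis.com/youtube/v3/videos?part=snippet');
curl_setopt_array($ch, [
    CURLOPT_CUSTOMREQUEST  => 'PUT',
    CURLOPT_POSTFIELDS     => $body,
    CURLOPT_HTTPHEADER     => [
        "Authorization: Bearer {$oauthToken}",
        'Content-Type: application/json',
    ],
    CURLOPT_RETURNTRANSFER => true,
]);
curl_exec($ch);
curl_close($ch);
```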

It reminds me of (other) Tom’s post from a couple of years ago called Pouring one out for the Boxmakers, wherein he talks about Twitter’s crackdown on fun bots:

Web 2.0 really, truly, is over. The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things… have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That’s it: back to single-serving websites for single-serving use cases.

A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it’s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.

Both Toms echo the sentiment in Anil’s The Web We Lost, written back in 2012:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

I know, I know. We’re a bunch of old men shouting at The Cloud. But really, Anil is right:

This isn’t our web today. We’ve lost key features that we used to rely on, and worse, we’ve abandoned core values that used to be fundamental to the web world. To the credit of today’s social networks, they’ve brought in hundreds of millions of new participants to these networks, and they’ve certainly made a small number of people rich.

But they haven’t shown the web itself the respect and care it deserves, as a medium which has enabled them to succeed. And they’ve now narrowed the possibilities of the web for an entire generation of users who don’t realize how much more innovative and meaningful their experience could be.

In his video, Tom mentions Yahoo Pipes as an example of a service that has been shut down for commercial and ideological reasons. In many ways, it was the epitome of what Anil was talking about—a sort of meta-API that allowed you to connect different services together. Kinda like IFTTT but with a visual interface that made it as empowering as something like the Scratch programming language.

There are services today that provide some of that functionality, but they’re more developer-focused. Trys pointed me to Pipedream, which looks good but you need to know how to write Node.js code and import npm packages. I’m sure it’s great if you’re into serverless Jamstack lambda thingamybobs but I don’t think it’s going to unlock the potential for non-coders to create cool stuff.

On the more visual pipes-esque Scratchy side, Cassie pointed me to Cables:

Cables is a tool for creating beautiful interactive content.

It isn’t about making mashups, but it does look like something that non-coders could potentially use to make something that looks cool. It reminds me a bit of Bret Victor and his classic talk on Inventing On Principle—always worth revisiting!

Radio Free Earth

Back at the first San Francisco Science Hack Day I wanted to do some kind of mashup involving the speed of light and the distance of stars:

I wanted to build a visualisation based on Matt’s brilliant light cone idea, but I found it far too daunting to try to find data in a usable format and come up with a way of drawing a customisable geocentric starmap of our corner of the galaxy. So I put that idea on the back burner…

At this year’s San Francisco Science Hack Day, I came back to that idea. I wanted some kind of mashup that demonstrated the connection between the time that light has travelled from distant stars, and the events that would have been happening on this planet at that moment. So, for example, a star would be labelled with “the battle of Hastings” or “the sack of Rome” or “Columbus’s voyage to America”. To do that, I’d need two datasets; the distance of stars, and the dates of historical events (leaving aside any Gregorian/Julian fuzziness).

For want of a better hack, Chloe agreed to help me out. We set to work finding a good dataset of stellar objects. It turned out that a lot of the best datasets from NASA were either about our local solar neighbourhood, or else really distant galaxies and stars that are emitting prehistoric light.

The best dataset we could find was the Near Star Catalogue from Uranometria but the most distant star in that collection was only 70 or 80 light years away. That meant that we could only mash it up with historical events from the twentieth century. We figured we could maybe choose important scientific dates from the past 70 or 80 years, but to be honest, we really weren’t feeling it.

We had reached this impasse when it was time for the Science Hack Day planetarium show. It was terrific: we were treated to a panoramic tour of space, beginning with low Earth orbit and expanding all the way out to the cosmic microwave background radiation. At one point, the presenter outlined the reach of Earth’s radiosphere. That’s the distance that ionosphere-penetrating radio and television signals from Earth, travelling at the speed of light, have reached. “It extends about 70 light years out”, said the presenter.

This was perfect! That was exactly the dataset of stars that we had. It was time for a pivot. Instead of the lofty goal of mapping historical events to the night sky, what if we tried to do something more trivial and fun? We could demonstrate how far classic television shows have travelled. Has Star Trek reached Altair? Is Sirius receiving I Love Lucy yet?

No, not TV shows …music! Now we were onto something. We would show how far the songs of planet Earth had travelled through space and which stars were currently receiving which hits.

Chloe remembered there being an API from Billboard, who have collected data on chart-topping songs since the 1940s. But that API appears to be gone, and the Echonest API doesn’t have chart dates. So instead, Chloe set to work screen-scraping Wikipedia for number one hits of the 40s, 50s, 60s, 70s …you get the picture. It was a lot of finding and replacing, but in the end we had a JSON file with every number one for the past 70 years.

Meanwhile, I was putting together the logic. Our list of stars had the distances in parsecs. So I needed to convert the date of a number one hit song into the number of parsecs that song had travelled, and then find the last star it had passed.

We were tempted—for developer convenience—to just write all the logic in JavaScript, especially as our data was in JSON. But even though it was just a hack, I couldn’t bring myself to write something that relied on JavaScript to render the content. So I wrote some really crappy PHP instead.
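
This isn’t the code from the hack, and the $stars array is an assumption about how we stored the catalogue, but the core logic is simple enough to sketch: a year of elapsed time equals a light year of distance, a parsec is roughly 3.26 light years, and the star the signal most recently passed is the farthest one inside that radius.

```php
<?php
// A sketch of the core logic, not the code from the hack. Assumes $stars is
// an array of ['name' => ..., 'parsecs' => ...] entries loaded from our
// star catalogue JSON.

define('LIGHT_YEARS_PER_PARSEC', 3.26156);

function starReceiving($numberOneDate, array $stars) {
    // Light travels one light year per year, so years elapsed = light years travelled.
    $lightYears = (time() - strtotime($numberOneDate)) / (365.25 * 24 * 60 * 60);
    $parsecs    = $lightYears / LIGHT_YEARS_PER_PARSEC;

    // Walk outwards through the catalogue and keep the farthest star the
    // signal has already passed.
    usort($stars, function ($a, $b) {
        return $a['parsecs'] <=> $b['parsecs'];
    });
    $lastPassed = null;
    foreach ($stars as $star) {
        if ($star['parsecs'] > $parsecs) {
            break;
        }
        $lastPassed = $star;
    }
    return $lastPassed; // null if the song hasn't reached any catalogued star yet
}

// e.g. starReceiving('1967-07-19', $stars) for a number one from the sixties.
```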

By the end of the first day, the functionality was in place: you could enter a date, and find out what was number one on that date, and which star is just now receiving that song.

After the sleepover (more like a wakeover) in the aquarium, we started to style the interface. I say “we” …Chloe wrote the CSS while I made unhelpful remarks.

For the icing on the cake, Chloe used her previous experience with the Rdio API to add playback of short snippets of each song (when it’s available).

Here’s the (more or less) finished hack:

Radio Free Earth.

Basically, it’s a simple mashup of music and space …which is why I spent the whole time thinking “What would Matt do?”

Just keep hitting that button to hear a hit from planet Earth and see which lucky star is currently receiving the signal.*

Science!

*I know, I know: the inverse-square law means it’s practically impossible that the signal would be in any state to be received, but hey, it’s a hack.

Next to Last.fm

I’m listening to Jessica play some music on iTunes and I can’t help but think what a shame it is that Last.fm has no knowledge of this. The MyWare of Last.fm only works for my devices: my iTunes, my Mobile Scrobbler. It would be nice if I could somehow let Last.fm know that I’m currently listening to the music of another Last.fm user. It would be even nicer if I didn’t need a computer to do it. Suppose I could just use my mobile phone to send a message to Twitter. Something like:

@lastfm listening to @wordridden

or:

@lastfm scrobbling @wordridden

There would need to be some corresponding method of switching off the link-up. I haven’t really thought it through that far. I’m just jotting down this idea in case anyone out there wants to try using the respective APIs to give this a whirl.
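
For what it’s worth, half of the idea is easy to sketch with the Last.fm API’s user.getRecentTracks method. The big assumption here (among others) is that the Twitter handle is also the Last.fm username, and I’ve left out the authenticated scrobbling call entirely:

```php
<?php
// A rough sketch of half the idea: parse an incoming tweet and look up what
// that user is currently playing via the Last.fm API (user.getRecentTracks).
// Big assumption: the Twitter handle is also the Last.fm username.
// The actual scrobbling (an authenticated track.scrobble call) is left out.

$tweet  = '@lastfm listening to @wordridden';
$apiKey = 'YOUR_LASTFM_API_KEY'; // placeholder

if (preg_match('/@lastfm\s+(?:listening to|scrobbling)\s+@(\w+)/i', $tweet, $matches)) {
    $username = $matches[1];
    $url = 'https://ws.audioscrobbler.com/2.0/?' . http_build_query([
        'method'  => 'user.getrecenttracks',
        'user'    => $username,
        'api_key' => $apiKey,
        'format'  => 'json',
        'limit'   => 1,
    ]);
    $data  = json_decode(file_get_contents($url), true);
    $track = $data['recenttracks']['track'][0];
    echo "{$username} is listening to {$track['name']} by {$track['artist']['#text']}\n";
}
```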

Location, location, location

A couple of months ago I wrote:

Jessica speculated a while back about reverse Google Maps. Suppose that when you entered an address, instead of just showing you the top-down view of that point on the planet, you also got to see how the sky would look from that point. Enter a postcode; view the corresponding starmap.

It isn’t in Google Maps yet but it is in Google Earth. The newest version features a button labeled “Switch between Sky and Earth”. This new Sky feature allows you to navigate photographs of space taken from the Palomar observatory and the Hubble telescope. It’s just one more example of what you can do with geodata.

Location information is the basis for a lot of the mashups out there—of which, Overplot remains my favourite. The possibilities in mashing up geodata with timestamps are almost limitless.

Getting datetime information is relatively easy. Every file created on a computer has a timestamp. Almost everything published on the Web is also timestamped: that’s the basis of lifestreams.

I look forward to the day when geostamps are as ubiquitous as timestamps. If every image, every blog post, every video, every sound file had a longitude and latitude as well as a date and time… I can’t even begin to imagine the possibilities that would open up.

I’m not the only one thinking about this. Responding to the question, what parts of the Web need to be improved or fixed in order for the Web of today to evolve into the Web of the future?, Jeff Veen writes:

I wish every device that was capable of talking to the network could send its geolocation. I’d like this to be fundamental—let’s send longitude and latitude in the HTTP header of every request. Let’s make it as ubiquitous and accessible as the time stamp, user agent, and referring URL.

This doesn’t necessarily mean that every electronic device needs to be geo-aware. As long as devices can communicate easily, you may only ever need one location-aware device. Suppose my phone has GPS or some other way of pinpointing location. As long as that device can communicate with my computer, perhaps using Bluetooth, then my computer can know my location: a very short string of two numbers. Once my computer has that data, my location can be broadcast and a whole ecosystem of services can be enhanced. Sites built around travel or events are the obvious winners but I can imagine huge benefits for music sites, photo sharing or any kind of social networking site that boils down to real-world activity.
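
Mechanically, it would be a tiny amount of plumbing. There’s no standard header for this as far as I know, so the Geo-Position header below is made up, but something along these lines would do the job:

```php
<?php
// A sketch of the idea, using a made-up header name ("Geo-Position" is not a
// standard HTTP header): the client attaches latitude;longitude to each
// request, and the server reads it back out of the request headers.

// Client side: send a request with a geostamp attached.
$ch = curl_init('https://example.com/nearby');
curl_setopt_array($ch, [
    CURLOPT_HTTPHEADER     => ['Geo-Position: 50.8225;-0.1372'], // Brighton, roughly
    CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($ch);
curl_close($ch);

// Server side: pick the geostamp out of the incoming request.
if (isset($_SERVER['HTTP_GEO_POSITION'])) {
    list($latitude, $longitude) = explode(';', $_SERVER['HTTP_GEO_POSITION']);
    // ...enhance the response with location-aware goodness...
}
```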

The technology isn’t quite ubiquitous enough yet and there are privacy concerns (though the granularity of geodata negates a lot of the worst fears) but I hope that as the usefulness of geodata becomes clearer, location enhanced services can really begin to bloom.

Mashed

It sounds like Mashup Camp was a hive of very productive activity. Kevin Lawver gave a presentation on portable social networks but instead of just talking about it, he wrote some Ruby code. Kevin is using OpenID for log in, followed by hCard parsing and XFN spidering (see also: Gavin Bell’s work). Superb stuff!
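
I haven’t seen Kevin’s Ruby, but the XFN-spidering half of the recipe is straightforward to sketch in PHP: fetch a page, look at each link’s rel attribute, and keep the ones that use XFN values.

```php
<?php
// Not Kevin's Ruby, but a sketch of the XFN-spidering part in PHP: fetch a
// page, find every link with a rel attribute, and note which XFN values it uses.

function spiderXFN($url) {
    $doc = new DOMDocument();
    @$doc->loadHTML(file_get_contents($url)); // suppress warnings from real-world markup

    $contacts = [];
    foreach ($doc->getElementsByTagName('a') as $link) {
        $rel = $link->getAttribute('rel');
        if ($rel === '') {
            continue;
        }
        // XFN values of interest: me, friend, met, colleague, acquaintance, contact.
        $values = array_intersect(
            explode(' ', strtolower($rel)),
            ['me', 'friend', 'met', 'colleague', 'acquaintance', 'contact']
        );
        if ($values) {
            $contacts[$link->getAttribute('href')] = array_values($values);
        }
    }
    return $contacts;
}

// e.g. print_r(spiderXFN('https://adactio.com/'));
```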

Meanwhile, Plaxo is now supporting OpenID and microformats thanks to the efforts of Kaliya and Chris.

And just in case you think that this is still a niche geek thing, here are the job details for Program Manager of Internet Explorer over at Microsoft:

Does the idea of redefining the role of the Internet browser appeal to you? Do the terms HTTP, RSS, Microformats, and OpenID, excite you? If so, then this just might be the opportunity for you.

Watching the stream

Ever since I hacked up my little life stream experiment and wrote about it, it’s been very gratifying to see how people have taken the idea and run with it. Emily Chang has written about the resources she came across when she was putting her life stream together. Sam Sethi has been talking about life streams as a rich vein of attention data (which reminded me of John Allsopp’s thoughts on why blogging as we know it is over).

Of course this idea of mashing up time-stamped (micro)content—usually through RSS—isn’t anything new. Tom Armitage touched on this during his presentation at Reboot in Copenhagen last year:

Whenever I publish anything with a date attached, there’s a framework for ongoing narrative. The item published is our narrative, but the date gives it ongoingness. It takes time for the pattern to emerge; initially, throwing data at that black box, it seems random. For instance: I upload photos to Flickr at arbitrary intervals. I go silent on my blog without explanation. It may seem, in the short-term, like a blip, but in the long-term, it’s an important part of my story. My blog is full of delicious bookmarks right now because I’ve been busy at work, and writing this talk. That’ll be reflected in the longer game, when I write my post-Reboot blog entry, and suddenly the pattern becomes clear.

If you haven’t yet done so, I strongly urge you to read the rest of Telling Stories — What Homer, Dickens, and Comic Books can teach your (social) software. It’s quite brilliant and discusses many issues that are even more relevant today with the rise of OpenID and the clamour for portable social networks.

Jeff Croft has been pioneering the life stream idea for quite a while now, originally calling it a tumblelog. His implementation uses APIs rather than plain ol’ RSS. He’s right in thinking that APIs are a more robust solution for long-term archiving but I think of my life stream as being a fleeting snapshot of current activity.

As Jeff points out:

The result is that most people’s lifestreams look great for the first several days back, but then get all sparse at the bottom, where only one or two sources are still providing information.

CSS to the rescue. I’ve updated my life stream to give vibrant colours to newer entries and faded, eventually illegible colours to older, less relevant content. It’s kind of like Shaun’s recent experiments with age and colour.
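
The fading doesn’t require anything clever. This isn’t my actual code, and it cheats by using opacity rather than colour, but each entry simply gets a style based on its age:

```php
<?php
// Not my actual code, and it cheats by using opacity instead of colour:
// each life stream entry gets an inline style based on how old it is.

function ageOpacity($timestamp, $maxDays = 14) {
    $days = (time() - $timestamp) / (60 * 60 * 24);
    // Fade from full strength (today) down to 0.2 over $maxDays days.
    return max(0.2, 1 - ($days / $maxDays) * 0.8);
}

// Assumes $items comes from the life stream script, newest first.
foreach ($items as $item) {
    printf(
        "<li style=\"opacity: %.2f\">%s</li>\n",
        ageOpacity($item['timestamp']),
        htmlspecialchars($item['title'])
    );
}
```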

I love APIs but when something as simple as RSS does the job, I’ll go for the simple solution every time (hence my love of microformats). In fact, I see RSS as being a kind of low-level short-term API or, as Rob Purdie put it, the vaseline of Web 2.0.

The ubiquity of RSS is what makes Yahoo Pipes possible. Now anybody can make a life stream by plugging in some RSS feeds into a pipe. Here’s one I made earlier. When I tried to do this a few days ago, I couldn’t get it to sort by date properly: it was sorting the pubDate field alphabetically—that seems to be fixed now.

Using Yahoo Pipes isn’t quite as straightforward as it could be. It still feels kind of techy and intimidating for non-geeks. This is the same problem that Ning used to have. Its services were ostensibly being provided so that non-techy people could start mashing stuff up but the presentation was impenetrably techy. That’s all changed now.

Ning has completely rebranded as a social network builder. Personally, I think this is a brilliant move. After just a few seconds on the front page, it’s absolutely clear what you can do. By providing example sites, they make the point even clearer. You can still make all the same stuff that you always could on Ning—videos, photos, blogs—but now it’s all wrapped up as part of a clearer goal: creating your own social network site.

When Yahoo Pipes launched, it looked like it might be competing directly with Ning. Now that’s not the case. The two services have diverged and are concentrating on different tasks for different audiences.

I’ll be keeping an eye on Ning to see how it deals with the issue of portable social networks. I’ll be watching Yahoo Pipes as a tool for creating life streams.

Streaming my life away

I’ve been playing around with Twitter, a neat little service from the people who brought you Odeo. You send it little text updates via SMS, the website, or Jabber. It’s intended as a piece of social software, but I think it has potential for more selfish uses.

Every time I ping Twitter, the message is time stamped. Every time I post a link to Del.icio.us, that’s time stamped. Every time I upload a picture to Flickr, a time stamp of when the picture was taken is also sent. Whenever I listen to a song on iTunes, the track information is sent to Last.fm with a time stamp. And of course whenever I blog, be it here, at the DOM Scripting blog or Principia Gastronomica, each entry has a permalink and a time stamp.

Just about every time somebody publishes something on the Web, it gets time stamped. Wouldn’t it be nice to pull in all these disparate bits of time stamped information and build up a timeline of online activity?

The technology is already in place. Most of the services I mention above have APIs. In this case, a full-blown API isn’t even necessary. Each service already offers an easily parsable XML file of activity ordered by time: RSS.

At the recent Take Back The Web event here in Brighton, Rob Purdie talked about RSS being the vaseline that’s greasing the wheels of Web 2.0. He makes a good point.

Over the course of any particular day, I could be updating five or six RSS feeds, depending on how much I’m blogging, how many links I’m posting, or how much music I’m listening to. I’d like to take those individual feeds and mush ‘em all up together.

There are a couple of services out there for mashing up RSS. FeedBurner is probably the most well known, but you are limited to a pre-set choice of RSS feeds that you can mix in. RSS Mix offers a more open-ended splicing service but it seems a bit confused when it comes to date ordering. There’s some other service I was playing around with last week but for the life of me, I can’t remember the name of it. All I remember is that it had an extremely annoying interface full of gratuitous Ajax.

I’ve mocked up my own little life stream, tracking my Twitter, Flickr, Del.icio.us, Last.fm, and blog posts. It’s a quick’n’dirty script that isn’t doing any caching. The important thing is that it’s keeping the context of the permalinks (song, link, photo, or blog post) and displaying them ordered by date and time. What I’d really like to do is display the same information in a more time-based interface: a calendar, or timeline.
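
The real source code is linked below; this is just a rough reconstruction of the approach (with placeholder feed URLs) rather than the actual script: fetch each feed, remember which service every item came from, and sort the lot by pubDate.

```php
<?php
// A rough reconstruction of the approach (not the actual script linked below):
// pull in several RSS 2.0 feeds, keep track of where each item came from,
// and sort everything by date. The feed URLs are placeholders.

$feeds = [
    'blog post' => 'https://example.com/journal/rss',
    'link'      => 'https://example.com/links/rss',
    'photo'     => 'https://example.com/photos/rss',
    'song'      => 'https://example.com/listening/rss',
];

$items = [];
foreach ($feeds as $context => $url) {
    $rss = @simplexml_load_file($url);
    if (!$rss) {
        continue; // one feed being down shouldn't break the whole stream
    }
    foreach ($rss->channel->item as $item) {
        $items[] = [
            'context'   => $context, // keep the context: song, link, photo or blog post
            'title'     => (string) $item->title,
            'permalink' => (string) $item->link,
            'timestamp' => strtotime((string) $item->pubDate),
        ];
    }
}

// Order by date and time, newest first.
usort($items, function ($a, $b) {
    return $b['timestamp'] <=> $a['timestamp'];
});

foreach ($items as $item) {
    printf(
        "%s: %s (%s)\n",
        date('j F Y, H:i', $item['timestamp']),
        $item['title'],
        $item['context']
    );
}
```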

Annoyingly, the Last.fm feed of recently listened to tracks disappears if you don’t listen to anything for a while. Grrr…

Update: Here’s the PHP source code.

Virtual trainspotting

The second day of BarCamp London is going great — I’m amazed at the energy and enthusiasm after a night of very little sleep for everyone. The lack of sleep can be attributed to Simon and his damn Werewolf game.

I’ve just seen the most wonderful presentation from the excellent Matthew Somerville. He works on They Work For You… and I just found out that he’s the guy who did the renegade accessible Odeon site!

He’s built a fantastic mashup of maps and train times. Maybe I shouldn’t be drawing attention to it because he’s getting the data by screen-scraping — because there is no National Rail API — but damn, this is sweet! You can find out when trains are due to arrive at a station. You can see the trains moving along the map. Click the checkbox to speed up the movement by a factor of ten.

See how Brighton is in the drop-down list of stations? Matthew added that in the middle of the presentation in response to my request. After all, I need to get back down to Brighton later today.

When mashups attack

In all the many mashups out there, Google Maps is probably the most used API (version 2 is out now).

One of the latest in the long line of map mixes is Gawker Stalker. It takes user-submitted celebrity sightings and displays them on a map of Manhattan.

Has Nick Denton gone too far this time? George Clooney certainly thinks so. Of course, for a site like Gawker, any publicity is good publicity. Jessica and Jesse are just so excited that George Clooney has noticed their existence.

Upcoming webolution

At the risk of becoming API-watch Central, I feel I must point out some nifty new features that have been added to Upcoming.org.

Andy and the gang have been diligently geotagging events using Yahoo’s geocoder API. Best of all, these latitude and longitude co-ordinates are now also being exposed through the API. Methinks Adactio Austin won’t be the last mashing up of event and map data I’ll be doing.

On the Upcoming site itself, you can now limit the number of attendees for an event, edit any venues you’ve added and edit your comments. This comes just a few days after Brian Suda mentioned in a chat that he would like to have the option to edit this comment later (right now he’s looking for somewhere to stay during XTech).

Feature wished for; feature added. This is exactly the kind of iterative, evolutionary growth that goes a long way towards what Kathy Sierra calls creating passionate users. By all accounts, her panel at South by Southwest was nothing short of outstanding. Everyone I spoke to who attended was raving about it for days. Muggins here missed it but I have a good excuse. I was busy signing freshly-purchased books, so I can’t complain.

Talking about microformats

My Adactio Austin mashup proved to be very useful during South by Southwest. It was very handy having instant access to the geographical location of the next party.

Austin being Austin, I didn’t have to worry much about getting online: the city is swimming/drenched/floating/saturated in WiFi. After attending Tantek’s birthday celebrations at La Sol Y La Luna restaurant, which is not located downtown, a bunch of us stood on the street and began hailing taxis to get back into the town centre. In an attempt to ascertain exactly where we needed to tell the cab driver to take us in order to reach the next party, I whipped out my iBook, hoping for a net connection. There were five networks. That’s my kind of town.

While I had anticipated that Adactio Austin would make the evenings run smoother, I hadn’t planned on it affecting my daytime activities. As it turned out, my little experiment landed me a place on a panel.

When Aaron and I were preparing our DOM Scripting presentation for this year’s conference, I made sure that we nabbed ourselves a slot on the first day. I wanted to get the work out of the way so that I could relax for the rest of the conference. It was a good plan but the use of microformats in my mashup prompted Tantek to ask me to sit in on his Monday morning panel. That’s how I found myself sitting behind a microphone together with Tantek, Chris and Norm, talking about the practical implementations of hCard and hCalendar.

I have to say it was one of the most relaxing and enjoyable talks I’ve ever given. We began the morning in a cafe geeking out about microformats, then we were in the green room geeking out about microformats and finally we were on stage geeking out about microformats. The movement from one location to the other went so smoothly that I felt as relaxed on the panel as I did in the cafe. I’m really glad Tantek asked me to say a few words.

Mind you, I probably came across as a complete booze hound. Tantek talked about the philosophy behind microformats, Chris talked about the tails extension for Flock, Norm talked about microformats at Yahoo! Europe… and I talked about where to go to get free beer. At this stage, I had also been doing some practical research in the field so I suspect my voice was somewhat raspy.

It was really interesting to compare the change in the perception of microformats within the space of one year. At South by Southwest 2005, there were two standout presentations for me: Eric and Tantek independently gave talks about this newfangled idea called microformats. At the time, I hadn’t even heard of the concept, so it was a real eye-opener for me. This year, microformats were a recognised, exciting technology. One week after SXSW, Bill Gates announced that “We need microformats”. That’s a lot of recognition.

From my experiences with my own humble experiments, I think there’s a lot of value to be had in mixing up events (using hCalendar) and mapping (using the Google Maps API). Throw tagging into the mix and you’ve got some pretty big steps towards a good lowercase semantic web.

Think about it: if you’ve got some kind of application that’s native to a web of data (as Tom so succinctly puts it), you’ve already got addressable objects (using the most basic RESTful interface of all: URLs). Now, if you can add geographical, temporal or semantic data to those resources (using geotagging, hCalendar, and tagging, respectively), you can increase the value of that data exponentially. Just think of all the mashup potential of that content.
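
To make that concrete, here’s what a single addressable event might look like once it’s wearing all three hats: hCalendar for the temporal data, geo for the geographical data, and rel-tag for the semantic data. The event itself is made up, and it’s echoed from PHP purely for convenience; the interesting part is the markup.

```php
<?php
// A made-up event marked up with hCalendar (temporal), geo (geographical)
// and rel-tag (semantic) microformats. Echoed from PHP purely for convenience;
// the interesting part is the markup itself.

echo <<<HTML
<div class="vevent">
  <a class="url summary" href="http://example.com/events/microformats-meetup">
    Microformats meetup
  </a>
  at <span class="location">The Hanbury Ballroom, Brighton</span>
  (<span class="geo">
    <abbr class="latitude" title="50.8225">50° 49' N</abbr>,
    <abbr class="longitude" title="-0.1372">0° 8' W</abbr>
  </span>)
  on <abbr class="dtstart" title="2006-03-23T19:00:00Z">March 23rd, 7pm</abbr>.
  Filed under <a rel="tag" href="http://example.com/tags/microformats">microformats</a>.
</div>
HTML;
```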

Dammit! In hindsight, I wish I could have nabbed Tom, Reverend Dan, Thomas and Tantek in Austin to have an impromptu brainstorm in the corridor about this stuff.