Journal tags: hacking

Indie webbing

The past weekend’s Indie Web Camp Brighton was wonderful! Many thanks to Mark and Paul for all their work putting it together.

There was a great turn-out. It felt like the perfect time for an Indie Web Camp. There’s a real appetite for getting away from ever more extractive silos and staking claim to our own corners of the web. Most of the attendees were at their first ever Indie Web Camp.

Paul asked me to oversee the schedule planning on day one, which I was happy to do. We made sure that first-timers got first dibs on proposing sessions. In the end, every single session was proposed by new attendees.

Day two was all about putting ideas into practice: coding, designing, and writing on our own websites. I’m always blown away by how much gets done in just one short day. Best of all is when there’s someone who starts the weekend without their own website but finishes with a live site. That happened again this time.

I spent the second day tinkering with something I started at Indie Web Camp Nuremberg in October. Back then, I got related posts working here on my journal: a list of suggested follow-up posts to read based on the tags of the current post.

I wanted to do the same for my links: show links related to the one I’m currently linking to. It didn’t take too long to get that up and running.

But then I thought about it some more and realised it would be good to also show blog posts related to the link. So I did that. Then I realised it would be really good to show related links under blog posts too.

So now, if everything’s working correctly, at the end of this post you will see not only related blog posts I’ve previously written, but also links related to the content of this post.

It was a very inspiring weekend. There’s something about being in a room with other people working on their websites that makes me super productive.

While we were hacking away on day two, somebody mentioned that they still find it hard to explain the indie web to people.

“It’s having your own website”, I said.

But surely there’s more to it than that, they wondered.

Nope. If someone has their own website, then they’re part of the indie web. It doesn’t matter if that website is made with a complicated home-rolled tech stack or if it’s a Squarespace site.

What you do with your own website is entirely up to you. The technologies are just plumbing, whether it’s webmentions, RSS, or anything else. None of it is a requirement. Heck, even HTML is optional. If you want to put plain text files on your website, go for it. It’s your website.

Indie Web Camp Nuremberg

After two days at border:none in Nuremberg, it was time for two days at Indie Web Camp, also in Nuremberg.

I hadn’t been to an Indie Web Camp since before The Situation. It felt very good to be back. I had almost forgotten how inspiring and productive they can be.

This one had a good turnout of around twenty people. We had ourselves an excellent first day of thought-provoking sessions. Then on day two it was time to put some of those ideas into action.

A little trick I like to do on the practical day is to have two tasks to attempt: one of them quite simple, and the other more ambitious. That way, as long as I get the simpler task done, I’ll always have at least something to demo at the end of the day.

This time I attempted three bits of home improvement on my website.

Autolinking Mastodon usernames

The first problem I set myself was ostensibly the simple one. But it involved regular expressions, so then I had two problems.

I wanted to automatically link up Mastodon usernames if I mentioned one in my notes. For example, during border:none I mentioned Brian’s Mastodon username in a note: @briansuda@loðfíll.is.

That turned out to be an excellent test case. Those Icelandic characters made sure I wasn’t making unwarranted assumptions about character sets.

Here’s the regular expression I came up with. It’s not foolproof by any means. Basically it looks for @something@something.something.
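
It went something like this—a reconstruction of the idea rather than the exact pattern I shipped, so treat it as a sketch:

<?php
// A sketch, not necessarily the pattern I shipped. The /u modifier
// makes the pattern UTF-8 aware, and \p{L} matches any Unicode
// letter—that’s what keeps loðfíll.is happy.
function autolinkMastodon(string $text): string {
    $pattern = '/@([A-Za-z0-9_]+)@((?:[\p{L}\p{N}-]+\.)+\p{L}+)/u';
    // https://instance/@username is the conventional profile URL
    return preg_replace($pattern, '<a href="https://$2/@$1">$0</a>', $text);
}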

Good enough. Ship it.

Related posts

My next task was a bit more ambitious. It involved SQL queries, something I’m slightly better at than regular expressions, but that’s a very low bar.

I wanted to show related posts when you get to the end of one of my blog posts.

I’ve been tagging all my blog posts for years so that’s the mechanism I used for finding similar posts. There’s probably a clever SQL statement that could do this, but I ended up brute-forcing it a bit.

I don’t feel too bad about the hacky, clunky nature of my solution, because I cache blog post pages. That means only the first person to view the blog post (usually me) will suffer any performance impacts from my clunky database queries. After that everything’s available straight from a cached file.

Let’s say you’re reading a blog post of mine that I’ve tagged with ten different keywords. I make a separate SQL query for each keyword to get all the other posts that use that tag. Then it’s a matter of sorting through all the results.

I loop through the results of each tag and apply a score to the tagged post. If the post shares one tag with the post you’re looking at, it has a score of one. If it shares two tags, it has a score of two, and so on.

I decided that for a post to be considered related, it had to share at least three tags. I also decided to limit the list of related posts to a maximum of five.
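
In pseudo-PHP, the scoring works out something like this—a sketch of the logic rather than my actual code, with getPostsByTag() standing in for the per-tag SQL query:

<?php
// A sketch of the scoring logic. getPostsByTag() is a hypothetical
// stand-in for the one-query-per-tag lookup described above.
function getRelatedPosts(array $tags, int $currentPostID): array {
    $scores = [];
    foreach ($tags as $tag) {
        foreach (getPostsByTag($tag) as $postID) {
            if ($postID === $currentPostID) {
                continue; // don’t relate a post to itself
            }
            // one point for every tag shared with the current post
            $scores[$postID] = ($scores[$postID] ?? 0) + 1;
        }
    }
    // a post has to share at least three tags to count as related
    $scores = array_filter($scores, function ($score) {
        return $score >= 3;
    });
    arsort($scores); // most shared tags first
    return array_slice(array_keys($scores), 0, 5); // five at most
}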

It worked out pretty well. If you scroll down on my recent post about JavaScript, you’ll see links to related posts about JavaScript. If you read through a post on accessibility testing, you’ll find other posts about accessibility testing. If you make it to the end of this post about Mars colonisation you’ll see links to more posts about exploring our solar system.

Right now I’m just doing this for my blog but I’d like to do it for my links too. A job for a future Indie Web Camp.

Link rot

I was very inspired by Remy’s recent post on how he’s tackling link rot on his site. I wanted to do the same for mine.

On the first day at Indie Web Camp I led a session on link rot to gather ideas and alternative approaches. We had a really good discussion, though it’s always worth bearing in mind that there’ll never be a perfect solution. There’ll always be some false positives and some false negatives.

The other Jeremy at Indie Web Camp Nuremberg blogged about the session. Sebastian Greger was attending remotely and the session inspired him to spend the second day also tackling link rot.

In the end I decided to stick with Remy’s two-pronged approach:

  1. a client-side script that—as a progressive enhancement—intercepts outbound links and re-routes them to
  2. a server-side script that redirects to the Internet Archive if the link is broken.

Here’s the JavaScript I wrote for the first part.

It’s very similar to Remy’s but with one little addition. I check to see if the clicked link is inside an h-entry and if it is, I pass on the date from the post’s dt-published value.

Here’s the PHP I wrote for the server-side redirector. The comments tell the story of what the code is doing:

  • Check that the request is coming from my site.
  • There also has to be a URL provided in the query string.
  • Make a very quick curl request to get the response headers from the URL. The time limit is set to 1 second.
  • If there was any error (like a time out), give up and go to the URL.
  • Pick the response headers apart to get the HTTP status code.
  • If the response is OK, go to the URL.
  • If the response is a redirect, go around again but this time use the redirect URL.
  • Construct the archive.org search endpoint.
  • If we have a date, provide it. Otherwise ask for the latest snapshot.
  • Ping that archive.org URL. This time there’s no time limit; this might take a while.
  • If there’s an archived copy, redirect to that.
  • There’s no archived copy. Give up and go to the URL anyway.
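
Condensed into a single script, that flow looks roughly like this. It’s a sketch rather than the actual code—the referrer check is skipped and the real thing is more defensive:

<?php
// A condensed sketch of the flow above (referrer check omitted).
// Assumes the script is called with ?url=… and an optional ?date=….
$url  = $_GET['url'] ?? null;
$date = $_GET['date'] ?? null;
if (!$url) {
    exit;
}

// Quick request for just the response headers, with a one second
// time limit, letting curl follow any redirects along the way.
$curl = curl_init($url);
curl_setopt_array($curl, [
    CURLOPT_NOBODY         => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_TIMEOUT        => 1,
]);
curl_exec($curl);
$status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
curl_close($curl);

// If the link still resolves—or if the check itself errored out—
// just go to the URL.
if (($status >= 200 && $status < 400) || $status === 0) {
    header('Location: ' . $url);
    exit;
}

// Otherwise ask the Wayback Machine for a snapshot: the closest one
// to the post’s date if we have a date, the latest one if we don’t.
// No time limit on this request; it might take a while.
$endpoint = 'https://archive.org/wayback/available?url=' . urlencode($url);
if ($date) {
    $endpoint .= '&timestamp=' . $date;
}
$snapshots = json_decode(file_get_contents($endpoint), true);
$archived  = $snapshots['archived_snapshots']['closest']['url'] ?? $url;
header('Location: ' . $archived);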

Not perfect by any means, but it works for the most common cases of link rot.

For the demo at the end of the day I went back into my archive of over 10,000 links and plucked out some old posts, like this one from December 2005. It takes a little while to do the rerouting but eventually you get to see the archived version from the same time period as when I linked to it.

Here’s another link from 2005. Here’s another. Those links are broken now, but with a little patience, you’ll still get to read them on the Internet Archive.

The Internet Archive’s Wayback Machine really is a gift. I can’t imagine how it would be even remotely possible to try to address link rot on my site without archive.org.

I will continue to donate money to the Internet Archive and I encourage you to do the same.

Replies

Last week was a bit of an event whirlwind. In the space of seven days I was at Indie Web Camp, Beyond Tellerrand, and Accessibility Club in Düsseldorf, followed by a train ride to Utrecht for Frontend United. Phew!

Indie Web Camp Düsseldorf was—as always—excellent. Once again, Sipgate generously gave us the use of their lovely, lovely space for the weekend. We had one day of really thought-provoking discussions, followed by a day of heads-down hacking and making.

I decided it was time for me to finally own my replies. For a while now, I’ve been posting notes on my own site and syndicating to Twitter. But whenever I replied to someone else’s tweet, I did it from Twitter. I wanted to change that.

From a coding point of view, it wasn’t all that tricky. The real challenges were to do with the interface. I needed to add another field for the URL I’m replying to …but I didn’t want my nice and minimal posting interface to get too cluttered. I ended up putting the new form field inside a details element with a summary of “Reply to” so that the form field would be hidden by default, and toggled open by hitting that “Reply to” text:

<details>
    <summary>
        <label for="replyto">Reply to</label>
    </summary>
    <input type="url" id="replyto" name="replyto">
</details>

I sent my first test reply to a post on Aaron’s website. Aaron was sitting next to me at the time.

Once that was all working, I sent my first reply to a tweet. It was a response to a tweet from Tantek. Tantek was also sitting next to me at the time.

I spent most of the day getting that Twitter syndication to work. I had something to demo, but I foolishly decided to risk it all by attempting to create a bookmarklet so that I could post directly from a tweet page (instead of hopping back to my own site in a different tab). By cannibalising the existing bookmarklet I use for posting links, I just about managed to get it working in time for the end of day demos.

So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.

Then again, I kind of like how wonderfully random and out-of-context they look. You can browse through all my replies so far.

I’m glad I got this set up. Now when Andy posts stuff on Twitter, I’m custodian of my responses:

@AndyBudd: Who are your current “Design Heroes”?

adactio.com: I would say Falcor from Neverending Story, the big flying dog.

Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archaeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.

I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see the explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.

Et voilà!

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book which was on our desk at all times

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

WorldWideWeb

Nine people came together at CERN for five days and made something amazing. I still can’t quite believe it.

Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.

Want to see the final result? Here you go:

worldwideweb.cern.ch

That’s the website we built. The call to action is hard to miss:

Launch WorldWideWeb

Behold! A simulation of using the first ever web browser, recreated inside your web browser.

Now you could try clicking around on the links on the opening document—remembering that you need to double-click on links to activate them—but you’ll quickly find that most of them don’t work. They’re long gone. So it’s probably going to be more fun to open a new page to use as your starting point. Here’s how you do that:

  1. Select Document from the menu options on the left.
  2. A new menu will pop open. Select Open from full document reference.
  3. Type a URL, like, say https://adactio.com
  4. Press that lovely chunky Open button.

You are now surfing the web through a decades-old interface. Double-click on a link to open it. You’ll notice that it opens in a new window. You’ll also notice that there’s no way of seeing the current URL. Back then, the idea was that you would navigate primarily by clicking on links, creating your own “associative trails”, as first envisioned by Vannevar Bush.

But the WorldWideWeb application wasn’t just a browser. It was a Hypermedia Browser/Editor.

  1. From that Document menu you opened, select New file…
  2. Type the name of your file; something like test.html
  3. Start editing the heading and the text.
  4. In the main WorldWideWeb menu, select Links.
  5. Now focus the window with the document you opened earlier (adactio.com).
  6. With that window’s title bar in focus, choose Mark all from the Links menu.
  7. Go back to your test.html document, and highlight a piece of text.
  8. With that text highlighted, click on Link to marked from the Links menu.

If you want, you can even save the hypertext document you created. Under the Document menu there’s an option to Save a copy offline (this is the one place where the wording of the menu item isn’t exactly what was in the original WorldWideWeb application). Save the file so you can open it up in a text editor and see what the markup would’ve looked like.

I don’t know about you, but I find this utterly immersive and fascinating. Imagine what it must’ve been like to browse, create, and edit like this. Hypertext existed before the web, but it was confined to your local hard drive. Here, for the first time, you could create links across networks!

After five days time-travelling back thirty years, I have a new-found appreciation for what Tim Berners-Lee created. But equally, I’m in awe of what my friends created thirty years later.

Remy did all the JavaScript for the recreated browser …in just five days!

Kimberly was absolutely amazing, diving deep into the original source code of the application on the NeXT machine we borrowed. She uncovered some real gems.

Of course Mark wanted to make sure the font was as accurate as possible. He and Brian went down quite a rabbit hole, and with remote help from David Jonathan Ross, they ended up recreating entire families of fonts.

John exhaustively documented UI patterns that Angela turned into marvellous HTML and CSS.

Through it all, Craig and Martin put together the accompanying website. Personally, I think the website is freaking awesome—it’s packed with fascinating information! Check out the family tree of browsers that Craig made.

What a team!

Back at CERN

We got the band back together.

In September of 2013, I had the great pleasure and privilege of going to CERN with a bunch of very smart people. I’m not sure how I managed to slip by. We were there to recreate the experience of using the line-mode browser. As I wrote at the time:

Just to be clear, the line-mode browser wasn’t the world’s first web browser. That honour goes to Tim Berners-Lee’s WorldWideWeb programme. But whereas WorldWideWeb only ran on NeXT machines, the line-mode browser worked cross-platform and was, therefore, instrumental in demonstrating the power of the web as a universally-accessible medium.

In the run-up to the 30th anniversary of the original (vague but exciting) proposal for what would become the World Wide Web, we’ve been invited back to try to recreate the experience of using that first web browser, the one that only ever ran on NeXT machines.

I missed the first day due to travel madness—flying back from Interaction 19 in Seattle during snowmageddon to Heathrow and then to Geneva—but by the time I arrived, my hackmates had already made a great start in identifying the objectives:

  1. Give people an understanding of the user experience of the WorldWideWeb browser.
  2. Demonstrate that a read/write philosophy was there from the beginning.
  3. Give context—what was going on at the time?

That second point is crucial. WorldWideWeb wasn’t just a web browser; it was a browser/editor. That’s by far the biggest change in terms of the original vision of the web and what we ended up getting from Mosaic onwards.

Remy is working hard on the first point. He documented the first day, and now, on the second day, he’s already made enormous progress.

I’m focusing on point number three. I want to show the historical context for the World Wide Web. Here’s my plan…

Seeing as we’re coming up on the thirtieth anniversary, I thought it would be interesting to take the year of the proposal (1989) and look back in a time cone of thirty years previous to that at the influences on Tim Berners-Lee. I also want to look at what has happened with the web in the thirty years since the proposal. So the date of the proposal will be a centre point, with the timespan of 1959-1989 converging on it from the past, and the timespan of 1989-2019 diverging from it into the future. I hope it could make for a nice visualisation. Maybe I could try to get it to look like data from a particle collision.

We’re here till the weekend and everyone else has already made tremendous progress. Kimberly has been hacking the Gibson …well, that’s what it looked like when she was deep in the code of the NeXT machine we’ve borrowed from Musée Bolo (merci beaucoup!).

We took a little time out for a tour of the data centre. Oh, and at lunch time, we sat with Robert Cailliau and grilled him with questions about the birth of the web. Quite a day!

Now it’s time for me to hit the hay and prepare for another day of hacking in this extraordinary place.

The road to Indie Web Camp LA

After An Event Apart San Francisco, which was—as always—excellent, it was time for me to get to the next event: Indie Web Camp Los Angeles. But I wasn’t going alone. Tantek was going too, and seeing as he has a car—a convertible, even—what better way to travel from San Francisco to LA than on the Pacific Coast Highway?

It was great—travelling through the land of Steinbeck and Guthrie at the speed of Kerouac and Springsteen. We stopped for the night at Pismo Beach and then continued on, rolling into Santa Monica at sunset.

Half Moon Bay. Roadtripping with @t. Pomponio beach. Windswept. Salinas. Refueling. Driving through the Californian night. Pismo Beach. On the beach. On the beach with @t. Stopping for a coffee in Santa Barbara. Leaving Pismo Beach. Chevron. Santa Barbara steps. On the road. Driving through Malibu. Malibu sunset. Sun worshippers. Sunset in Santa Monica.

The weekend was spent in the usual Indie Web Camp fashion: a day of BarCamp-style discussions, followed by a day of hacking on our personal websites.

I decided to follow on from what I did at the Brighton Indie Web Camp. There, I made a combined tag view—a way of seeing, for example, everything tagged with “indieweb” instead of just journal entries tagged with “indieweb” or links tagged with “indieweb”. I wanted to do the same thing with my archives. I have separate archives for my journal, my links, and my notes. What I wanted was a combined view.

After some hacking, I got it working. So now you can see combined archives by year, month, and day (I managed to add a sparkline to the month view as well):

I did face a bit of a conundrum. Both my home page stream and my tag pages show posts in reverse chronological order, with the newest posts at the top. I’ve decided to replicate that for the archive view, but I’m not sure if that’s the right decision. Maybe the list of years should begin with 2001 and end with 2016, instead of the other way around. And maybe when you’re looking at a month of posts, you should see the first posts in that month at the top.

Anyway, I’ll live with it in reverse chronological order for a while and see how it feels. I’m just glad I managed to get it done—I’ve been meaning to do it for quite a while. Once again, I’m amazed by how much gets accomplished when you’re in the same physical space as other helpful, motivated people all working on improving their indie web presence, little by little.

Greetings from Indie Web Camp LA. Indie Web Camping. Hacking away. Day two of Indie Web Camp LA.

Indie Web Camp Brighton 2016

Indie Web Camp Brighton 2016 is done and dusted. It’s hard to believe that it’s already in its fifth(!) year. As with previous years, it was a lot of fun.

IndieWebCampBrighton2016

The first day—the discussions day—covered a lot of topics. I led a session on service workers, where we brainstormed offline and caching strategies for personal websites.

There was a design session looking at alternatives to simply presenting everything in a stream. Some great ideas came out of that. And there was a session all about bookmarking and linking. That one really got my brain whirring with ideas for the second day—the making/coding day.

I’ve learned from previous Indie Web Camps that a good strategy for the second day is to have two tasks to tackle: one that’s really easy (so you’ve at least got that to demo at the end), and one that’s more ambitious. This time, I put together a list of potential goals, and then ordered them by difficulty. By the end of the day, I managed to get a few of them done.

First off, I added a small bit of code to my bookmarking flow, so that any time I link to something, I send a ping to the Internet Archive to grab a copy of that URL. So here’s a link I bookmarked to one of Remy’s blog posts, and here it is in the Wayback Machine—see how the date of storage matches the date of my link.

The code to do that was pretty straightforward. I needed to hit this endpoint:

http://web.archive.org/save/{url}
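
A minimal fire-and-forget version of that ping might look something like this:

<?php
// A minimal fire-and-forget ping: request the save URL with a short
// time limit so the posting flow isn’t held up by a slow response.
function pingInternetArchive(string $url): void {
    $curl = curl_init('http://web.archive.org/save/' . $url);
    curl_setopt_array($curl, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 10,
    ]);
    curl_exec($curl);
    curl_close($curl);
}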

I also updated my bookmarklet for posting links so that, if I’ve highlighted any text on the page I’m linking to, that text is automatically pasted into the description.

I tweaked my webmentions a bit so that if I receive a webmention that has a type of bookmark-of, that is displayed differently to a comment, or a like, or a share. Here’s an example of Aaron bookmarking one of my articles.

The more ambitious plan was to create an over-arching /tags area for my site. I already have tag-based navigation for my journal and my links:

But until this weekend, I didn’t have the combined view:

I didn’t get around to adding pagination. That’s something I should definitely add, because some of those pages get veeeeery long. But I did spend some time adding sparklines. They can be quite revealing, especially on topics that were hot ten years ago but have faded over time, or topics that have become more and more popular with each year.

All in all, a very productive weekend.

Save the dates for Indie Web Camp Brighton 2016

September 24th and 25th—those are the dates you should put in your diary. That’s when this year’s Indie Web Camp Brighton is happening.

Once again it’ll be at 68 Middle Street, home to Clearleft. You can register for free now, and then add your name to the list of participants on the wiki.

If you haven’t been to an Indie Web Camp before, it’s a very straightforward proposition. The idea is that you should have your own website. That’s it. Everything else is predicated on that. So while there’ll be plenty of discussions, demos, and designs, they’re all in service to that fundamental premise.

The first day of an Indie Web Camp is like a BarCamp. We make a schedule grid at the start of the day and people organise topics by room and time slot. It sounds chaotic. It is chaotic. But it works surprisingly well. The discussions can be about technologies, or interfaces, or ideas, or just about anything really.

The second day is for making. After the discussions from the previous day, most people will have a clear idea at this point for something they might want to do. It might involve adding some new technology to their website, or making some design changes, or helping build a tool. For people starting from scratch, this is the perfect time for them to build and launch a basic website.

At the end of the second day, everyone demos what they’ve done. I’m always amazed by how much people can accomplish in just one weekend. There’s something about having other people around to help you that makes it super productive.

You might be thinking “but I’m not a coder!” Don’t worry—there’ll be plenty of coders there so you can get their help on whatever you might decide to do. If you’re a designer, your skills will be in high demand by those coders. It’s that mish-mash of people that makes it such a fun gathering.

Last year’s Indie Web Camp Brighton was lots of fun. Let’s make Indie Web Camp Brighton 2016 even better!

Indie Web Camp Brighton group photo

Notice

We’ve been doing a lot of soul-searching at Clearleft recently; examining our values; trying to make implicit unspoken assumptions explicit and spoken. That process has unearthed some activities that have been at the heart of our company from the very start—sharing, teaching, and nurturing. After all, Clearleft would never have been formed if it weren’t for the generosity of people out there on the web sharing with myself, Andy, and Richard.

One of the values/mottos/watchwords that’s emerging is “Share what you learn.” I like that a lot. It echoes the original slogan of the World Wide Web project, “Share what you know.” It’s been a driving force behind our writing, speaking, and events.

In the same spirit, we’ve been running internship programmes for many years now. John is the latest of a long line of alumni that includes Anna, Emil, and James.

By the way—and this should go without saying, but apparently it still needs to be said—the internships are always, always paid. I know that there are other industries where unpaid internships are the norm. I’ve even heard otherwise-intelligent people defend those unpaid internships for the experience they offer. But what kind of message does it send to someone about the worth of their work when you withhold payment for it? Our industry is young. Let’s not fall foul of the pernicious traps set by older industries that have habitualised exploitation.

In the past couple of years, Andy concocted a new internship scheme:

So this year we decided to try a different approach by scouring the end of year degree shows for hot new talent. We found them not in the interaction courses as we’d expected, but from the worlds of Product Design, Digital Design and Robotics. We assembled a team of three interns, with a range of complementary skills, gave them a space on the mezzanine floor of our new building, and set them a high level brief.

The first such programme resulted in Chüne. The latest Clearleft internship project has just come to an end. The result is Notice.

This time ‘round, the three young graduates were Chloe, Chris and Monika. They each have differing but complementary skill sets: Chloe is a user interface designer; Chris is a product designer; Monika is an artist who knows her way around hardware hacking and coding.

I’ll miss having this lot in the Clearleft office.

Once again, they were set a fairly loose brief. They should come up with something “to enrich the lives of local residents” and it should have a physical and digital component to it.

They got stuck into researching and brainstorming ideas. At the end of each week, we’d all gather together to get a playback of what they were coming up with. It was at these playbacks that the interns were introduced to a concept that they will no doubt encounter again in their professional lives: seagulling AKA the swoop and poop. For once, it was the Clearlefties who were in the position of being swoop-and-poopers, rather than swoop-and-poopies.

Playback at Clearleft

As the midway point of the internship approached, there were some interesting ideas, but no clear “winner” to pursue. Something else was happening around this time too: dConstruct 2015.

Chloe, Monika and Chris at dConstruct

The interns pitched in with helping out at the event, and in return, we kidnapped some of the speakers—namely John Willshire and Chris Noessel—to offer them some guidance.

There was also plenty of inspiration to be had from the dConstruct talks themselves. One talk in particular struck a chord: Dan Hill’s The City Of Things …especially the bit where he railed against the terrible state of planning application notices:

Most of the time, it ends up down the bottom of the lamppost—soiled and soggy and forgotten. This should be an amazing thing!

Hmm… sounds like something that could enrich the lives of local residents.

Not long after that, Matt Webb came to visit. He encouraged the interns to focus in on just the two ideas that really excited them rather than the five or six that they were considering. So at the next playback, they presented two potential projects—one about biking and the other about city planning. They put it to a vote and the second project won by a landslide.

That was the genesis of Notice. After that, they pulled out all the stops.

Exciting things are afoot with the @Clearleftintern project.

Not content with designing one device, they came up with a range of three devices to match the differing scope of planning applications. They set about making a working prototype of the device intended for the most common applications.

Monika and Chris, hacking

Last week marked the end of the project and the grand unveiling.

Playing with the @notice_city prototype. Chris breaks it down. Playback time. Unveiling.

They’ve done a great job. All the details are on the website, including this little note I wrote about the project:

This internship programme was an experiment for Clearleft. We wanted to see what would happen if you put three talented young people in a room together for three months to work on a fairly loose brief. Crucially, we wanted to see work that wasn’t directly related to our day-to-day dealings with web design.

We offered feedback and advice, but we received so much more in return. Monika, Chloe, and Chris brought an energy and enthusiasm to the Clearleft office that was invigorating. And the quality of the work they produced together exceeded our wildest expectations.

We hereby declare this experiment a success!

Personally, I think the work they’ve produced is very strong indeed. It would be a shame for it to end now. Perhaps there’s a way that it could be funded for further development. Here’s hoping.

Out on the streets of Brighton Prototype

As impressed as I am with the work, I’m even more impressed with the people. They’re not just talented and hard-working—they’re a jolly nice bunch to have around.

I’m going to miss them.

The terrific trio!

Small independent pieces, loosely joined

It was fascinating at Indie Web Camp Germany to see how much could be accomplished by taking some pre-existing small things and loosely joining them.

For example, there are already webmention and micropub plug-ins for quite a few CMSs. If you’re using WordPress or Jekyll, you can get pretty far pretty quickly by making use of what people have already provided. And after that Indie Web Camp, you can add Drupal and Kirby to the list of CMSs with readily-available components.

I was somewhat surprised—and very pleased—that people made use of some little PHP snippets that I had posted as gists. I deliberately posted them as gists to show how minimal and barebones the code could be—no need for a whole project, or installers, or dockering the node to yeoman the gulp, or whatever it is the cool kids do these days.

This modular approach also worked well for interface elements. Glenn and Aaron worked on separate projects to create small JavaScript enhancements for posting interfaces. Assemble enough of these enhancements together and before you know it, you’ve got something approaching Medium.

By the end of the second day, I was amazed to see how much progress people had made. Like Johannes says:

I was pretty impressed by how much people got done. At the final demo session, everyone had something he or she had done to update their website – although I’m pretty sure that the end of this event will not be the end of their efforts to try and own their stuff online.

It was quite inspiring. In fact, I think I’ve been inspired to have an Indie Web Camp in Brighton. I’m thinking we could have it at the same time as Indie Web Camp Portland, which is on July 11th and 12th.

Save the dates.

100 words 049

The second day of Indie Web Camp Germany was really productive. It was amazing to see how much could be accomplished in just one day of collaborative hacking—people were posting to Twitter from their own site, sending webmentions, and creating their own micropub endpoints.

I made a little improvement to the links section of my site. Now every time I link to something, I check to see if it accepts webmentions and if it does, I ping it to let you know that I’ve linked to it.

I’ve posted the code as a gist. Feel free to use it.
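
The gist boils down to something like this—a rough approximation, since proper endpoint discovery should also check HTTP Link headers and cope with messier markup:

<?php
// A rough approximation of the gist. The endpoint discovery here is
// deliberately naive: a full version would also check HTTP Link
// headers and handle rel values in any order.
function sendWebmention(string $source, string $target): bool {
    // does the page I’m linking to advertise a webmention endpoint?
    $html = file_get_contents($target);
    if (!preg_match('/rel="webmention"[^>]*href="([^"]+)"/i', $html, $matches)) {
        return false; // no endpoint: nothing to ping
    }
    // POST the source and target to the endpoint, per the spec
    $curl = curl_init($matches[1]);
    curl_setopt_array($curl, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query([
            'source' => $source,
            'target' => $target,
        ]),
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($curl);
    curl_close($curl);
    return true;
}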

Hackfarming Blood Buddies

Every year at Clearleft, there’s a week where we step away from client work, go off the grid, and disappear into the countryside to work on something fun. We call it Hack Farm.

Hack Farm usually takes place around November, but due to various complexities, Hack Farm 2014 wound up getting pushed back to the start of 2015. Last week we formed a convoy, stocked up on the bare essentials (food, post-it notes, and booze), and drove west for four hours until we were in Herefordshire at a place called The Colloquy—a return to the site of the first ever Hack Farm.

Arrival at The Colloquy.

I kept notes on each day.

Day Zero

We arrive in the late afternoon, settle into our respective rooms, and eat some wonderful home-cooked food. After dinner, even though everyone’s pretty knackered, we agree that it’s best to figure out what everyone will be working on for the next few days.

Everyone gets a chance to pitch their ideas, and then we all do some dot-voting to whittle down the options. In short order, we arrive at four different projects for four different teams.

One of my ideas is chosen. This is something I’ve been pitching every single year at Hack Farm, and every single year it ends up narrowly missing out. This year, it’s finally going to happen!

On my team I’ve got Rich, Batesy, Andy P, and Tessa.

Day One

We choose a room to use as our home base and begin.

We start by agreeing on a hypothesis—more of an assumption, really—that we’ll be basing everything upon:

People are more likely to give blood if they are not alone.

Hypothesis

We start writing down questions that people might ask related to giving blood. Some of these questions might well turn out to be out of scope for this project, or can already be better answered by an existing service like blood.co.uk, e.g.:

  • Can I give blood?
  • How often can I give blood?
  • Will it hurt?
  • How long will it take?

Other questions are potentially open to us providing answers:

  • Where can I give blood?
  • When can I give blood?
  • Who else is giving blood?

That last one is a question that doesn’t seem to be answered anywhere else.

We brain-dump potential data sources that could answer the “who”, “when”, and “where” questions. The data from blood.co.uk could potentially answer the “when” and “where” questions, e.g. when and where is the next donation? Data from Twitter, Facebook, or your address book could answer the “who” questions, e.g. who are you, and who are your friends?

We brainstorm potential outputs of the project. The obvious choices are a website or a native app, but there could also potentially be email, SMS, or even posters and postcards.

We think about potential incentives for the users of this service: peer pressure, gamification, bragging rights, reassurance, etc.

So there’s a lot of divergent thinking going on: at this stage, there are no bad ideas (no, really!).

We also establish the goals of the project—what we would like to see happen as a result of this service existing. The very minimum success criterion is:

Someone gives blood who hasn’t given blood before.

There’s a follow-on criterion for measuring longer-term success:

A group gives blood regularly.

We split into two groups to work on a propositional statement, then come together to merge what we came up with. Here it is:

For people who want to give blood, who need encouragement and motivation, Blood Buddies brings together people you know to make it a shared experience. That way, you’re more likely to give blood.

Unlike blood.co.uk, it frames giving blood as a shared, rewarding activity.

Proposition James and Tessa

Blood Buddies is a codename for now. The final service might have a different name, like Bluddies maybe.

After lunch, we start to work on user stories and personas. After a while, we think we’ve got a pretty clear idea for the minimal viable user journey.

Now we take a little break and stretch our legs.

A stroll through the fields.

When we regroup, we start researching technical possibilities (like Twitter authentication, GMail address book, Facebook contacts, etc.), while also throwing ideas around to do with branding, tone of voice, etc. James Box comes in and helps us out with a handy branding exercise.

In an effort to name the thing, we create a page filled with relevant words that might be combined into a name. Eventually we reach the “just fucking end it!” moment. The service is called “Blood Buddies” after all. The tagline is …drumroll… “Get plastered together!”

Meanwhile, having investigated the technical possibilities, it looks like Twitter’s API will be the easiest (relatively) to start with.

Vocabulary Kanban

We write out our epics and create a little kanban board. We have our tasks figured out:

  • implement sign-in with Twitter,
  • create a style guide,
  • mock up the homepage,
  • mock up a sign-up form,
  • and more.

Tomorrow everyone can assign a task to themselves and get cracking (some people have started already).

Day Two

After a late Superbowl night, we arise and begin tackling the day’s tasks.

I managed to get a very rudimentary Twitter sign-in working (eventually!) so now my task is to do something with the data that Twitter is returning …namely, storing it in a database. And because this relies on signing in with Twitter to get any results, this needs to get on to an actual web server as soon as possible.

Cue a day of wrangling with PHP, MySQL, OAuth, Git, Apache, SSH keys, and DNS settings …with an intermittent internet connection that drops out at the most inconvenient of times.

Andy is storyboarding the promo video that will help sell the story of Blood Buddies.

Storyboard

Meanwhile James and Tessa are hammering out a visual language for Blood Buddies. So the work is being approached from two different ends: the server side (how it works behind the scenes) and the interface (how it looks to the end user). In the middle is the user flow, and that’s what Richard is working on, also looking ahead past the minimal viable product to include features that can be added later.

By late afternoon the most basic server-side functionality is done, and the site is live at bloodbuddies.co.uk. Of course, there’s very, very, very little to see there, but at least our team can start adding themselves to the database.

So now the task is to join up the back-end functionality with the visual design and copy. As these strands come together, it feels like we’re getting back to a more collaborative phase: whereas yesterday involved lots of group activity, today was more splintered. But that’s going to change now that we’re going to join up the individual pieces into a unified interface.

Today felt quite productive considering that three out of the five people on our team are on cooking duty.

Spaghetti and meatballs Dinnertime

Day Three

Today is a day of rest. It’s a beautiful day. We go for a drive through the countryside, pop into a pub for some grub, and go walking on the hills.

Walking west to Wales.

Day Four

We’re down to just three team members today. Tessa is working on a different project and Andy is spending the day sleeping, puking, and generally recovering from a heavy night. N00b.

We get cracking on with integrating the visual design with the back-end functionality. That means bashing out some CSS. After an hour or two, we’ve got something basic in place.

While James works on refining the visuals—including a kick-ass logo—Richard is writing lots and lots of copy, and figuring out user flows.

Meanwhile I’m trying to get server-side stuff in place, fiddling with DNS and email; not my favourite activity.

Once the DNS is pointed to the Digital Ocean server, and with the Twitter sign-in working okay, we realise that we’ve actually launched! Admittedly it’s very basic and it needs plenty of refinement, but it’s a start.

We head out for the evening meal together. Just one more day to go.

The Stagg Inn

Day Five

James starts the day by finishing up his kick-ass Blood Buddies logo.

Richard is writing and editing lots of witty copy.

Andy is storyboarding a promotional video.

Rich, me, James, and Andy

I’m trying to get emails working, so that when someone you know signs up to Blood Buddies, we can email you to let you know. By lunchtime, we’ve got it all working.

Lots of the details are in place now: the logo, web fonts, an error page, a favicon …it feels good to be iterating on a live site.

Kanban progress Final day tasks

Device testing

After lunch, James, Richard, and I work on expanding out the home page. Once everything is in pretty good shape, we all come together (with Andy and Tessa) to talk about what the next steps could be after this minimum viable product.

There’s consensus that the most important step would be adding more ways of signing into the site, instead of just Twitter. Also, there’s a lot of functionality we could add if we can scrape the data from blood.co.uk.

But that’s for another day. Right now we’ve got a barebones site, but it’s working.

We shipped.

The Session trad tune machine

Most pundits call it “the Internet of Things” but there’s another phrase from Andy Huntington that I first heard from Russell Davies: “the Geocities of Things.” I like that.

I’ve never had much exposure to this world of hacking electronics. I remember getting excited about the possibilities at a Brighton BarCamp back in 2008:

I now have my own little arduino kit, a bread board and a lucky bag of LEDs. Alas, I know next to nothing about basic electronics so I’m really going to have to brush up on this stuff.

I never did do any brushing up. But that all changed last week.

Seb is doing a new two-day workshop. He doesn’t call it Internet Of Things. He doesn’t call it Geocities Of Things. He calls it Stuff That Talks To The Interwebs, or STTTTI, or ST4I. He needed some guinea pigs to test his workshop material on, so Clearleft volunteered as tribute.

In short, it was great! And this time, I didn’t stop hacking when I got home.

First off, every workshop attendee gets a hand-picked box of goodies to play with and keep: an arduino mega, a wifi shield, sensors, screens, motors, lights, you name it. That’s the hardware side of things. There are also code samples and libraries that Seb has prepared in advance.

Getting ready to workshop with @Seb_ly. Unwrapping some Christmas goodies from Santa @Seb_ly.

Now, remember, I lack even the most basic knowledge of electronics, but after two days of fiddling with this stuff, it started to click.

Blinkenlights. Hello, little fella.

On the first workshop day, we all did the same exercises: connecting things up, getting them to talk to the internet, that kind of thing. For the second workshop day, Seb encouraged us to think about what we might each like to build.

I was quite taken with the ability of the piezo buzzer to play rudimentary music. I started to wonder if there was a way to hook it up to The Session and have it play the latest jigs, reels, and hornpipes that have been submitted to the site in ABC notation. A little bit of googling revealed that someone had already taken a stab at writing an ABC parser for arduino. I didn’t end up using that code, but it convinced me that what I was trying to do wasn’t crazy.

So I built a machine that plays Irish traditional music from the internet.

Playing with hardware and software, making things that go beep in the night.

The hardware has a piezo buzzer, an “on” button, an “off” button, a knob for controlling the speed of the tune, and an obligatory LED.

The software has a countdown timer that polls a URL every minute or so. The URL is http://tune.adactio.com/. That in turn uses The Session’s read-only API to grab the latest tune activity and then get the ABC notation for whichever tune is at the top of that list. Then it does some cleaning up—removing some of the more advanced ABC stuff—and outputs a single line of notes to be played. I’m fudging things a bit: the device has the range of a tin whistle, and expects tunes to be in the key of D or G, but seeing as that’s at least 90% of Irish traditional music, it’s good enough.
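
Roughly speaking, the endpoint boils down to something like this. Appending ?format=json to URLs is how The Session’s API works, but the field names below are illustrative rather than lifted from the API documentation:

<?php
// An illustrative sketch of the tune-fetching endpoint. The JSON
// field names here are placeholders, not confirmed API output.
$latest = json_decode(file_get_contents(
    'https://thesession.org/tunes/new?format=json'
), true);
$tuneID = $latest['tunes'][0]['id'];

$tune = json_decode(file_get_contents(
    'https://thesession.org/tunes/' . $tuneID . '?format=json'
), true);
$abc = $tune['settings'][0]['abc'];

// Strip the fancier ABC notation—ornaments, chords, repeat marks—so
// the arduino only ever gets a plain run of notes on a single line.
$abc = preg_replace('/[~"{}|:]/', '', $abc);
echo str_replace(["\r", "\n"], ' ', $abc);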

Whenever there’s a new tune, it plays it. Or you can hit the satisfying “on” button to manually play back the latest tune (and yes, you can hit the equally satisfying “off” button to stop it). Being able to adjust the playback speed with a twiddly knob turns out to be particularly handy if you decide to learn the tune.

I added one more lo-fi modification. I rolled up a piece of paper and placed it over the piezo buzzer to amplify the sound. It works surprisingly well. It’s loud!

Rolling my own speaker cone, quite literally.

I’ll keep tinkering with it. It’s fun. I realise I’m coming to this whole hardware-hacking thing very late, but I get it now: it really does feel similar to that feeling you would get when you first figured out how to make a web page back in the days of Geocities. I’ve built something that’s completely pointless for most people, but has special meaning for me. It’s ugly, and it’s inefficient, but it works. And that’s a great feeling.

(P.S. Seb will be running his workshop again on the 3rd and 4th of February, and there will be a limited number of early-bird tickets available for one hour, between 11am and midday this Thursday. I highly recommend you grab one.)

Habitasteroids

Science Hack Day San Francisco was held in the Github offices last weekend. It was brilliant!

Hacking begins Hacking Science hacker & grumpy cat enthusiast, Keri Bean Launch pad

This was the fifth Science Hack Day in San Francisco and the 40th worldwide. That’s truly incredible. I mean, I literally can’t believe it. When I organised the very first Science Hack Day back in 2010, I had no idea how far it would go. But Ariel has been indefatigable in making it a truly global event. She is amazing. And at this year’s San Francisco event, she outdid herself in putting together a fantastic cross-section of scientists, designers, and developers: paleontology, marine biology, geology, astronomy, particle physics, and many, many more disciplines were represented in the truly diverse attendees.

Saturday breakfast with the Science Hack Day community! The Science Hack Day girls! Stargazing on GitHub's roof Demos begin!

After an inspiring set of lightning talks on the first day, ideas started getting bounced around and the hacking began to take shape. I had a vague idea for—yet another—space-related hack. What clinched it was picking the brains of NASA’s Keri Bean. She’d help me get hold of the dataset I needed for my silly little hack.

So here’s the background…

There are many possibilities for human habitats in space: Stanford tori, O’Neill cylinders, Bernal spheres. Another idea, explored in science fiction, is hollowing out asteroids (Larry Niven’s bubbleworlds). Kim Stanley Robinson explores this idea in depth in his book 2312, where he describes the process of building an asteroid terrarium. The website of the book has a delightful walkthrough of the engineering processes involved. It’s not entirely implausible.

I wanted to make that idea approachable, so I thought about the kinds of people we might want to have living with us on the interior shell of a rotating hollowed-out asteroid. How about the people you follow on Twitter?

The only question that remains then is: which asteroid is the right one for you and your Twitter friends? Keri tracked down the motherlode of asteroid data and I started hacking the simplest of mashups—Twitter meets space rocks.

Here’s the result…

Habitasteroids!

Habitasteroids

Give it your Twitter username and it will tell you exactly which one of the asteroids in the main belt is right for you (I considered adding an enterprise option that would tell you where you could store your social network in the cloud …the Oort cloud, that is).

By default, your asteroid will have the population density of Earth, which is quite generous. But if you want a more sparsely-populated habitat—say, the population density of Australia—or a more densely-populated world—with something like the population density of Japan—then you will be assigned a larger or smaller asteroid accordingly.

You’ll also be told by how much you should increase or decrease the rotation of the asteroid to get one gee of centrifugal force on the interior. Figuring out the equations for calculating centrifugal force almost broke me, but luckily I had help from a rocket scientist and a particle physicist …I’m not even kidding. And I should point out that the calculations take some liberties—I’m assuming a spherical body, which is quite a stretch, given the lumpy nature of most asteroids.
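
For what it’s worth, once you have the equation it’s compact: centripetal acceleration is a = ω²r, so you solve for the angular velocity that gives one gee and convert that to a rotation period (spherical body assumed, as noted). A sketch of the sums:

<?php
// a = ω²r, so for one gee on the interior of a spinning (and
// conveniently spherical) asteroid: ω = √(g/r), period = 2π/ω.
function rotationPeriodForOneGee(float $radiusInMetres): float {
    $g     = 9.81;                        // one gee, in m/s²
    $omega = sqrt($g / $radiusInMetres);  // angular velocity in rad/s
    return (2 * M_PI) / $omega;           // rotation period in seconds
}

// an asteroid with a two kilometre radius would need to rotate once
// every minute and a half or so to get one gee on the inside:
echo round(rotationPeriodForOneGee(2000)); // ≈90 seconds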

At 13:37 on the second day, the demos began. Keri and I were first up.

[Photos: Jeremy wants to colonize an asteroid; Habitasteroids]

Give Habitasteroids a whirl for yourself. It’s a silly little thing, but I quite like how it turned out.

Speaking of silly things …at some point in the proceedings, Keri put the call out for asteroid data to her fellow space enthusiasts on Twitter. They responded with asteroid-related puns.

They have nice asteroids though: @brianwolven, @lukedones, @paix120, @LGalache, @motorbikematt, @brx0.

Oh, and while Habitasteroids might be a silly little hack, WRANGLER just might work.

WRANGLER: Capture and De-Spin of Asteroids and Space Debris

Indie Web Camp UK 2014

Indie Web Camp UK took place here in Brighton right after this year’s dConstruct. I was organising dConstruct. I was also organising Indie Web Camp. This was a problem.

It was a problem because I’m no good at multi-tasking, and I focused all my energy on dConstruct (it more or less dominated my time for the past few months). That meant that something had to give and that something was the organising of Indie Web Camp.

The event itself went perfectly smoothly. All the basics were there: a great venue, a solid internet connection, and a plan of action. But because I was so focused on dConstruct, I didn’t put any time into trying to get the word out about Indie Web Camp. Worse, I didn’t put any time into making sure that a diverse range of people knew about the event.

So in the end, Indie Web Camp UK 2014 was quite a homogeneous gathering. That’s a real shame, and it’s my fault. My excuse is that I was busy with all things dConstruct, but that’s just that: an excuse. On the plus side, the effort I put into making dConstruct a diverse event paid off, but I’ll know better in future than to try to organise two back-to-back events. I need to learn to delegate and ask for help.

But I don’t want to cast Indie Web Camp in a totally negative light (I just want to acknowledge how it could have been better). It was actually pretty great. As with previous events, it was remarkably productive. The format of one day of talks followed by one day of hacking is spot on.

[Photo: Indie Web Camp UK attendees]

I hadn’t planned to originally, but I spent the second day getting adactio.com switched over to https. Just a couple of weeks ago I wrote:

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

Well, I’m afraid that potential pain level has not dropped. In fact, I can confirm that getting TLS working is a massive pain in the behind. But on the first day of Indie Web Camp, Tim Retout led a session on security and offered up his expertise for day two. I took full advantage of his generous offer.

With Tim’s help, I was able to get adactio.com all set. If I hadn’t had his help, it probably would’ve taken me days …or I simply would’ve given up. I took plenty of notes so I could document the process. I’ll write it up soon, but alas, it will only be useful to people with the same kind of hosting setup as I have.

By the end of Indie Web Camp, thanks to Tim’s patient assistance, quite a few people had switched on TLS for their sites. The https page on the Indie Web Camp wiki is turning into quite a handy resource.

There was lots of progress in other areas too, particularly with webactions. Some of that progress relates to what I’ve been saying about Web Components. More on that later…

Throw in some Transmat action, location-based hacks, and communication tools; all-in-all a very productive weekend.

Hackfarming Tiny Planner

Towards the end of each year, we Clearlefties head off to a remote location in the countryside for a week of hacking on non-client work. It’s all good unclean fun.

It started two years ago when we made Map Tales. Then last year we worked on the Politmus project. A few months back, it was the turn of Hackfarm 2013.


This time it was bigger than ever. Rather than having everyone working on one big project all week, it made more sense to split into smaller teams and work on a few different smaller projects. Ant has written a detailed description of what went down.

By the middle of the week, I found myself on a team with James, other James, Graham, and an Andy. We started working on something that Boxman has wanted for a while now: a simple little app for adding steps to a list of things to do.

Here’s what differentiates it from the many other to-do list apps out there: you start by telling it what time you want to be finished by. Then, after you’ve added all your steps, it tells you what time you need to get started. An example use case would be preparing a Sunday roast. You know all the steps involved, and you know what time you want to sit down to eat, so what time do you need to start your preparation?
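
The core scheduling idea is simple enough to sketch (this is my own minimal reconstruction, not the actual Tiny Planner code; the data shape is assumed):

function planStartTimes(finishTime, steps) {
  // steps: [{ name: 'Roast everything', minutes: 90 }, ...] in the order they happen
  var t = finishTime.getTime();
  var plan = [];
  // work backwards from the finish time, subtracting each step's duration
  for (var i = steps.length - 1; i >= 0; i--) {
    t -= steps[i].minutes * 60 * 1000;
    plan.unshift({ name: steps[i].name, startAt: new Date(t) });
  }
  return plan; // plan[0].startAt is when you need to get started
}

planStartTimes(new Date('2013-12-15T14:00:00'), [
  { name: 'Prep the vegetables', minutes: 20 },
  { name: 'Roast everything', minutes: 90 }
]); // start prepping at 12:10 to sit down at 14:00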

We call it Tiny Planner. It’s not “done” in any meaningful sense of the word, and let’s face it, it probably never will be. What happens at hackdays, stays at hackdays …unfinished. Still, the code is public if anyone fancies doing something with it.


What made this project interesting from my perspective, was that it was one of them new-fangled single-page-app thingies. You know the kind: the ones that are made without progressive enhancement, and cease to exist in the absence of JavaScript. Exactly the kind of thing I would normally never work on, in other words.

It was …interesting. I thought it would be a good opportunity to evaluate all the various JS-or-it-doesn’t-happen frameworks like Angular, Ember, and Backbone. So I started reading the documentation. I guess I hadn’t realised quite how stupid I am, because I couldn’t make any headway. It was quite dispiriting. So I left Graham to do all the hard JavaScript work and concentrated on the CSS instead. So much for investigating new technologies.


Partly because the internet connection at Hackfarm was so bad, we decided to reduce the server dependencies as much as possible. In the end, we didn’t need any server at all. All the data is stored in the browser in local storage. A handy side-effect of that is that we could offline everything—this may be one of the few legitimate uses of appcache. Mind you, I never did get ’round to actually adding the appcache component because, well, you know what it’s like with cache-invalidation and all that. (And like I said, the code’s public now so if it ever does get put into a presentable state, someone can add the offline stuff then.)
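
The persistence layer really is that simple: serialise to JSON on the way in, parse on the way out (a sketch of the pattern; the storage key is illustrative):

function saveSteps(steps) {
  // localStorage can only store strings
  localStorage.setItem('tinyplanner-steps', JSON.stringify(steps));
}

function loadSteps() {
  var stored = localStorage.getItem('tinyplanner-steps');
  return stored ? JSON.parse(stored) : [];
}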

From a development perspective, it was an interesting experiment all ’round; dabbling in client-side routing, client-side templating, client-side storage, client-side everything really. But it did feel …weird. There’s something uncanny about building something that doesn’t have proper URLs. It uses web technologies but it doesn’t really feel like it’s part of the web.

Anyway, feel free to play around with Tiny Planner, bearing in mind that it’s not a finished thing.

I should really put together a plan for finishing it. If only there were an app for that.


Radio Free Earth

Back at the first San Francisco Science Hack Day I wanted to do some kind of mashup involving the speed of light and the distance of stars:

I wanted to build a visualisation based on Matt’s brilliant light cone idea, but I found it far too daunting to try to find data in a usable format and come up with a way of drawing a customisable geocentric starmap of our corner of the galaxy. So I put that idea on the back burner…

At this year’s San Francisco Science Hack Day, I came back to that idea. I wanted some kind of mashup that demonstrated the connection between the time that light has travelled from distant stars, and the events that would have been happening on this planet at that moment. So, for example, a star would be labelled with “the battle of Hastings” or “the sack of Rome” or “Columbus’s voyage to America”. To do that, I’d need two datasets; the distance of stars, and the dates of historical events (leaving aside any Gregorian/Julian fuzziness).

For want of a better hack, Chloe agreed to help me out. We set to work finding a good dataset of stellar objects. It turned out that a lot of the best datasets from NASA were either about our local solar neighbourhood, or else really distant galaxies and stars that are emitting prehistoric light.

The best dataset we could find was the Near Star Catalogue from Uranometria, but the most distant star in that collection was only 70 or 80 light years away. That meant that we could only mash it up with historical events from the twentieth century. We figured we could maybe choose important scientific dates from the past 70 or 80 years, but to be honest, we really weren’t feeling it.

We had reached this impasse when it was time for the Science Hack Day planetarium show. It was terrific: we were treated to a panoramic tour of space, beginning with low Earth orbit and expanding all the way out to the cosmic microwave background radiation. At one point, the presenter outlined the reach of Earth’s radiosphere. That’s the distance that ionosphere-penetrating radio and television signals from Earth, travelling at the speed of light, have reached. “It extends about 70 light years out”, said the presenter.

This was perfect! That was exactly the dataset of stars that we had. It was time for a pivot. Instead of the lofty goal of mapping historical events to the night sky, what if we tried to do something more trivial and fun? We could demonstrate how far classic television shows have travelled. Has Star Trek reached Altair? Is Sirius receiving I Love Lucy yet?

No, not TV shows …music! Now we were onto something. We would show how far the songs of planet Earth had travelled through space and which stars were currently receiving which hits.

Chloe remembered there being an API from Billboard, who have collected data on chart-topping songs since the 1940s. But that API appears to be gone, and the Echonest API doesn’t have chart dates. So instead, Chloe set to work screen-scraping Wikipedia for number one hits of the 40s, 50s, 60s, 70s …you get the picture. It was a lot of finding and replacing, but in the end we had a JSON file with every number one for the past 70 years.

Meanwhile, I was putting together the logic. Our list of stars had the distances in parsecs. So I needed to convert the date of a number one hit song into the number of parsecs that song had travelled, and then find the last star that it had passed.

We were tempted—for developer convenience—to just write all the logic in JavaScript, especially as our data was in JSON. But even though it was just a hack, I couldn’t bring myself to write something that relied on JavaScript to render the content. So I wrote some really crappy PHP instead.
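
The conversion itself only takes a few lines: a parsec is about 3.26 light years, so a song from n years ago has travelled n ÷ 3.26 parsecs. Here’s the idea sketched in JavaScript for brevity (the actual hack was in PHP, and the data shape and figures here are illustrative):

function lastStarReached(releaseYear, stars) {
  // stars: [{ name: 'Altair', parsecs: 5.13 }, ...] sorted nearest first
  var lightYearsTravelled = new Date().getFullYear() - releaseYear;
  var parsecsTravelled = lightYearsTravelled / 3.262;
  var reached = null;
  for (var i = 0; i < stars.length; i++) {
    if (stars[i].parsecs <= parsecsTravelled) {
      reached = stars[i]; // remember the most distant star the signal has passed
    }
  }
  return reached;
}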

By the end of the first day, the functionality was in place: you could enter a date, and find out what was number one on that date, and which star is just now receiving that song.

After the sleepover (more like a wakeover) in the aquarium, we started to style the interface. I say “we” …Chloe wrote the CSS while I made unhelpful remarks.

For the icing on the cake, Chloe used her previous experience with the Rdio API to add playback of short snippets of each song (when it’s available).

Here’s the (more or less) finished hack:

Radio Free Earth.

Basically, it’s a simple mashup of music and space …which is why I spent the whole time thinking “What would Matt do?”

Just keep hitting that button to hear a hit from planet Earth and see which lucky star is currently receiving the signal.*

Science!

*I know, I know: the inverse-square law means it’s practically impossible that the signal would be in any state to be received, but hey, it’s a hack.

Science Hack Day San Francisco

When I organised the first ever Science Hack Day in London in 2010, I made sure to write about how I organised the event. That’s because I wanted to encourage other people to organise their own Science Hack Days:

If I can do it, anyone can. And anyone should.

Later that year, Ariel organised a Science Hack Day in Palo Alto at the Institute For The Future. It was magnificent. Since then, Ariel has become a tireless champion and global instigator of Science Hack Day, spreading the idea, encouraging new events all over the world, and where possible, travelling to them. I just got the ball rolling—she has really run with it.

She organised another Science Hack Day in San Francisco for last weekend and I was lucky enough to attend—it coincided nicely with my travel plans to the States for An Event Apart in Austin. Once again, it was absolutely brilliant. There were tons of ingenious hacks, and the attendees were a wonderfully diverse bunch: some developers and designers, but also plenty of scientists and students, many (perhaps most) from out of town.

[Photos: hacking; lunch outdoors]

But best of all was the venue: The California Academy of Sciences. It’s a fantastic museum, and after 5pm—when the public left—we had the place to ourselves. Penguins, crocodiles, a rainforest, an aquarium …it’s got it all. I didn’t get a chance to do all of the activities that were provided—like stargazing on the roof, or getting a tour of the archives—because I was too busy hacking or helping out. But I did make it to the private planetarium show, which was wonderful.


The Science Hackers spent the night, unrolling their sleeping bags in all the nooks and crannies of the aquarium and the African hall. It was like being a big kid. Mind you, the fun of sleeping over in such a great venue was somewhat tempered by the fact that trying to sleep in a sleeping bag on just a yoga mat on a hard floor is pretty uncomfortable. I was quite exhausted by day two of the event, but I powered through on the wave of infectious enthusiasm exhibited by all the attendees.


Then when it came time to demo all the hacks …well, I was blown away. So much cool stuff.

Ariel and her team really outdid themselves. I’m so happy I was able to make it to the event. If you get the chance to attend a Science Hack Day, take it. And if there isn’t one happening near you, why not organise one? Ariel has put together a handy checklist to get you started so you can get excited and make things with science.

I’m still quite amazed that this was the 24th Science Hack Day! When I organised the first one three years ago, I had no idea that it could spread so far, but thanks to Ariel, it has become a truly special phenomenon.

[Photos: stargazing; planetarium]

The ghost of browsers past

Even before a line of code was written for the line-mode browser simulator when we gathered together at CERN, there was a gleeful period of digital spelunking.

[Photos: Brian goes browsing; demonstration data sources]

We poked at the markup of the first ever website:

  • What’s that NEXTID element? Turns out it’s something specific to the NeXT operating system.
  • Why does the first iteration of HTML already contain H1 through to H6? It’s because they were lifted wholesale from a flavour of SGML—Standard Generalized Markup Language—that was already in use at CERN.

Oh, and Brian asked Robert Cailliau why they went with the term World Wide Web. “Well,” he said, “we had to call it something. And we thought we could always change it later.”

Then there was the story of the line-mode browser. It was created by Nicola Pellow, who was a student at CERN in 1990. She later worked on the Mac browser, but her involvement with kickstarting the world wide web ended around 1993. She never showed up to any of the reunions.

We poked around in the (surprisingly short) source code of the line-mode browser. We found the lines that described how elements should be styled—the term “style sheet” appeared in a comment!

[Photos: proto-stylesheet; parsing the parser]

If you’ve fired up the line-mode browser simulator and run some websites through it, you’ll probably see occasions where a whole bunch of JavaScript—nestled between script tags in the head of the document—gets rendered to the screen.


We could’ve hidden that JavaScript, but we made a deliberate decision to display it. That’s what the line-mode browser would have done. The script element didn’t exist back then. Heck, JavaScript didn’t exist back then. So browsers would have handled the unknown element in the standard HTML way: ignore the opening and closing tags and just render what’s in-between them. That’s still the error-handling model for unrecognised elements in HTML.

This is why we used to write our JavaScript like this:

<script language="JavaScript" type="text/javascript">
<!--
(JavaScript goes here)
//-->
</script>

The HTML comments stopped the JavaScript from being rendered to the screen in older browsers (like the line-mode browser). Using the opening HTML comment <!-- is functionally equivalent to // single-line comments in JavaScript …although you still need to prefix the closing --> comment with a //.

I remember doing this when I first started making websites in the 90s. You can see it if you view source on the first version of this website.

Later on, we all switched to XHTML so we updated the syntax to make it valid XML.

<script type="text/javascript">
//<![CDATA[
(JavaScript goes here)
//]]>
</script>

The <![CDATA[ part stops an XML parser from trying to parse the JavaScript. But HTML parsers would choke on that because it starts with an angle bracket. Hence the JavaScript-style // comment.

Anyway, we don’t bother with HTML or XHTML comments at the start of our script blocks anymore. And that’s why the line-mode browser simulator renders the JavaScript to the screen.

Note that the JavaScript isn’t executed. That’s thanks to a clever little hack by Remy: the line-mode browser simulator changes the type attribute of every script element to text/plain, effectively defusing them. Smart!
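
Something along these lines would do it (a minimal sketch of the idea, not Remy’s actual code; it assumes the simulator rewrites the fetched markup before parsing it):

// rewrite every opening script tag so its contents are treated as
// inert text instead of being executed (simplified: this discards
// any other attributes on the tag, such as src)
var defused = fetchedHtml.replace(
  /<script\b[^>]*>/gi,
  '<script type="text/plain">'
);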