Journal tags: time

border:none 2023

In 2013, I spoke at the border:none event in Nuremberg. I gave a talk called The Power of Simplicity.

It was a great little event. Most of the talks were, like mine, on technical topics: design, development, the usual conference material.

This year Joschi and Marc decided to have another border:none event ten years on from the first one. They invited back all the original speakers, as well as some new folks. They kept the ticket price the same as ten years ago—just thirty euros.

For us speakers from the previous event, the only brief they gave us was to consider what’s happened in the past decade. I played it pretty safe and talked about the web. I’ll post a transcript of my talk soon.

Some of the other speakers were far more ambitious. They spoke about themselves, the world, the meaning of life …my presentation was very tame in comparison.

I really, really admire the honesty and vulnerability that those people displayed. Tobias Baldauf in particular took my breath away. He delivered an intensely personal talk on generational trauma that was meticulously researched and took incredible bravery to deliver. It was worth going to Nuremberg just for the privilege of being present for that talk.

Other talks were refreshingly tech-free. There was a talk on cold-water swimming. There was a talk on paragliding. And I don’t mean they were saying “what designers can learn from cold-water swimming” or “how I became a better developer through paragliding.” The talks were literally about swimming and paragliding.

There was a great variety of speakers this time around, including age ranges from puberty to menopause (quite literally—that was the topic of one of the talks). I had the great pleasure of providing some coaching before the event to fifteen-year-old Maya who was delivering her first talk in English. She did a fantastic job! And the talk she gave—about how teachers in her school aren’t always trusting of the technology they provide to students—was directly relevant to what we’re seeing in the world of work. Give people autonomy, agency and trust.

There was a lot of trust at border:none. Everyone who bought a ticket did so on trust—they had no idea what to expect. Likewise, Marc and Joschi put their trust in the speakers. They gave the speakers the freedom to talk about whatever they wanted. That trust was repaid.

Florian took some superb pictures of the event. Matthias wrote up his experience. So did Tom. Vasilis shared the gist of his excellent talk.

At the end of the event there was some joking about returning in 2033. I love the idea of a conference that happens once every ten years. Count me in!

Relative times

Last week Phil posted a little update about his excellent site, ooh.directory:

If you’re in the habit of visiting the Recently Updated Blogs page, and leaving it open, the times when each blog was updated will now keep up with the relentless passing of time.

Does that make sense? “3 minutes ago” will change to “4 minutes ago” and so on and on and on, until you refresh the page.

I thought that was a nice little addition, and I immediately thought of The Session. There are time elements all over the site with relative times as the text content: 2 minutes ago, 7 hours ago, 1 year ago, and so on. Those strings of text are generated on the server. But I figured it would be a nice enhancement to periodically update them in the browser after the page has loaded.

I viewed source to see how Phil was doing it. The code is nice and short, using a library called Day.js with a plug-in for relative time.

“Hang on”, I thought, “isn’t there some web standard for doing this kind of thing?” I had a vague memory of some JavaScript API for formatting dates and times.

Sure enough, we’ve now got the Intl.RelativeTimeFormat object. It’s got browser support across the board.
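
For a quick taste of what it does (purely an illustration, not my actual code):

// Illustrative only: format a value in a given unit as a relative-time string
const formatter = new Intl.RelativeTimeFormat('en');
formatter.format(-3, 'hour'); // "3 hours ago"
formatter.format(2, 'day');   // "in 2 days"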

Here’s the code I wrote.

I’ve got a function that loops through all the time elements with datetime attributes. It compares the current timestamp to that value to get the elapsed time. Then that’s formatted using the format() method and output as innerText.

You need to tell the format() method which units you want to use: seconds, minutes, hours, days, etc. So there’s a little bit of looping to figure out which unit is most appropriate. If the elapsed time is less than a minute, use seconds. If the elapsed time is less than an hour, use minutes. If the elapsed time is less than a day, use hours. You get the idea.
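
As a rough sketch (the function and variable names here are my own invention, not the ones in my gist), that unit-picking logic might look like this:

// Sketch: pick the largest sensible unit for an elapsed time in the past
function formatElapsed(milliseconds) {
    const formatter = new Intl.RelativeTimeFormat('en');
    const seconds = Math.round(milliseconds / 1000);
    if (seconds < 60) return formatter.format(-seconds, 'second');
    const minutes = Math.round(seconds / 60);
    if (minutes < 60) return formatter.format(-minutes, 'minute');
    const hours = Math.round(minutes / 60);
    if (hours < 24) return formatter.format(-hours, 'hour');
    const days = Math.round(hours / 24);
    if (days < 365) return formatter.format(-days, 'day');
    return formatter.format(-Math.round(days / 365), 'year');
}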

It’s a pity there isn’t some kind of magic unit like “auto” to do this, but it’s not much extra code to figure it out.

Anyway, that function runs periodically using setInterval(). I’ve set it to run every 30 seconds in my gist. On The Session I’ve set it to one minute.
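
Wired together, the whole thing might look something like this (again, an approximation rather than my exact code):

// Sketch: update every relative-time string on the page, then repeat
function updateRelativeTimes() {
    document.querySelectorAll('time[datetime]').forEach( timeElement => {
        const then = new Date(timeElement.getAttribute('datetime'));
        const elapsed = Date.now() - then.getTime();
        timeElement.innerText = formatElapsed(elapsed);
    });
}
setInterval(updateRelativeTimes, 30000); // every 30 seconds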

You’ll notice that I’m grabbing all the relevant time elements—using document.querySelectorAll('time[datetime]')—every time the function is run. That may seem inefficient. Couldn’t I just grab them once and then keep them stored as an array? But I want this to work even if the page contents have been updated with Ajax. (Do people even say “Ajax” any more? Get off my lawn, you pesky kids!)

I think I’ve written this code in an abstract way so that you should be able to drop it into any web page. For the calculations to work, you’ll need to make sure that your datetime attributes include timezone information; if there’s no timezone info, UTC is assumed.

This was a fun little piece of functionality to play around with. Now I know a little more about this Intl.RelativeTimeFormat object. The way I’m using it is a classic example of progressive enhancement. If a browser doesn’t support it, or if my code breaks, it’s no big deal. The functionality is a little bonus that almost nobody will notice anyway. Just a small delighter …if you’re the kind of person who finds it delightful when relative time strings automatically update.

Tragedy

There are two kinds of time-travel stories.

There are time-travel stories that explore the many-worlds hypothesis. Going back in time and making a change forks the universe. But the universe is constantly forking anyway. So effectively the time travel is a kind of universe-hopping (there’s a big crossover here with the alternative history subgenre).

The problem with multiverse stories is that there’s always a reset available. No matter how bad things get, there’s a parallel universe where everything is hunky dory.

The other kind of time travel story explores the idea of a block universe. There is one single timeline.

This is what you’ll find in Tenet, for example, or for a beautiful reduced test case, the Ted Chiang short story What’s Expected Of Us. That gets straight to the heart of the biggest implication of a block universe—the lack of free will.

There’s no changing what has happened or what will happen. In fact, the very act of trying to change the past often turns out to be the cause of what you’re trying to prevent in the present (like in Twelve Monkeys).

I’ve often referred to these single-timeline stories as being like Greek tragedies. But only recently—as I’ve been reading quite a bit of Greek mythology—have I realised that the reverse is also true:

Greek tragedies are time-travel stories.

Hear me out…

Time-travel stories aren’t actually about physically travelling in time. That’s just a convenience for the important part—information travelling in time. That’s at the heart of most time-travel stories; information from the future travelling back to the past.

William Gibson’s The Peripheral—very much a many-worlds story with its alternate universe “stubs”—takes this to its extreme. Nothing physical ever travels in time. But in an age of telecommuting, nothing has to. Our time travellers are remote workers.

That book also highlights the power dynamics inherent in information wealth. Knowledge of the future gives you an advantage that you can exploit in the past. This is what Mark Twain’s Connecticut Yankee does in King Arthur’s court.

This power dynamic is brilliantly inverted in Octavia Butler’s Kindred. No amount of information can help you if your place in society is determined by the colour of your skin.

Anyway, the point is that information flow is what matters in time-travel stories. Therefore any story where information travels backwards in time is a time-travel story.

That includes any story with a prophecy. A prophecy is information about the future, like:

Oedipus will kill his father and marry his mother.

You can try to change your fate, but you’ll just end up triggering it instead.

Greek tragedies are time-travel stories.

Spring

Spring is arriving. It’s just taking its time.

There are little signs. Buds on the trees. The first asparagus of the year. Daffodils. Changing the clocks. A stretch in the evenings. But the weather remains, for the most part, chilly and grim.

Reality is refusing to behave like a fast-forward montage leading up to a single day when you throw open the curtains and springtime is suddenly there in all its glory.

That’s okay. I can wait. I’ve had a lot of practice over the past three years. We all have. Staying home, biding time, saving lives.

But hunkering down during The Situation isn’t like taking shelter during an air raid. There isn’t a signal that sounds to indicate “all clear!” It’s more like going from Winter to Spring. It’s slow, almost imperceptible. But it is happening.

I’ve noticed a subtle change in my risk assessment over the past few months. I still think about COVID-19. I still factor it into my calculations. But it’s no longer the first thing I think of.

That’s a subtle change. It doesn’t seem like that long ago when COVID was at the forefront of my mind, especially if I was weighing up an excursion. Is it worth going to that restaurant? How badly do I want to go to that gig? Should I go to that conference?

Now I find myself thinking of COVID as less of a factor in my decision-making. It’s still there, but it has slowly slipped down the ranking.

I know that other people feel differently. For some people, COVID slipped out of their minds long ago. For others, it’s still very much front and centre. There isn’t a consensus on how to evaluate the risks. Like I said:

It’s like when you’re driving and you think that everyone going faster than you is a maniac, and everyone going slower than you is an idiot.

COVID-19 isn’t going away. But perhaps The Situation is.

The Situation has been gradually fading away. There isn’t a single moment where, from one day to the next, we can say “this marks the point where The Situation ended.” Even if there were, it would be a different moment for everyone.

As of today, the COVID-19 app officially stops working. Perhaps today is as good a day as any to say Spring has arrived. The season of rebirth.

Pace layers and design principles

I think it was Jason who once told me that if you want to make someone’s life a misery, teach them about typography. After that they’ll be doomed to notice all the terrible type choices and kerning out there in the world. They won’t be able to unsee it. It’s like trying to unsee the arrow in the FedEx logo.

I think that Stewart Brand’s pace layers model is a similar kind of mind virus, albeit milder. Once you’ve been exposed to it, you start seeing it in all kinds of systems.

Each layer is functionally different from the others and operates somewhat independently, but each layer influences and responds to the layers closest to it in a way that makes the whole system resilient.

Last month I sent out an edition of the Clearleft newsletter that was all about pace layers. I gathered together examples of people who have been infected with the pace-layer mindworm and were applying the same layered thinking to other areas.

My own little mash-up is applying pace layers to the World Wide Web. Tom even brought it to life as an animation.

See the Pen Web Layers Of Pace by Tom (@webrocker) on CodePen.

Recently I had another flare-up of the pace-layer pattern-matching infection.

I was talking to some visiting Austrian students on the weekend about design principles. I explained my mild obsession with design principles stemming from the fact that they sit between “purpose” (or values) and “patterns” (the actual outputs):

Purpose » Principles » Patterns

Your purpose is “why?”

That then influences your principles, “how?”

Those principles inform your patterns, “what?”

Hey, wait a minute! If you put that list in reverse order it looks an awful lot like the pace-layers model with the slowest moving layer at the bottom and the fastest moving layer at the top. Perhaps there’s even room for an additional layer when patterns go into production:

  • Production
  • Patterns
  • Principles
  • Purpose

Your purpose should rarely—if ever—change. Your principles can change, but not too frequently. Your patterns need to change quite often. And what you’re actually putting out into production should be constantly updated.

As you travel from the most abstract layer—“purpose”—to the most concrete layer—“production”—the pace of change increases.

I can’t tell if I’m onto something here or if I’m just being apophenic. Again.

Five decades

Phil turned 50 around the same time as I did. He took the opportunity to write some half-century notes. I thoroughly enjoyed reading them and it got me thinking about my own five decades of life.

0–10

A lot happened in the first few years. I was born in England but my family moved back to Ireland when I was three. Then my father died not long after that. I was young enough that I don’t really have any specific memories of that time. I have hazy impressionistic images of London in my mind but at this point I don’t know if they’re real or imagined.

10–20

Most of this time was spent being a youngster in Cobh, county Cork. All fairly uneventful. Being a teenage boy, I was probably a dickhead more than I realised at the time. It was also the 80s so there was a lot of shittiness happening in the background: The Troubles; Chernobyl; Reagan and Thatcher; the constant low-level expectation of nuclear annihilation. And most of the music was terrible—don’t let anyone tell you otherwise.

20–30

This was the period with the most new experiences. I started my twenties by dropping out of Art College in Cork and moving to Galway to be a full-time slacker. I hitch-hiked and busked around Europe. I lived in Canada for six months. Eventually I ended up in Freiburg in southern Germany where I met Jessica. The latter half of this decade was spent there, settling down a bit. I graduated from playing music on the street to selling bread in a bakery to eventually making websites. Before I turned 30, Jessica and I got married.

30–40

We move to Brighton! I continue to make websites and play music with Salter Cane. Halfway through my thirties I co-found Clearleft with Andy and Rich. I also start writing books and speaking at conferences. I find that not only is this something I enjoy, but it’s something I’m actually good at. And it gives me the opportunity to travel and see more of the world.

40–50

It’s more of the same for the next ten years. More Clearleft, more writing, more speaking and travelling. Jessica and I got a mortgage on a flat at the start of the decade and exactly ten years later we’ve managed to pay it off, which feels good (I don’t like having any debt hanging over me).

That last decade certainly feels less eventful than, say, that middle decade but then, isn’t that the way with most lives? As Phil says:

If my thirties went by more quickly than my twenties, my forties just zipped by.

You’ve got the formative years in your 20s when you’re trying to figure yourself out so you’re constantly dabbling in a bit of everything (jobs, music, drugs, travel) and then things get straighter. So when it comes to memories, your brain can employ a more rigorous compression algorithm. Instead of storing each year separately, your memories are more like a single year times five or ten. And so it feels like time passes much quicker in later life than it did in those more formative experimental years.

But experimentation can be stressful too—“what if I never figure it out‽” Having more routine can be satisfying if you’re reasonably confident you’ve chosen a good path. I feel like I have (but then, so do most people).

Now it’s time for the next decade. In the short term, the outlook is for more of the same—that’s the outlook for everyone while the world is on pause for The Situation. But once that’s over, who knows? I intend to get back to travelling and seeing the world. That’s probably more to do with being stuck in one place for over a year than having mid-century itchy feet.

I don’t anticipate any sudden changes in lifestyle or career. If anything, I plan to double down on doing things I like and saying “no” to any activities I now know I don’t like. So my future will almost certainly involve more websites, more speaking, maybe more writing, and definitely more Irish traditional music.

I feel like having reached the milestone of 50, I should have at least a few well-earned pieces of advice to pass on. The kind of advice I wish I had received when I was younger. But I’ve racked my brains and this is all I’ve got:

Never eat an olive straight off the tree. You know this already but maybe part of your mind thinks “how bad can it be really?” Trust me. It’s disgusting.

Fifty

Today is my birthday. I am one twentieth of a millennium old. I am eighteen and a quarter kilo-days old. I am six hundred months old. I am somewhere in the order of 26.28 mega-minutes old. I am fifty years old.

The reflected light of the sun that left Earth when I was born has passed Alpha Cephei and will soon reach Delta Aquilae. In that time, our solar system has completed 0.00002% of its orbit around the centre of our galaxy.

I was born into a world with the Berlin Wall. That world ended when I turned eighteen.

Fifty years before I was born, the Irish war of independence was fought while the world was recovering from an influenza pandemic.

Fifty years after I was born, the UK is beginning its post-Brexit splintering while the world is in the middle of a coronavirus pandemic.

In the past few years, I started to speculate about what I might do for the big Five Oh. Should I travel somewhere nice? Or should I throw a big party and invite everyone I know?

Neither of those are options now. The decision has been made for me. I will have a birthday (and subsequent weekend) filled with the pleasures of home. I plan to over-indulge with all my favourite foods, lovingly prepared by Jessica. And I want the finest wines available to humanity—I want them here and I want them now.

I will also, inevitably, be contemplating the passage of time. I’m definitely of an age now where I’ve shifted from “explore” to “exploit.” In other words, I’ve pretty much figured out what I like doing. That is in contrast to the many years spent trying to figure out how I should be spending my time. Now my plans are more about maximising what I know I like and minimising everything else. What I like mostly involves Irish traditional music and good food.

So that’s what I’ll be doubling down on for my birthday weekend.

Ten down, one to go

The Long Now Foundation is dedicated to long-term thinking. I’ve been a member for quite a few years now …which, in the grand scheme of things, is not very long at all.

One of their projects is Long Bets. It sets out to tackle the problem that “there’s no tax on bullshit.” Here’s how it works: you make a prediction about something that will (or won’t) happen by a particular date. So far, so typical thought leadery. But then someone else can challenge your prediction. And here’s the crucial bit: you’ve both got to place your monies where your mouths are.

Ten years ago, I made a prediction on the Long Bets website. It’s kind of meta:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

I made the prediction on February 22nd, 2011 when my mind was preoccupied with digital preservation.

One year later I was on stage in Wellington, New Zealand, giving a talk called Of Time And The Network. I mentioned my prediction in the talk and said:

If anybody would like to take me up on that bet, you can put your money down.

Matt was also speaking at Webstock. When he gave his talk, he officially accepted my challenge.

So now it’s a bet. We both put $500 into the pot. If I win, the Bletchley Park Trust gets that money. If Matt wins, the money goes to The Internet Archive.

As I said in my original prediction:

I would love to be proven wrong.

That was ten years ago today. There’s just one more year to go until the pleasingly alliterative date of 2022-02-22 …or as the Long Now Foundation would write it, 02022-02-22 (gotta avoid that Y10K bug).

It is looking more and more likely that I will lose this bet. This pleases me.

The Web History podcast

From day one, I’ve been a fan of Jay Hoffman’s project The History Of The Web—both the newsletter and the evolving timeline.

Recently Jay started publishing essays on web history over on CSS Tricks:

  1. Birth
  2. Browsers
  3. The Website
  4. Search

Round about that time, Chris floated the idea of having people record themselves reading blog posts. I immediately volunteered my services for the web history essays.

So now you can listen to me reading Jay’s words:

  1. Birth
  2. Browsers
  3. The Website
  4. Search

Each chapter is round about half an hour long so that’s a solid two hours or so of me yapping.

Should you wish to take the audio with you wherever you go, I’ve made a podcast feed for you. Pop that in your podcatching software of choice. Here it is on Apple Podcasts. Here it is on Spotify.

And if you just can’t get enough of my voice, there’s always the Clearleft podcast …although that’s mostly other people talking, thank goodness.

T E N Ǝ T

Jessica and I went to the cinema yesterday.

Normally this wouldn’t be a big deal, but in our current circumstances, it was something of a momentous decision that involved a lot of risk assessment and weighing of the odds. We’ve been out and about a few times, but always to outdoor locations: the beach, a park, or a pub’s beer garden. For the first time, we were evaluating whether or not to enter an indoor environment, which given what we now know about the transmission of COVID-19, is certainly riskier than being outdoors.

But this was a cinema, so in theory, nobody should be talking (or singing or shouting), and everyone would be wearing masks and keeping their distance. Time was also on our side. We were considering a Monday afternoon showing—definitely not primetime. Looking at the website for the (wonderful) Duke of York’s cinema, we could see which seats were already taken. Less than an hour before the start time for the film, there were just a handful of seats occupied. A cinema that can seat a triple-digit number of people was going to be seating a single digit number of viewers.

We got tickets for the front row. Personally, I love sitting in the front row, especially in the Duke of York’s where there’s still plenty of room between the front row and the screen. But I know that it’s generally considered an undesirable spot by most people. Sure enough, the closest people to us were many rows back. Everyone was wearing masks and we kept them on for the duration of the film.

The film was Tenet. We weren’t about to enter an enclosed space for just any ol’ film. It would have to be pretty special—a new Star Wars film, or Denis Villeneuve’s Dune …or a new Christopher Nolan film. We knew it would look good on the big screen. We also knew it was likely to be spoiled for us if we didn’t see it soon enough.

At this point I am sounding the spoiler horn. If you have not seen Tenet yet, abandon ship at this point.

I really enjoyed this film. I understand the criticism that has been levelled at it—too cold, too clinical, too confusing—but I still enjoyed it immensely. I do think you need to be able to enjoy feeling confused if this is going to be a pleasurable experience. The payoff is that there’s an equally enjoyable feeling when things start slotting into place.

The closest film in Christopher Nolan’s back catalogue to Tenet is Inception in terms of twistiness and what it asks of the audience. But in some ways, Tenet is like an inverted version of Inception. In Inception, the ideas and the plot are genuinely complex, but Nolan does a great job in making them understandable—quite a feat! In Tenet, the central conceit and even the overall plot is, in hindsight, relatively straightforward. But Nolan has made it seem more twisty and convoluted than it really is. The ten-minute battle at the end, for example, is filled with hard-to-follow twists and turns, but in actuality, it literally doesn’t matter.

The pitch for the mood of this film is that it’s in the spy genre, in the same way that Inception is in the heist genre. Though there’s an argument to be made that Tenet is more of a heist movie than Inception. But in terms of tone, yeah, it’s going for James Bond.

Even at the very end of the credits, when the title of the film rolled into view, it reminded me of the Bond films that would tease “The end of (this film). But James Bond will return in (next film).” Wouldn’t it have been wonderful if the very end of Tenet’s credits finished with “The end of Tenet. But the protagonist will return in …Tenet.”

The pleasure I got from Tenet was not the same kind of pleasure I get from watching a Bond film, which is a simpler, more basic kind of enjoyment. The pleasure I got from Tenet was more like the kind of enjoyment I get from reading smart sci-fi, the kind that posits a “what if?” scenario and isn’t afraid to push your mind in all kinds of uncomfortable directions to contemplate the ramifications.

Like I said, the central conceit—objects or people travelling backwards through time (from our perspective)—isn’t actually all that complex, but the fun comes from all the compounding knock-on effects that build on that one premise.

In the film, and in interviews about the film, everyone is at pains to point out that this isn’t time travel. But that’s not true. In fact, I would argue that Tenet is one of the few examples of genuine time travel. What I mean is that most so-called time-travel stories are actually more like time teleportation. People jump from one place in time to another instantaneously. There are only a few examples I can think of where people genuinely travel.

The granddaddy of all time travel stories, The Time Machine by H.G. Wells, is one example. There are vivid descriptions of the world outside the machine playing out in fast-forward. But even here, there’s an implication that from outside the machine, the world cannot perceive the time machine (which would, from that perspective, look slowed down to the point of seeming completely still).

The most internally-consistent time-travel story is Primer. I suspect that the Venn diagram of people who didn’t like Tenet and people who wouldn’t like Primer is a circle. Again, it’s a film where the enjoyment comes from feeling confused, but where your attention will be rewarded and your intelligence won’t be insulted.

In Primer, the protagonists literally travel in time. If you want to go five hours into the past, you have to spend five hours in the box (the time machine).

In Tenet, the time machine is a turnstile. If you want to travel five hours into the past, you need only enter the turnstile for a moment, but then you have to spend the next five hours travelling backwards (which, from your perspective, looks like being in a world where cause and effect are reversed). After five hours, you go in and out of a turnstile again, and voila!—you’ve time travelled five hours into the past.

Crucially, if you decide to travel five hours into the past, then you have always done so. And in the five hours prior to your decision, a version of you (apparently moving backwards) would be visible to the world. There is never a version of events where you aren’t travelling backwards in time. There is no “first loop”.

That brings us to the fundamental split in categories of time travel (or time jump) stories: many worlds vs. single timeline.

In a many-worlds story, the past can be changed. Well, technically, you spawn a different universe in which events unfold differently, but from your perspective, the effect would be as though you had altered the past.

The best example of the many-worlds category in recent years is William Gibson’s The Peripheral. It genuinely reinvents the genre of time travel. First of all, no thing travels through time. In The Peripheral only information can time travel. But given telepresence technology, that’s enough. The Peripheral is time travel for the remote worker (once again, William Gibson proves to be eerily prescient). But the moment that any information travels backwards in time, the timeline splits into a new “stub”. So the many-worlds nature of its reality is front and centre. But that doesn’t stop the characters engaging in classic time travel behaviour—using knowledge of the future to exert control over the past.

Time travel stories are always played with a stacked deck of information. The future has power over the past because of the asymmetric nature of information distribution—there’s more information in the future than in the past. Whether it’s through sports results, the stock market or technological expertise, the future can exploit the past.

Information is at the heart of the power games in Tenet too, but there’s a twist. The repeated mantra here is “ignorance is ammunition.” That flies in the face of most time travel stories where knowledge—information from the future—is vital to winning the game.

It turns out that information from the future is vital to winning the game in Tenet too, but the reason why ignorance is ammunition comes down to the fact that Tenet is not a many-worlds story. It is very much a single timeline.

Having a single timeline makes for time travel stories that are like Greek tragedies. You can try travelling into the past to change the present but in doing so you will instead cause the very thing you set out to prevent.

The meat’n’bones of a single timeline time travel story—and this is at the heart of Tenet—is the question of free will.

The most succinct (and disturbing) single-timeline time-travel story that I’ve read is by Ted Chiang in his recent book Exhalation. It’s called What’s Expected Of Us. It was originally published as a single page in Nature magazine. In that single page is a distillation of the metaphysical crisis that even a limited amount of time travel would unleash in a single-timeline world…

There’s a box, the Predictor. It’s very basic, like Claude Shannon’s Ultimate Machine. It has a button and a light. The button activates the light. But this machine, like an inverted object in Tenet, is moving through time differently to us. In this case, it’s very specific and localised. The machine is just a few seconds in the future relative to us. Cause and effect seem to be reversed. With a normal machine, you press the button and then the light flashes. But with the predictor, the light flashes and then you press the button. You can try to fool it but you won’t succeed. If the light flashes, you will press the button no matter how much you tell yourself that you won’t (likewise if you try to press the button before the light flashes, you won’t succeed). That’s it. In one succinct experiment with time, it is demonstrated that free will doesn’t exist.

Tenet has a similarly simple object to explain inversion. It’s a bullet. In an exposition scene we’re shown how it travels backwards in time. The protagonist holds his hand above the bullet, expecting it to jump into his hand as has just been demonstrated to him. He is told “you have to drop it.” He makes the decision to “drop” the bullet …and the bullet flies up into his hand.

This is a brilliant bit of sleight of hand (if you’ll excuse the choice of words) on Nolan’s part. It seems to imply that free will really matters. Only by deciding to “drop” the bullet does the bullet then fly upward. But here’s the thing: the protagonist had no choice but to decide to drop the bullet. We know that he had no choice because the bullet flew up into his hand. The bullet was always going to fly up into his hand. There is no timeline where the bullet doesn’t fly up into his hand, which means there is no timeline where the protagonist doesn’t decide to “drop” the bullet. The decision is real, but it is inevitable.

The lesson in this scene is the exact opposite of what it appears. It appears to show that agency and decision-making matter. The opposite is true. Free will cannot, in any meaningful sense, exist in this world.

This means that there was never really any threat. People from the future cannot change the past (or wipe it out) because it would’ve happened already. At one point, the protagonist voices this conjecture. “Doesn’t the fact that we’re here now mean that they don’t succeed?” Neil deflects the question, not because of uncertainty (we realise later) but because of certainty. It’s absolutely true that the people in the future can’t succeed because they haven’t succeeded. But the protagonist—at this point in the story—isn’t ready to truly internalise this. He needs to still believe that he is acting with free will. As that Ted Chiang story puts it:

It’s essential that you behave as if your decisions matter, even though you know that they don’t.

That’s true for the audience watching the film. If we were to understand too early that everything will work out fine, then there would be no tension in the film.

As ever with Nolan’s films, they are themselves metaphors for films. The first time you watch Tenet, ignorance is your ammunition. You believe there is a threat. By the end of the film you have more information. Now if you re-watch the film, you will experience it differently, armed with your prior knowledge. But the film itself hasn’t changed. It’s the same linear flow of sequential scenes being projected. Everything plays out exactly the same. It’s you who have been changed. The first time you watch the film, you are like the protagonist at the start of the movie. The second time you watch it, you are like the protagonist at the end of the movie. You see the bigger picture. You understand the inevitability.

The character of Neil has had more time to come to terms with a universe without free will. What the protagonist begins to understand at the end of the film is what Neil has known for a while. He has seen this film. He knows how it ends. It ends with his death. He knows that it must end that way. At the end of the film we see him go to meet his death. Does he make the decision to do this? Yes …but he was always going to make the decision to do this. Just as the protagonist was always going to decide to “drop” the bullet, Neil was always going to decide to go to his death. It looks like a choice. But Neil understands at this point that the choice is pre-ordained. He will go to his death because he has gone to his death.

At the end, the protagonist—and the audience—understands. Everything played out exactly as it had to. The people in the future were hoping that reality allowed for many worlds, where the past could be changed. Luckily for us, reality turns out to be a single timeline. But the price we pay is that we come to understand, truly understand, that we have no free will. This is the kind of knowledge we wish we didn’t have. Ignorance was our ammunition and by the end of the film, it is spent.

Nolan has one other piece of misdirection up his sleeve. He implies that the central question at the heart of this time-travel story is the grandfather paradox. Our descendants in the future are literally trying to kill their grandparents (us). But if they succeed, then they can never come into existence.

But that’s not the paradox that plays out in Tenet. The central paradox is the bootstrap paradox, named for the Heinlein short story, By His Bootstraps. Information in this film is transmitted forwards and backwards through time, without ever being created. Take the phrase “Tenet”. In subjective time, the protagonist first hears of this phrase—and this organisation—when he is at the start of his journey. But the people who tell him this received the information via a subjectively older version of the protagonist who has travelled to the past. The protagonist starts the Tenet organisation (and phrase) in the future because the organisation (and phrase) existed in the past. So where did the phrase come from?

This paradox—the bootstrap paradox—remains after the grandfather paradox has been dealt with. The grandfather paradox was a distraction. The bootstrap paradox can’t be resolved, no matter how many times you watch the same film.

So Tenet has three instances of misdirection in its narrative:

  • Inversion isn’t time travel (it absolutely is).
  • Decisions matter (they don’t; there is no free will).
  • The grandfather paradox is the central question (it’s not; the bootstrap paradox is the central question).

I’m looking forward to seeing Tenet again. Though it can never be the same as that first time. Ignorance can never again be my ammunition.

I’m very glad that Jessica and I decided to go to the cinema to see Tenet. But who am I kidding? Did we ever really have a choice?

Trad time

Fifteen years ago, I went to the Willie Clancy Summer School in Miltown Malbay:

I’m back from the west of Ireland. I was sorry to leave. I had a wonderful, music-filled time.

I’m not sure why it took me a decade and a half to go back, but that’s what I did last week. Myself and Jessica once again immersed ourselves in Irish traditional music. I’ve written up a trip report over on The Session.

On the face of it, fifteen years is a long time. Last time I made the trip to county Clare, I was taking pictures on a point-and-shoot camera. I had a phone with me, but it had a T9 keyboard that I could use for texting and not much else. Also, my hair wasn’t grey.

But in some ways, fifteen years feels like the blink of an eye.

I spent my mornings at the Willie Clancy Summer School immersed in the history of Irish traditional music, with Paddy Glackin as a guide. We were discussing tradition and change in generational timescales. There was plenty of talk about technology, but we were as likely to discuss the influence of the phonograph as the influence of the internet.

Outside of the classes, there was a real feeling of lengthy timescales too. On any given day, I would find myself listening to pre-teen musicians at one point, and septuagenarian masters at another.

Now that I’m back in the Clearleft studio, I’m finding it weird to adjust back in to the shorter timescales of working on the web. Progress is measured in weeks and months. Technologies are deemed outdated after just a year or two.

The one bridging point I have between these two worlds is The Session. It’s been going in one form or another for over twenty years. And while it’s very much on and of the web, it also taps into a longer tradition. Over time it has become an enormous repository of tunes, for which I feel a great sense of responsibility …but in a good way. It’s not something I take lightly. It’s also something that gives me great satisfaction, in a way that’s hard to achieve in the rapidly moving world of the web. It’s somewhat comparable to the feelings I have for my own website, where I’ve been writing for eighteen years. But whereas adactio.com is very much focused on me, thesession.org is much more of a community endeavour.

I question sometimes whether The Session is helping or hindering the Irish music tradition. “It all helps”, Paddy Glackin told me. And I have to admit, it was very gratifying to meet other musicians during Willie Clancy week who told me how much the site benefits them.

I think I benefit from The Session more than anyone though. It keeps me grounded. It gives me a perspective that I don’t think I’d otherwise get. And in a time when it feels entirely right to question whether the internet is even providing a net gain to our world, I take comfort in being part of a project that I think uses the very best attributes of the World Wide Web.

Timing out

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging—and probably more common—is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try to go the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})

Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from network if the file can’t be found in any cache:

caches.match(request)
.then( responseFromCache => {
    return responseFromCache || fetch(request);
})

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            )
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    )
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    })
})

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.
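
For example, a little script on the offline page itself could list what’s in that cache (just a sketch, assuming the “pages” cache name from above):

// Sketch: on the custom offline page, list previously cached pages
caches.open('pages')
.then( pagesCache => pagesCache.keys())
.then( requests => {
    const list = document.createElement('ul');
    requests.forEach( request => {
        const item = document.createElement('li');
        const link = document.createElement('a');
        link.href = request.url;
        link.innerText = request.url;
        item.appendChild(link);
        list.appendChild(item);
    });
    document.body.appendChild(list);
});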

So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. You can set up two parameters: a success condition and a failure condition. If the success condition is executed, then we say the promise has resolved. If the failure condition is executed, then the promise rejects.
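
Here’s a bare-bones example that has nothing to do with service workers, just to illustrate the shape of the thing:

// Illustration only: a promise that randomly resolves or rejects
const coinToss = new Promise( (resolve, reject) => {
    if (Math.random() > 0.5) {
        resolve('Heads!'); // success: the promise resolves
    } else {
        reject('Tails!'); // failure: the promise rejects
    }
});
coinToss
.then( result => console.log(result))
.catch( error => console.log(error));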

In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    })
}, 3000);

If a request takes three seconds (3000 milliseconds), then that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, then the wait continues for the network.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    })
                }, 3000);
                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        )
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    })
                })
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on—a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.

I think that should be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.
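
For what it’s worth, the second part—making sure a “check for a fresher version” request always goes out to the network—seems like the more tractable half. Purely as an untested sketch (the fresh=true marker is made up, not any kind of convention), the interface element could link to the same URL with that marker appended, and the fetch handler could check for it before doing anything else:

// Untested sketch: 'fresh=true' is an invented marker, not a real convention.
// If the request carries it, skip the cache and the time-out entirely.
if (request.url.includes('fresh=true')) {
    fetchEvent.respondWith(
        fetch(request)
        .catch( () => caches.match('/offline'))
    );
    return;
}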

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archaeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally, one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see how there was an explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.

Et voilà!

I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book, which was on our desk at all times.

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application, but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

Notifications

I’ve written before about how I use apps on my phone:

If I install an app on my phone, the first thing I do is switch off all notifications. That saves battery life and sanity.

The only time my phone is allowed to ask for my attention is for phone calls, SMS, or FaceTime (all rare occurrences). I initiate every other interaction—Twitter, Instagram, Foursquare, the web. My phone is a tool that I control, not the other way around.

To me, this seems like a perfectly sensible thing to do. I was surprised by how others thought it was radical and extreme.

I’m always shocked when I’m out and about with someone who has their phone set up to notify them of any activity—a mention on Twitter, a comment on Instagram, or worst of all, an email. The thought of receiving a notification upon receipt of an email gives me the shivers. Allowing those kinds of notifications would feel like putting shackles on my time and attention. Instead, I think I’m applying an old-school RSS mindset to app usage: pull rather than push.

Don’t get me wrong: I use apps on my phone all the time: Twitter, Instagram, Swarm (though not email, except in direst emergency). Even without enabling notifications, I still have to fight the urge to fiddle with my phone—to check to see if anything interesting is happening. I’d like to think I’m in control of my phone usage, but I’m not sure that’s entirely true. But I do know that my behaviour would be a lot, lot worse if notifications were enabled.

I was a bit horrified when Apple decided to port this notification model to the desktop. There doesn’t seem to be any way of removing the “notification tray” altogether, but I can at least go into System Preferences and make sure that absolutely nothing is allowed to pop up an alert while I’m trying to accomplish some other task.

It’s the same on iOS—you can control notifications from Settings—but there’s an added layer within the apps themselves. If you have notifications disabled, the apps encourage you to enable them. That’s fine …at first. Being told that I could and should enable notifications is a perfectly reasonable part of the onboarding process. But with some apps I’m told that I should enable notifications Every. Single. Time.

Of the apps I use, Instagram and Swarm are the worst offenders (I don’t have Facebook or Snapchat installed so I don’t know whether they’re as pushy). This behaviour seems to have worsened recently. The needling has been dialed up in recent updates to the apps. It doesn’t matter how often I dismiss the dialogue, it reappears the next time I open the app.

Initially I thought this might be a bug. I’ve submitted bug reports to Instagram and Swarm, but I’m starting to think that they see my bug as their feature.

In the grand scheme of things, it’s not a big deal, but I would appreciate some respect for my deliberate choice. It gets pretty wearying over the long haul. To use a completely inappropriate analogy, it’s like a recovering alcoholic constantly having to rebuff “friends” asking if they’re absolutely sure they don’t want a drink.

I don’t think there’s malice at work here. I think it’s just that I’m an edge-case scenario. They’ve thought about the situation where someone doesn’t have notifications enabled, and they’ve come up with a reasonable solution: encourage that person to enable notifications. After all, who wouldn’t want notifications? That question, if it’s asked at all, is only asked rhetorically.

I’m trying to do the healthy thing here (or at least the healthier thing) in being mindful of my app usage. They sure aren’t making it easy.

The model that web browsers use for notifications seems quite sensible in comparison. If you arrive on a site that asks for permission to send you notifications (without even taking you out to dinner first) then you have three options: allow, block, or dismiss. If you choose “block”, that site will never be able to ask that browser for permission to enable notifications. Ever. (Oh, how I wish I could apply that browser functionality to all those sites asking me to sign up for their newsletter!)
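
Those three choices map directly onto the values that the Notification API reports back to the site. A quick illustration (not taken from any particular site, and using the promise form of requestPermission that current browsers support):

Notification.requestPermission().then((permission) => {
  // 'granted': the visitor chose “allow”.
  // 'denied': the visitor chose “block”, so this browser won’t show the prompt again.
  // 'default': the prompt was dismissed, so the site is free to ask again later.
  if (permission === 'granted') {
    new Notification('Thanks for opting in!');
  }
});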

That must seem like the stuff of nightmares for growth-hacking disruptive startups looking to make their graphs go up and to the right, but it’s a wonderful example of truly user-centred design. In that situation, the browser truly feels like a user agent.

Month maps

One of the topics I enjoy discussing at Indie Web Camps is how we can use design to display activity over time on personal websites. That’s how I ended up with sparklines on my site—it was a direct result of a discussion at Indie Web Camp Nuremberg a year ago:

During the discussion at Indie Web Camp, we started looking at how silos design their profile pages to see what we could learn from them. Looking at my Twitter profile, my Instagram profile, my Untappd profile, or just about any other profile, it’s a mixture of bio and stream, with the addition of stats showing activity on the site—signs of life.

Perhaps the most interesting visual example of my activity over time is on my Github profile. Halfway down the page there’s a calendar heatmap that uses colour to indicate the amount of activity. What I find interesting is that it’s using two axes of time over a year: days of the month across the X axis and days of the week down the Y axis.

I wanted to try something similar, but showing activity by time of day down the Y axis. A month of activity feels like the right range to display, so I set about adding a calendar heatmap to monthly archives. I already had the data I needed—timestamps of posts. That’s what I was already using to display sparklines. I wrote some code to loop over those timestamps and organise them by day and by hour. Then I spit out a table with days for the columns and clumps of hours for the rows.
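
The grouping itself is simple enough. Here’s a rough sketch of the idea (illustrative JavaScript rather than the actual code on my site, and the four-hour clumps are just an assumption):

// Group a month’s worth of post timestamps by day (columns) and by clump
// of hours (rows). `timestamps` is assumed to be an array of Date objects.
function groupByDayAndHour(timestamps, hoursPerClump = 4) {
  const grid = {};
  for (const date of timestamps) {
    const day = date.getDate();                                // 1 to 31
    const clump = Math.floor(date.getHours() / hoursPerClump); // 0 to 5
    grid[clump] = grid[clump] || {};
    grid[clump][day] = (grid[clump][day] || 0) + 1;
  }
  // e.g. grid[2][14] is the number of posts between 08:00 and 11:59 on the 14th.
  return grid;
}

From there it’s just a matter of looping over the grid to output a table row for each clump of hours and a cell for each day.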

Calendar heatmap on Dribbble

I’m using colour (well, different shades of grey) to indicate the relative amounts of activity, but I decided to use size as well. So it’s also a bubble chart.

It doesn’t work very elegantly on small screens: the table is clipped horizontally and can be swiped left and right. Ideally the visualisation itself would change to accommodate smaller screens.

Still, I kind of like the end result. Here’s last month’s activity on my site. Here’s the same time period ten years ago. I’ve also added month heatmaps to the monthly archives for my journal, links, and notes. They’re kind of like an expanded view of the sparklines that are shown with each month.

From one year ago, here’s the daily distribution of my journal posts, my links, and my notes.

And then here’s the daily distribution of everything in that month all together.

I realise that the data being displayed is probably only of interest to me, but then, that’s one of the perks of having your own website—you can do whatever you feel like.

Long betting

It has been exactly six years to the day since I instantiated this prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

It is exactly five years to the day until the prediction condition resolves to a Boolean true or false.

If it resolves to true, The Bletchley Park Trust will receive $1000.

If it resolves to false, The Internet Archive will receive $1000.

Much as I would like Bletchley Park to get the cash, I’m hoping to lose this bet. I don’t want my pessimism about URL longevity to be rewarded.

So, to recap, the bet was placed on

02011-02-22

It is currently

02017-02-22

And the bet times out on

02022-02-22.

dConstruct 2015 podcast: Ingrid Burrington

The dConstruct podcast episodes are coming thick and fast. Hot on the heels of the inaugural episode with Matt Novak and the sophomore episode with Josh Clark comes the third in the series: the one with Ingrid Burrington.

This was a fun meeting of minds. We geeked out about the physical infrastructure of the internet and time-travel narratives, from The Terminator to The Peripheral. During the episode, I sounded the spoiler warning in case you haven’t read that book, but we didn’t actually end up giving anything away.

I really enjoyed this chat with Ingrid. I hope you’ll enjoy listening to it.

Oh, and now you can subscribe to the dConstruct 2015 podcast directly from iTunes.

And remember, as a podcast listener, you get 10% off the ticket price for dConstruct using the discount code “ansible.”

100 words 005

I enjoy a good time travel yarn. Two of the most enjoyable temporal tales of recent years have been Rian Johnson’s film Looper and William Gibson’s book The Peripheral.

Mind you, the internal time travel rules of Looper are all over the place, whereas The Peripheral is wonderfully consistent.

Both share an interesting commonality in their settings. They are set in the future and …the future: two different time periods but neither of them are the present. Both works also share the premise that the more technologically advanced future would inevitably exploit the time period further down the light cone.

Wibbly-wobbly timey-wimey stuff

I met up with Remy a few months back to try to help him finalise the line-up for this year’s Full Frontal conference. Remy puts a lot of thought into crafting a really solid line-up. He was in a good position too: the conference was already sold out so he didn’t have to worry about having a big-name speaker to put bums on seats—he could concentrate entirely on finding just the right speaker for the final talk.

He described the kind of “big picture” talk he was looking for, and I started naming some names and giving him some ideas of people to contact.

Imagine my surprise then, when—while we were both in New York for Brooklyn Beta—I received a lengthy email from Remy (pecked out on his phone), saying that he had decided who he wanted to do the closing talk at Full Frontal. He wanted me to do it.

Now, this was just a couple of weeks ago so my first thought was “No way! I don’t have enough time to prepare a talk.” It takes me quite a while to prepare a new presentation.

But then he described—in quite some detail—what he wanted me to talk about …and it’s exactly the kind of stuff that I really enjoy geeking out about: long-term thinking, digital preservation, and all that jazz. So I said yes.

That’s why I’ve spent the last couple of weeks quietly freaking out, attempting to marshal my thoughts and squeeze them into Keynote. The title of my talk is Time. Pretentious? Moi?

I’m trying to pack a lot into this presentation. I’ve already had to kill some of my darlings and drop some of the more esoteric stuff, but damn it, it’s hard to still squeeze everything in.

I’ve been immersed in research and link-making, reading and huffduffing all things time-related. In the course of my hypertravels, I discovered that there’s an entire event devoted to “the origins, evolution, and future of public time.” It’s called Time For Everyone and it’s taking place in California …at exactly the same time as Full Frontal.

Here’s the funny thing: the description for the event is exactly the same as the description I gave Remy for my talk:

This thing all things devours:
Birds, beasts, trees, flowers;
Gnaws iron, bites steel;
Grinds hard stones to meal;
Slays king, ruins town,
And beats high mountain down.

If you’re coming along to Full Frontal next Friday, I hope you’ll be in a receptive mood. I also hope that Remy won’t mind that what I’m going to present isn’t exactly what he asked for …but I think it’s interesting stuff.

I just wish I had more time.

Long time

A few years back, I was on a road trip in the States with my friend Dan. We drove through Maryland and Virginia to the sites of American Civil War battles—Gettysburg, Antietam. I was reading Tom Standage’s magnificent book The Victorian Internet at the time. When I was done with the book, I passed it on to Dan. He loved it. A few years later, he sent me a gift: a glass telegraph insulator.

Glass telegraph insulator from New York

Last week I received another gift from Dan: a telegraph key.

Telegraph key

It’s lovely. If my knowledge of basic electronics were better, I’d hook it up to an Arduino and tweet with it.

Dan came over to the UK for a visit last month. We had a lovely time wandering around Brighton and London together. At one point, we popped into the National Portrait Gallery. There was one painting he really wanted to see: the portrait of Samuel Pepys.

Pepys

“Were you reading the online Pepys diary?”, I asked.

“Oh, yes!”, he said.

“I know the guy who did that!”

The “guy who did that” is, of course, the brilliant Phil Gyford.

Phil came down to Brighton and gave a Skillswap talk all about the ten-year-long project.

The diary of Samuel Pepys: Telling a complex story online on Huffduffer

Now Phil has restarted the diary. He wrote a really great piece about what it’s like overhauling a site that has been online for a decade. Given that I spent a lot of my time last year overhauling The Session (which has been online in some form or another since the late nineties), I can relate to his perspective on trying to choose long-term technologies:

Looking ahead, how will I feel about this Django backend in ten years’ time? I’ve no idea what the state of the platform will be in a decade.

I was thinking about switching The Session over to Django, but I decided against it in the end. I figured that the pain involved in trying to retrofit an existing site (as opposed to starting a brand new project) would be too much. So the site is still written in the very uncool LAMP stack: Linux, Apache, MySQL, and PHP.

Mind you, Marco Arment makes the point in his Webstock talk that there’s a real value to using tried and tested “boring” technologies.

One area where I’ve found myself becoming increasingly wary over time is the use of third-party APIs. I say that with a heavy heart—back at dConstruct 2006 I was talking all about The Joy of API. But Yahoo, Google, Twitter …they’ve all deprecated or backtracked on their offerings to developers.

Anyway, this is something that has been on my mind a lot lately: evaluating technologies and services in terms of their long-term benefit instead of just their short-term hit. It’s something that we need to think about more as developers, and it’s certainly something that we need to think about more as users.

Compared with genuinely long-term projects like the 10,000 year Clock of the Long Now, making something long-lasting on the web shouldn’t be all that challenging. The real challenge is acknowledging that this is even an issue. As Phil puts it:

I don’t know how much individuals and companies habitually think about this. Is it possible to plan for how your online service will work over the next ten years, never mind longer?

As my Long Bet illustrates, I can be somewhat pessimistic about the longevity of our web creations:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

But I really hope I lose that bet. Maybe I’ll suggest to Matt (my challenger on the bet) that we meet up on February 22nd, 2022 at the Long Now Salon. It doesn’t exist yet. But give it time.