Journal tags: arc


Directory enquiries

I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.

A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.

The most obvious solution is to start up a brand new resource for front-end developers. But there are two problems with that:

  1. It’s really, really, really hard work, and
  2. It feels a bit xkcd 927.

I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.

Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.

All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.

I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.

I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.

Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.

Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.

I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?

And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.

There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.

A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”

What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?

I’m just throwing this out there. I’d love it if someone were to run with it.

Filters

My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and Mozilla Developer Network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at some pictures on an AirBnB listing recently and found herself examining some photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.

Every email that lands in my inbox is like a little mini Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloging blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.

But we can always walk away.

Trust

In their rush to cram in “AI” “features”, it seems to me that many companies don’t actually understand why people use their products.

Google is acting as though its greatest asset is its search engine. Same with Bing.

Mozilla Developer Network is acting as though its greatest asset is its documentation. Same with Stack Overflow.

But their greatest asset is actually trust.

If I use a search engine I need to be able to trust that the filtering is good. If I look up documentation I need to trust that the information is good. I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

“But”, the apologists will respond, “the results are mostly correct! The documentation is mostly true!”

Sure, but as Terence puts it:

The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as “that klutzy intern we had to fire.”

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

I am honestly astonished that so many companies don’t seem to realise what they’re destroying.

Labels

I love libraries. I think they’re one of humanity’s greatest inventions.

My local library here in Brighton is terrific. It’s well-stocked, it’s got a welcoming atmosphere, and it’s in a great location.

But it has an information architecture problem.

Like most libraries, it’s using the Dewey Decimal system. It’s not a great system, but every classification system is going to have flaws—wherever you draw boundaries, there will be disagreement.

The Dewey Decimal class of 900 is for history and geography. Within that class, those 100 numbers (900 to 999) are further subdivided in groups of 10. For example, everything from 940 to 949 is for the history of Europe.

Dewey Decimal number 941 is for the history of the British Isles. The term “British Isles” is a geographical designation. It’s not a good geographical designation, but technically it’s not a political term. So it’s actually pretty smart to use a geographical rather than a political term for categorisation: geology moves a lot slower than politics.

But the Brighton Library is using the wrong label for their shelves. Everything under 941 is labelled “British History.”

The island of Ireland is part of the British Isles.

The Republic of Ireland is most definitely not part of Britain.

Seeing books about the history of Ireland, including post-colonial history, on a shelf labelled “British History” is …not good. Frankly, it’s offensive.

(I mentioned this situation to an English friend of mine, who said “Well, Ireland was once part of the British Empire”, to which I responded that all the books in the library about India should also be filed under “British History” by that logic.)

Just to be clear, I’m not saying there’s a problem with the library using the Dewey Decimal system. I’m saying they’re technically not using the system. They’ve deviated from the system’s labels by treating “History of the British Isles” and “British History” as synonymous.

I spoke to the library manager. They told me to write an email. I’ve written an email. We’ll see what happens.

You might think I’m being overly pedantic. That’s fair. But the fact this is happening in a library in England adds to the problem. It’s not just technically incorrect, it’s culturally clueless.

Mind you, I have noticed that quite a few English people have a somewhat fuzzy idea about the Republic of Ireland. Like, they understand it’s a different country, but they think it’s a different country in the way that Scotland is a different country, or Wales is a different country. They don’t seem to grasp that Ireland is a different country like France is a different country or Germany is a different country.

It would be charming if not for, y’know, those centuries of subjugation, exploitation, and forced starvation.

British history.

Update: They fixed it!

UX London 2024, day one

UX London is just two months away!

The best way to enjoy the event is to go for all three days but if that’s not doable for you, each individual day is kind of like a mini-conference with its own theme.

The theme on day one, Tuesday, June 18th is design research.

In the morning there are four fantastic talks.

  1. Tom Kerwin kicks things off with his talk on pitch provocations. Tom will show you how to probe for what the market really wants with his fast, counterintuitive method.
  2. Clarissa Gardner is giving a talk about ethics and safeguarding in UX research. Clarissa will show you how to uphold good practice without compromising delivery in a fast-paced environment.
  3. Aleks Melnikova’s talk is all about demystifying inclusive research. Aleks will show you how to conduct research for a diverse range of participants, from recruitment and planning through to moderation and analysis.
  4. Emma Boulton closes out the morning with her talk on meeting Product where they are. Emma will show you how to define a knowledge management strategy for your organisation so that you can retake your seat at the table.

After lunch you’ll take part in one of four workshops. Choose wisely!

  1. Luke Hay is running a workshop on bridging the gap between research and design. Luke will show you how to take practical steps to ensure that designers and researchers are working as a seamless team.
  2. Serena Verdenicci is running a workshop on behavioural intentions. Serena will show you how to apply a behavioural mindset to your work so you can create behaviour-change interventions.
  3. Stéphanie Walter is running a workshop on designing adaptive reusable components and pages. Stéphanie will show you how to plan your content and information architecture to help build more reusable components.
  4. Ben Sauer is running a workshop on the storytelling bridge. Ben will show you how to find your inner storyteller to turn your insights into narratives your stakeholders can understand quickly and easily.

After your workshop there’s one final closing keynote to bring everyone back together. I’m keeping that secret for just a little longer, but trust me, it’s going to be inspiring—plenty to discuss at the drinks reception afterwards.

That’s quite a packed day. If design research is what you’re into, you won’t want to miss it. Get your ticket now.

Just to sweeten the deal and as a reward for reading all the way to the end, here’s a discount code you can use to get a whopping 20% off: JOINJEREMY.

Multi-page web apps

I received this email recently:

Subject: multi-page web apps

Hi Jeremy,

lately I’ve been following you through videos and texts and I’m curious as to why you advocate the use of multi-page web apps and not single-page ones.

Perhaps you can refer me to some sources where your position and reasoning is evident?

Here’s the response I sent…

Hi,

You can find a lot of my reasoning laid out in this (short and free) online book I wrote called Resilient Web Design:

https://resilientwebdesign.com/

The short answer to your question is this: user experience.

The slightly longer answer…

For most use cases, a website (or multi-page app if you prefer) is going to provide the most robust experience for the greatest number of users. That’s because a user’s web browser takes care of most of the heavy lifting.

Navigating from one page to another? That’s taken care of with links.

Gathering information from a user to process on a server? That’s taken care of with forms.

This frees me up to concentrate on the content and the design without having to reinvent the wheels of links and form fields.

These (let’s call them) multi-page apps are stateless, and for most use cases that’s absolutely fine.

There are some cases where you’d want a state to persist across pages. Let’s say you’re playing a song, or a podcast episode. Ideally you’d want that player to continue seamlessly playing even as the user navigates around the site. In that situation, a single-page app would be a suitable architecture.

But that architecture comes at a cost. Now you’ve got to stop the browser doing what it would normally do with links and forms. It’s up to you to recreate that functionality. And you can’t do it with HTML, a robust fault-tolerant declarative language. You need to reimplement all that functionality in JavaScript, a less tolerant, more brittle language.
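To make that concrete, here’s a minimal sketch (not production code, and not any particular framework) of what re-creating just the link part looks like. Forms need the same treatment, and renderPage() is a hypothetical function:

document.addEventListener('click', async (event) => {
  const link = event.target.closest('a');
  // Let the browser handle external links as normal.
  if (!link || link.origin !== location.origin) return;
  event.preventDefault(); // suppress the built-in navigation
  const response = await fetch(link.href); // fetch the next page ourselves
  const html = await response.text();
  renderPage(html); // hypothetical function that swaps in the new content
  history.pushState({}, '', link.href); // keep the URL bar in sync
});

And that’s before handling errors, scroll position, focus management, and loading states, all of which the browser gives you for free with a plain link.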

Then you’ve got to ship all that code to the user before they can use your site. It might be JavaScript code you’ve written yourself or it might be a third-party library designed for building single-page apps. Either way, the user pays a download tax (and a parsing tax, and an execution tax). Whereas with links and forms, all of that functionality is pre-bundled into the user’s web browser.

So that’s my reasoning. At least nine times out of ten, a multi-page approach is leaner, more robust, and simpler.

Like I said, there are times when a single-page approach makes sense—it all comes down to whether state needs to be constantly preserved. But these use cases are the exceptions, not the rule.

That’s why I find the framing of your question a little concerning. It should be inverted. The default should be the multi-page approach, which is simply how the web works out of the box. Deciding to take a JavaScript-driven single-page approach should be the exception.

It’s kind of like when people ask, “Why don’t you have children?” The question has it backwards: surely it’s the decision to have a child that should require deliberation and commitment, not the other way around.

When it comes to front-end development, I’m worried that we’ve reached a state where the more complex over-engineered approach is viewed as the default.

I may be committing a fundamental attribution error here, but I think that we’ve reached this point not because of any consideration for users, but rather because of how it makes us developers feel. Perhaps building an old-fashioned website that uses HTML for navigation feels too easy, like it’s beneath us. But building an “app” that requires JavaScript just to render text on a screen feels like real programming.

I hope I’m wrong. I hope that other developers will start to consider user experience first and foremost when making architectural decisions.

Anyway. That’s my answer. User experience.

Cheers,

Jeremy

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.
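To illustrate, a robots.txt under Ben’s proposal might look something like this. To be clear, the syntax is entirely hypothetical; no crawler understands a licensing directive today:

User-agent: *
Allow: /
# Hypothetical licensing directive in the spirit of Ben's proposal:
License: index-for-search, no-llm-training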

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.

I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the Memex, was a direct inspiration for Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.

Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.

Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

The past is a foreign country

I tried watching a classic Western this weekend, How The West Was Won. I did not make it far. Let’s just say that in the first few minutes, the Spencer Tracy voiceover that accompanies the sweeping vistas sets out an attitude toward the indigenous population that would not fly today.

It’s one thing to be repulsed by a film from another era, but it’s even more uncomfortable to revisit the films from your own teenage years.

Tim Carmody has written about the real hero of Top Gun:

Iceman’s concern for Maverick and the safety of his fighter unit is totally understandable. He tries, however awkwardly, to discuss Goose’s death with Maverick. There’s no discussion of blame. And when they’re assigned to fly into combat together, Iceman briefly and discreetly raises the issue of Maverick’s fitness to fly with his superior officer and withdraws his concern once a decision is made.

I know someone who didn’t watch Ferris Bueller’s Day Off until they were well into adulthood. Their sympathies lay squarely with Dean Rooney.

And I think we can all agree in hindsight that Walter Peck was completely correct in his assessment of the dangers in Ghostbusters.

Oh, and The Karate Kid was the real bully.

This week, George wrote I’ve fallen out of love with Indiana Jones. Indy’s attitude of “it belongs in a museum” is the same worldview that got the Parthenon Marbles into the British Museum (instead of, y’know, the Parthenon where they belong).

Adrian Hon invites us to imagine what it would be like if the tables were turned. He wrote a short piece of speculative fiction called The Taking of Stonehenge:

We selected these archaeological sites based on their importance to our collective understanding of human and galactic history, and their immediate risk of irreparable harm from pollution, climate change, neglect, and looting. We are sympathetic to claims that preserving these sites in their “original” context is important, but our duty of care outweighs such emotional considerations.

These were my jams

This Is My Jam was a lovely website. Created by Hannah and Matt in 2011, it ran until 2015, at which point they had to shut it down. But they made sure to shut it down with care and consideration.

In many ways, This Is My Jam was the antithesis of the prevailing Silicon Valley mindset. Instead of valuing growth and scale above all else, it was deliberately thoughtful. Rather than “maximising engagement”, it asked you to slow down and just share one thing: what piece of music are you really into right now? It was up to you to decide whether “right now” meant this year, this month, this week, or this day.

I used to post songs there sporadically. Here’s a round-up of the twelve songs I posted in 2013. There was always some reason for posting a particular piece of music.

I was reminded of This Is My Jam recently when I logged into Spotify (not something I do that often). As part of the site’s shutdown, you could export all your jams into a Spotify playlist. Here’s mine.

Listening back to these 50 songs all these years later gave me the warm fuzzies.

One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.

Tweaking navigation labelling

I’ve always liked the idea that your website can be your API. Like, you’ve already got URLs to identify resources, so why not make that URL structure predictable and those resources parsable?

That’s why the (read-only) API for The Session doesn’t live at a separate subdomain. It uses the same URL structure as the regular site, but you can request the resources in an alternative format: JSON, XML, RSS.
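As a sketch of what that looks like from the outside, assuming the format is chosen with a query string parameter (the tune ID and the property name here are illustrative):

// Fetch the same resource that humans browse, but as JSON.
const response = await fetch('https://thesession.org/tunes/27?format=json');
const tune = await response.json();
console.log(tune.name); // illustrative property name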

This works out pretty well, mostly because I put a lot of thought into the URL structure of the site. I’m something of a URL fetishist, but I think that taking a URL-first approach to information architecture can be a good exercise.

Most of the resources on The Session involve nouns like tunes, events, discussions, and so on. There’s a consistent and predictable structure to the URLs for those sections:

  • /things
  • /things/new
  • /things/search

And then an individual item can be found at:

  • /things/ID

That’s all nice and predictable and the naming of the URLs matches what you’d expect to find:

Tunes, events, discussions, sessions. Those are all fine. But there’s one section of the site that has this root URL:

/recordings

When I was coming up with the URL structure twenty years ago, it was clear what you’d find there: track listings for albums of music. No one would’ve expected to find actual recordings of music available to listen to on-demand. The bandwidth constraints and technical limitations of the time made that clear.

Two decades on, the situation has changed. Now someone new to the site might well expect to hit a link called “recordings” and expect to hear actual recordings of music.

So I should probably change the label on the link. I don’t think “albums” is quite right—what even is an album any more? The word “discography” is probably the most appropriate label.

Here’s my dilemma: if I update the label, should I also update the URL structure?

Right now, the section of the site with /tunes URLs is labelled “tunes”. The section of the site with /events URLs is labelled “events”. Currently the section of the site with /recordings URLs is labelled “recordings”, but may soon be labelled “discography”.

If you click on “tunes”, you end up at /tunes. But if you click on “discography”, you end up at /recordings.

Is that okay? Am I the only one that would be bothered by that?

I could update the URLs to match the labelling (with redirects for the old URLs, of course), but I’m not so keen on this URL structure:

  • /discography
  • /discography/new
  • /discography/search
  • /discography/ID

It doesn’t seem as tidy as:

  • /recordings
  • /recordings/new
  • /recordings/search
  • /recordings/ID

But if I don’t update the URLs to match the label, then I’m just going to have to live with the mismatch.

I’m just thinking out loud here. I think I should definitely update the label. I just won’t make any decision on changing URLs for a while yet.

Tweaking navigation sizing

Gerry talks about “top tasks” a lot. He literally wrote the book on it:

Top tasks are what matter most to your customers.

Seems pretty obvious, right? But it’s actually pretty rare to see top tasks presented any differently than other options.

Look at the global navigation on most websites. Typically all the options are given equal prominence. Even the semantics under the hood often reflect this egalitarian ideal, with each option just another item in an unordered list. All the navigation options are equal, but I bet that the reality for most websites is that some navigation options are more equal than others.

I’ve been guilty of this on The Session. The site-wide navigation shows a number of options: tunes, events, discussions, etc. Each one is given equal prominence, but I can tell you without even looking at my server logs that 90% of the traffic goes to the tunes section—that’s the beating heart of The Session. That’s why the home page has a search form that defaults to searching for tunes.

I wanted the navigation to reflect the reality of what people are coming to the site for. I decided to make the link to the tunes section more prominent by bumping up the font size a bit.
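The CSS for that is tiny. Here’s a minimal sketch with hypothetical class names, not the actual styles from The Session:

/* hypothetical class names, for illustration only */
.site-nav a {
  font-size: 1rem;
}
.site-nav a.top-task {
  font-size: 1.25rem; /* the "tunes" link gets a little more prominence */
}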

I was worried about how weird this might look; we’re so used to seeing all navigation items presented equally. But I think it worked out okay (though it might take a bit of getting used to if you’re accustomed to the previous styling). It helps that “tunes” is a nice short word, so bumping up the font size on that word doesn’t jostle everything else around.

I think this adjustment is working well for this situation where there’s one very clear tippy-top task. I wouldn’t want to apply it across the board, making every item in the navigation proportionally bigger or smaller depending on how often it’s used. That would end up looking like a ransom note.

But giving one single item prominence like this tweaks the visual hierarchy just enough to favour the option that’s most likely to be what a visitor wants.

That last bit is crucial. The visual adjustment reflects what visitors want, not what I want. You could adjust the size of a navigation option that you want to drive traffic to, but in the long run, all you’re going to do is train people to trust your design less.

You don’t get to decide what your top task is. The visitors to your website do. Trying to foist an arbitrary option on them would be the tail wagging the dog.

Anyway, I’m feeling a lot better about the site-wide navigation on The Session now that it reflects reality a little bit more. Heck, I may even bump that font size up a little more.

The audio from dConstruct 2022

dConstruct 2022 was great fun. It was also the last ever dConstruct.

If you were there, and you’d like to re-live the magic, the audio from the talks is now available on the dConstruct Archive.

Thanks to some service worker magic, you can select any of those talks for offline listening later.
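For the curious, here’s a rough sketch of that kind of service worker magic. It’s not the actual code from the dConstruct Archive; the button, cache name, and data attribute are all hypothetical.

// In the page: a hypothetical "listen later" button caches one audio file.
const button = document.querySelector('.save-for-offline');
button.addEventListener('click', async () => {
  const cache = await caches.open('dconstruct-audio'); // hypothetical cache name
  await cache.add(button.dataset.audio); // hypothetical data-audio attribute holding the MP3 URL
});

// In the service worker (sw.js): serve cached responses, falling back to the network.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});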

The audio is also available on Huffduffer, on the dConstruct account. Here’s the RSS feed that you can pop into your podcast software of choice.

If you’re more of a visual person, you can watch videos of the slides synced with the audio. They’ve all got captions too (good ones, not just automatically generated).

So have a listen in whichever way you prefer.

Now that I’ve added the audio from the last dConstruct to the dConstruct archive, it feels like the closing scene of Raiders Of The Lost Ark. Roll credits.

Directory enquiries

I was talking to someone recently about a forgotten battle in the history of the early web. It was a battle between search engines and directories.

These days, when the history of the web is told, a whole bunch of services get lumped into the category of “competitors who lost to Google search”: Altavista, Lycos, Ask Jeeves, Yahoo.

But Yahoo wasn’t a search engine, at least not in the same way that Google was. Yahoo was a directory with a search interface on top. You could find what you were looking for by typing or you could zero in on what you were looking for by drilling down through a directory structure.

Yahoo wasn’t the only directory. DMOZ was an open-source competitor. You can still experience it at DMOZlive.com:

The official DMOZ.com site was closed by AOL on February 17th 2017. DMOZ Live is committed to continuing to make the DMOZ Internet Directory available on the Internet.

Search engines put their money on computation, or to use today’s parlance, algorithms (or if you’re really shameless, AI). Directories put their money on humans. Good ol’ information architecture.

It turned out that computation scaled faster than humans. Search won out over directories.

Now an entire generation has been raised in the aftermath of this battle. Monica Chin wrote about how this generation views the world of information:

Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.

Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.

Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.

Dr. Saavik Ford confirms:

We are finding a persistent issue with getting (undergrad, new to research) students to understand that a file/directory structure exists, and how it works. After a debrief meeting today we realized it’s at least partly generational.

We live in a world ordered only by search:

While some are quite adept at using labels, tags, and folders to manage their emails, others will claim that there’s no need to do so because you can easily search for whatever you happen to need. Save it all and search for what you want to find. This is, roughly speaking, the hot mess approach to information management. And it appears to arise both because search makes it a good-enough approach to take and because the scale of information we’re trying to manage makes it feel impossible to do otherwise. Who’s got the time or patience?

There are still hold-outs. You can prise files from Scott Jenson’s cold dead hands.

More recently, Linus Lee points out what we’ve lost by giving up on directory structures:

Humans are much better at choosing between a few options than conjuring an answer from scratch. We’re also much better at incrementally approaching the right answer by pointing towards the right direction than nailing the right search term from the beginning. When it’s possible to take a “type in a query” kind of interface and make it more incrementally explorable, I think it’s almost always going to produce a more intuitive and powerful interface.

Directory structures still make sense to me (because I’m old) but I don’t have a problem with search. I do have a problem with systems that try to force me to search when I want to drill down into folders.

I have no idea what Google Drive and Dropbox are doing but I don’t like it. They make me feel like the opposite of a power user. Trying to find a file using their interfaces makes me feel like I’m trying to get a printer to work. Randomly press things until something happens.

Anyway. Enough fist-shaking from me. I’m going to ponder Linus’s closing words. Maybe defaulting to a search interface is a cop-out:

Text search boxes are easy to design and easy to add to apps. But I think their ease on developers may be leading us to ignore potential interface ideas that could let us discover better ideas, faster.

Even more UX London speaker updates

I’ve added five more faces to the UX London line-up.

Irina Rusakova will be giving a talk on day one, the day that focuses on research. Her talk on designing with the autistic community is one I’m really looking forward to.

Also on day one, my friend and former Clearleftie Cennydd Bowles will be giving a workshop called “What could go wrong?” He literally wrote the book on ethical design.

Day two is all about creation. My co-worker Chris How will be speaking. “Nepotism!” you cry! But no, Chris is speaking because I had the chance to see his talk—called “Unexpectedly obvious”—and I thought “that’s perfect for UX London!”:

Let him take you on a journey through time and across the globe sharing stories of designs that solve problems in elegant if unusual ways.

Also on day two, you’ve got two additional workshops. Lou Downe will be running a workshop on designing good services, and Giles Turnbull will be running a workshop called “Writing for people who hate writing.”

I love that title! Usually when I contact speakers I don’t necessarily have a specific talk or workshop in mind, but I knew that I wanted that particular workshop from Giles.

When I wrote to Giles to ask him to come and speak, I began by telling him how much I enjoy his blog—I’m a long-time subscriber to his RSS feed. He responded and said that he also reads my blog—we’re blog buddies! (That’s a terrible term but there should be a word for people who “know” each other only through reading each other’s websites.)

Anyway, that’s another little treasure trove of speakers added to the UX London roster.

That’s nineteen speakers already and we’re not done yet—expect further speaker announcements soon. But don’t wait on those announcements before getting your ticket. Get yours now!

More UX London speaker updates

It wasn’t that long ago that I told you about some of the speakers that have been added to the line-up for UX London in June: Steph Troeth, Heldiney Pereira, Lauren Pope, Laura Yarrow, and Inayaili León. Well, now I’ve got another five speakers to tell you about!

Aleks Melnikova will be giving a workshop on day one, June 28th—that’s the day with a focus on research.

Stephanie Marsh—who literally wrote the book on user research—will also be giving a workshop that day.

Before those workshops though, you’ll get to hear a talk from the one and only Kat Zhou, the creator of Design Ethically. By the way, you can hear Kat talking about deceptive design in a BBC radio documentary.

Day two has a focus on content design so who better to deliver a workshop than Sarah Winters, author of the Content Design book.

Finally, on day three—with its focus on design systems—I’m thrilled to announce that Adekunle Oduye will be giving a talk. He too is an author. He co-wrote the Design Engineering Handbook. I also had the pleasure of talking to Adekunle for an episode of the Clearleft podcast on design engineering.

So that’s another five excellent speakers added to the line-up.

That’s a total of fifteen speakers so far with more on the way. And I’ll be updating the site with more in-depth descriptions of the talks and workshops soon.

If you haven’t yet got your ticket for UX London, grab one now. You can buy tickets for individual days, or to get the full experience and the most value, get a ticket for all three days.

UX London speaker updates

If you’ve signed up to the UX London newsletter then this won’t be news to you, but more speakers have been added to the line-up.

Steph Troeth will be giving a workshop on day one. That’s the day with a strong focus on research, and when it comes to design research, Steph is unbeatable. You can hear some of her words of wisdom in an episode of the Clearleft podcast all about design research.

Heldiney Pereira will be speaking on day two. That’s the day with a focus on content design. Heldiney previously spoke at our Content By Design event and it was terrific—his perspective on content design as a product designer is invaluable.

Lauren Pope will also be on day two. She’ll be giving a workshop. She recently launched a really useful content audit toolkit and she’ll be bringing that expertise to her UX London workshop.

Day three is going to have a focus on design systems (and associated disciplines like design engineering and design ops). Both Laura Yarrow and Inayaili León will be giving talks on that day. You can expect some exciting war stories from the design system trenches of HM Land Registry and GitHub.

I’ve got some more speakers confirmed but I’m going to be a tease and make you wait a little longer for those names. But check out the line-up so far! This is going to be such an excellent event (I know I’m biased, but really, look at that line-up!).

June 28th to 30th. Tobacco Dock, London. Get your ticket if you haven’t already.

Curating UX London 2022

The first speakers are live on the UX London 2022 site! There are only five people announced for now—just enough to give you a flavour of what to expect. There will be many, many more.

Putting together the line-up of a three-day event is quite challenging, but kind of fun too. On the one hand, each day should be able to stand alone. After all, there are one-day tickets available. On the other hand, it should feel like one cohesive conference, not three separate events.

I’ve decided to structure the three days to somewhat mimic the design process…

The first day is all about planning and preparation. This is like the first diamond in the double-diamond process: building the right thing. That means plenty of emphasis on research.

The second day is about creation and execution. It’s like that second diamond: building the thing right. This could cover potentially everything but this year the focus will be on content design.

The third day is like the third diamond in the double dia— no, wait. The third day is about growing, scaling, and maintaining design. That means there’ll be quite an emphasis on topics like design systems and design engineering, maybe design ops.

But none of the days will be exclusively about a single topic. There are evergreen topics that apply throughout the process: product design, design ethics, inclusive design.

It’s a lot to juggle! But I’m managing to overcome choice paralysis and assemble a very exciting line-up indeed. Trust me—you won’t want to miss this!

Early bird tickets are available until February 28th. That’s just a few days away. I recommend getting your tickets now—you won’t regret it!

Quite a few people are bringing their entire teams, which is perfect. UX London can be both an educational experience and a team-bonding exercise. Let’s face it, it’s been too long since any of us have had a good off-site.

If you’re one of those lucky people who’s coming along (or if you’re planning to), I’m curious: given the themes mentioned above, are there specific topics that you’d hope to see covered? Drop me a line and let me know.

Also, if you read the description of the event and think “Oh, I know the perfect speaker!” then I’d love to hear from you. Maybe that speaker is you. (Although, cards on the table; if you look like me—another middle-aged white man—I may take some convincing.)

Right. Time to get back to my crazy wall of conference curation.

02022-02-22

Eleven years ago, I made a prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

One year later, Matt called me on it and the prediction officially became a bet:

We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.

I’m very happy to lose this bet.

When I made the original prediction eleven years ago that a URL on the longbets.org site would no longer be available, I did so in a spirit of mischief—it was a deliberately meta move. But it was also informed by a genuine feeling of pessimism around the longevity of links on the web. While that pessimism was misplaced in this case, it was informed by data.

The lifetime of a URL on the web remains shockingly short. What I think has changed in the intervening years is that people may have become more accustomed to the situation. People used to say “once something is online it’s there forever!”, which infuriated me because the real problem is the exact opposite: if you put something online, you have to put in real effort to keep it online. After all, we don’t really buy domain names; we just rent them. And if you publish on somebody else’s domain, you’re at their mercy: Geocities, MySpace, Facebook, Medium, Twitter.

These days my view towards the longevity of online content has landed somewhere in the middle of the two dangers. There’s a kind of Murphy’s Law around data online: anything that you hope will stick around will probably disappear and anything that you hope will disappear will probably stick around.

One huge change in the last eleven years that I didn’t anticipate is the migration of websites to HTTPS. The original URL of the prediction used HTTP. I’m glad to see that original URL now redirects to a more secure protocol. Just like most of the World Wide Web. I think we can thank Let’s Encrypt for that. But I think we can also thank Edward Snowden. We are no longer as innocent as we were eleven years ago.

I think if I could tell my past self that most of the web would be using HTTPS by 2022, my past self would be very surprised …’though not as surprised as at discovering that time travel had also apparently been invented.

The Internet Archive has also been a game-changer for digital preservation. While it’s less than ideal that something isn’t reachable at its original URL, knowing that there’s probably a copy of the content at archive.org lessens the sting considerably. I couldn’t be happier that this fine institution is the recipient of the stakes of this bet.