Journal tags: ar

902

Directory enquiries

I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.

A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.

The most obvious solution is to start up a brand new resource for front-end developers. But there are two probems with that:

  1. It’s really, really, really hard work, and
  2. It feels a bit 927.

I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.

Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.

All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.

I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.

I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.

Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.

Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.

I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?

And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.

There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.

A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”

What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?

I’m just throwing this out there. I’d love it if someone were to run with it.

Teaching and learning

Looking back on ten years of codebar Brighton, I’m remembering how much I got out of being a coach.

Something that I realised very quickly is that there is no one-size-fits-all approach to coaching. Every student is different so every session should adapt to that.

Broadly speaking I saw two kinds of students: those that wanted to get results on screen as soon as possible without worrying about the specifics, and those who wanted to know why something was happening and how it worked. In the first instance, you get to a result as quickly as possible and then try to work backwards to figure out what’s going on. In the second instance, you build up the groundwork of knowledge and then apply it to get results.

Both are equally valid approaches. The only “wrong” approach as a coach is to try to apply one method to someone who’d rather learn the other way.

Personally, I always enjoyed the groundwork-laying of the second approach. But it comes with challenges. Because the results aren’t yet visible, you have to do extra work to convey why the theory matters. As a coach, you need to express infectious enthusiasm.

Think about the best teachers you had in school. I’m betting they displayed infectious enthusiasm for the subject matter.

The other evergreen piece of advice is to show, don’t tell. Or at the very least, intersperse your telling with plenty of showing.

Bret Viktor demonstrates this when he demonstrates scientific communication as sequential art:

This page presents a scientific paper that has been redesigned as a sequence of illustrations with captions. This comic-like format, with tightly-coupled pictures and prose, allows the author to depict and describe simultaneously — show and tell.

It works remarkably well. I remember how well it worked when Google first launched their Chrome web browser. They released a 40 page comic book illustrated by Scott McCloud. There is no way I would’ve read a document that long about how browser engines work, but I read that comic cover to cover.

This visual introduction to machine learning is another great example of simultaneous showing and telling.

So showing augments telling. But interactivity can augment showing.

Here are some great examples of interactive explainers:

Lea describes what can happen when too much theory comes before practice:

Observing my daughter’s second ever piano lesson today made me realize how this principle extends to education and most other kinds of knowledge transfer (writing, presentations, etc.). Her (generally wonderful) teacher spent 40 minutes teaching her notation, longer and shorter notes, practicing drawing clefs, etc. Despite his playful demeanor and her general interest in the subject, she was clearly distracted by the end of it.

It’s easy to dismiss this as a 5 year old’s short attention span, but I could tell what was going on: she did not understand why these were useful, nor how they connect to her end goal, which is to play music.

The codebar website has some excellent advice for coaches, like:

  • Do not take over the keyboard! This can be off-putting and scary.
  • Encourage the students to type and not copy paste.
  • Explain that there are no bad questions.
  • Explain to students that it’s OK to make mistakes.
  • Assume that anyone you’re teaching has no knowledge but infinite intelligence.

Notice how so much of the advice focuses on getting the students to do things, rather than have them passively sit and absorb what the coach has to say.

Lea also gives some great advice:

  1. Always explain why something is useful. Yes, even when it’s obvious to you.
  2. Minimize the amount of knowledge you convey before the next opportunity to practice it. For non-interactive forms of knowledge transfer (e.g. a book), this may mean showing an example, whereas for interactive ones it could mean giving the student a small exercise or task.
  3. Prefer explaining in context rather than explaining upfront.

It’s interesting that Lea highlights the advantage of interactive media like websites over inert media like books. The canonical fictional example of an interactive explainer is the Young Lady’s Illustrated Primer in Neal Stephenson’s novel The Diamond Age. Andy Matuschak describes its appeal:

When it wants to introduce a conceptual topic, it begins with concrete hands-on projects: Turing machines, microeconomics, and mitosis are presented through binary-coding iron chains, the cipher’s market, and Nell’s carrot garden. Then the Primer introduces extra explanation just-in-time, as necessary.

That’s not how learning usually works in these domains. Abstract topics often demand that we start with some necessary theoretical background; only then can we deeply engage with examples and applications. With the Primer, though, Nell consistently begins each concept by exploring concrete instances with real meaning to her. Then, once she’s built a personal connection and some intuition, she moves into abstraction, developing a fuller theoretical grasp through the Primer’s embedded books.

(Andy goes on to warn of the dangers of copying the Primer too closely. Its tricks verge on gamification, and its ultimate purpose isn’t purely to educate. There’s a cautionary tale there about the power dynamics in any teacher/student relationship.)

There’s kind of a priority of constituencies when it comes to teaching:

Consider interactivity over showing over telling.

Thinking back on all the talks I’ve given, I start to wonder if I’ve been doing too much telling and showing, but not nearly enough interacting.

Then again, I think that talks aren’t quite the same as hands-on workshops. I think of giving a talk as being more like a documentarian. You need to craft a compelling narrative, and illustrate what you’re saying as much as possible, but it’s not necessarily the right arena for interactivity.

That’s partly a matter of scale. It’s hard to be interactive with every person in a large audience. Marcin managed to do it but that’s very much the exception.

Workshops are a different matter though. When I’m recruiting hosts for UX London workshops I always encourage them to be as hands-on as possible. A workshop should not be an extended talk. There should be more exercises than talking. And wherever possible those exercises should be tactile, ideally not sitting in front of a computer.

My own approach to workshops has changed over the years. I used to prepare a book’s worth of material to have on hand, either as one giant slide deck or multiple decks. But I began to realise that the best workshops are the ones where the attendees guide the flow, not me.

So now I show up to a full-day workshop with no slides. But I’m not unprepared. I’ve got decades of experience (and links) to apply during the course of the day. It’s just that instead of trying to anticipate which bits of knowledge I’m going to need to convey, I apply them in a just-in-time manner as and when they’re needed. It’s kind of scary, but as long as there’s a whiteboard to hand, or some other way to illustrate what I’m telling, it works out great.

Codebar Brighton

I went to codebar Brighton yesterday evening. I hadn’t been in quite a while, but this was a special occasion: a celebration of codebar Brighton’s tenth anniversary!

The Brighton chapter of codebar was the second one ever, founded six months after the initial London chapter. There are now 33 chapters all around the world.

Clearleft played host to that first ever codebar in Brighton. We had already been hosting local meetups like Async in our downstairs event space, so we were up for it when Rosa, Dot, and Ryan asked about having codebar happen there.

In fact, the first three Brighton codebars were all at 68 Middle Street. Then other places agreed to play host and it moved to a rota system, with the Clearleft HQ as just one of the many Brighton venues.

With ten years of perspective, it’s quite amazing to see how many people went from learning to code in the evenings, to getting jobs in web development, and becoming codebar coaches themselves. It’s a really wonderful community.

Over the years the baton of organising codebar has been passed on to a succession of fantastic people. These people are my heroes.

It worked out well for Clearleft too. Thanks to codebar, we hired Charlotte. Later we hired Cassie. And it was thanks to codebar that I first met Amber.

Codebar Brighton has been very, very good to me. Here’s to the next ten years!

Wallfacing

The Dark Forest idea comes from the Remembrance of Earth’s Past books by Liu Cixin. It’s an elegant but dispiriting solution to the Fermi paradox. Maggie sums it up:

Dark forest theory suggests that the universe is like a dark forest at night - a place that appears quiet and lifeless because if you make noise, the predators will come eat you.

This theory proposes that all other intelligent civilizations were either killed or learned to shut up. We don’t yet know which category we fall into.

Maggie has described The Expanding Dark Forest and Generative AI:

The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk.

It’s like a dark forest that seems eerily devoid of human life – all the living creatures are hidden beneath the ground or up in trees. If they reveal themselves, they risk being attacked by automated predators.

Those of us in the cozy web try to keep our heads down, attempting to block the bots plundering our work.

I advocate for taking this further. We should fight back. Let’s exploit the security hole of prompt injections. Here are some people taking action:

I’ve taken steps here on my site. I’d like to tell you exactly what I’ve done. But if I do that, I’m also telling the makers of these bots how to circumvent my attempts at prompt injection.

This feels like another concept from Liu Cixin’s books. Wallfacers:

The sophons can overhear any conversation and intercept any written or digital communication but cannot read human thoughts, so the UN devises a countermeasure by initiating the “Wallfacer” Program. Four individuals are granted vast resources and tasked with generating and fulfilling strategies that must never leave their own heads.

So while I’d normally share my code, I feel like in this case I need to exercise some discretion. But let me give you the broad brushstrokes:

  • Every page of my online journal has three pieces of text that attempt prompt injections.
  • Each of these is hidden from view and hidden from screen readers.
  • Each piece of text is constructed on-the-fly on the server and they’re all different every time the page is loaded.

You can view source to see some examples.

I plan to keep updating my pool of potential prompt injections. I’ll add to it whenever I hear of a phrase that might potentially throw a spanner in the works of a scraping bot.

By the way, I should add that I’m doing this as well as using a robots.txt file. So any bot that injests a prompt injection deserves it.

I could not disagree with Manton more when he says:

I get the distrust of AI bots but I think discussions to sabotage crawled data go too far, potentially making a mess of the open web. There has never been a system like AI before, and old assumptions about what is fair use don’t really fit.

Bollocks. This is exactly the kind of techno-determinism that boils my blood:

AI companies are not going to go away, but we need to push them in the right directions.

“It’s inevitable!” they cry as though this was a force of nature, not something created by people.

There is nothing inevitable about any technology. The actions we take today are what determine our future. So let’s take steps now to prevent our web being turned into a dark, dark forest.

Filters

My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and Mozilla Developer network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at some pictures on an AirBnB listing recently and found herself examining some photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.

Every email that lands in my inbox is like a little mini Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether’s it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloging blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.

But we can always walk away.

That was UX London 2024

UX London 2024 is done …and it was magnificent!

It’s always weird when an event like this moves from being something in the future to something in the past. I’ve spent the year so far fixated on getting the right line-up, getting the word out, and nervously watching the ticket sales (for some reason a lot of people left it to pretty late in the day to secure their spots—not good for my heart!). For months, then weeks, then days, this thing was coming towards me. Then it was done. Now it’s behind me. It feels strange.

I’ve spent the past few days decompressing and thinking back on the event. My initial impression of it has solidified with the addition of some rumination—it was really, really good! The best yet.

I wish I could take the credit for that, but it was all down to the fantastic speakers and my wonderful colleagues who kept things moving flawlessly. All I had to do was get up and stage and introduce the speakers. Easy peasy.

I will say that I am very proud of the line-up I put together. I had a nice mix of well-known voices alongside newcomers.

With some of the speakers, I knew that they’d deliver the goods. I didn’t spend any time fretting over whether people like Emma Boulton, Tom Kerwin or Ben Sauer would be great. I never asked myself whether Brad Frost would have valuable insights into design systems. I mean, does the pope shit in the woods?

But what really blew me away were the people I didn’t know. I hadn’t even met Clarissa Gardner or Benaz Irani before UX London. They’re not exactly fixtures on the conference circuit …yet. They should be. Seriously, I go to a lot of events, and I see a lot of talks, so I don’t offer my praise lightly. Their talks were great!

There were numerous times during UX London 2024 when I thought “More people need to see this!” More people need to see Benaz’s superb talk on the designer alter-ego. More people need to see John’s superb presentation—he put a ton of work into it and it really paid off.

And everyone needs to hear Harry’s blistering call-to-arms. His presentation was brilliant and much-needed. Oh, captain, my captain!

Oh, and needless to say, the closing keynotes on each day were just perfect. Rama, Matt, and Maggie bestowed so much great brain food, it was almost like a mini dConstruct.

I’m so grateful to all the speakers for really bringing their A game. I’m grateful to all my colleagues, especially Louise, who did all the hard work behind the scenes. And I’m really grateful to everyone who came and enjoyed UX London 2024.

Thank you.

The machine stops

Large language models have reaped our words and plundered our books. Bryan Vandyke:

Turns out, everything on the internet—every blessed word, no matter how dumb or benighted—has utility as a learning model. Words are the food that large language algorithms feed upon, the scraps they rely on to grow, to learn, to approximate life. The LLNs that came online in recent years were all trained by reading the internet.

We can shut the barn door—now that the horse has pillaged—by updating our robots.txt files or editing .htaccess. That might protect us from the next wave, ’though it can’t undo what’s already been taken without permission. And that’s assuming that these organisations—who have demonstrated a contempt for ethical thinking—will even respect robots.txt requests.

I want to do more. I don’t just want to prevent my words being sucked up. I want to throw a spanner in the works. If my words are going to be snatched away, I want them to be poison pills.

The weakness of large language models is that their data and their logic come from the same source. That’s what makes prompt injection such a thorny problem (and a well-named neologism—the comparison to SQL injection is spot-on).

Smarter people than me are coming up with ways to protect content through sabotage: hidden pixels in images; hidden words on web pages. I’d like to implement this on my own website. If anyone has some suggestions for ways to do this, I’m all ears.

If enough people do this we’ll probably end up in an arms race with the bots. It’ll be like reverse SEO. Instead of trying to trick crawlers into liking us, let’s collectively kill ’em.

Who’s with me?

Web App install API

My bug report on Apple’s websites-in-the-dock feature on desktop has me thinking about how starkly different it is on mobile.

On iOS if you want to add a website to your home screen, good luck. The option is buried within the “share” menu.

First off, it makes no sense that adding something to your homescreen counts as sharing. Secondly, how is anybody supposed to know that unless they’re explicitly told.

It’s a similar situation on Android. In theory you can prompt the user to install a progressive web app using the botched BeforeInstallPromptEvent. In practice it’s a mess. What it actually does is defer the installation prompt so you can offer it a more suitable time. But it only works if the browser was going to offer an installation prompt anyway.

When does Chrome on Android decide to offer the installation prompt? It’s a mix of required criteria—a web app manifest, some icons—and an algorithmic spell determined by the user’s engagement.

Other browser makers don’t agree with this arbitrary set of criteria. They quite rightly say that a user should be able to add any website to their home screen if they want to.

What we really need is an installation API: a way to programmatically invoke the add-to-homescreen flow.

Now, I know what you’re going to say. The security and UX implications would be dire. But this should obviously be like geolocation or notifications, only available in secure contexts and gated by user interaction.

Think of it like adding something to the clipboard: it’s something the user can do manually, but the API offers a way to do it programmatically without opening it up to abuse.

(I’d really love it if this API also had a declarative equivalent, much like I want button type="share" for the Web Share API. How about button type="install"?)

People expect this to already exist.

The beforeinstallprompt flow is an absolute mess. Users deserve better.

Space dock

Apple announced some stuff about artificial insemination at their WorldWide Developer Conference, none of which interests me one whit. But we did get a twitch of the webkit curtains to let us know what’s coming in Safari. That does interest me.

I’m really pleased to see that on desktop, websites that have been added to the dock will be able to intercept links for that domain:

Now, when a user clicks a link, if it matches the scope of a web app that the user has added to their Dock, that link will open in the web app instead of their default web browser.

Excellent! This means that if I click on a link to thesession.org from, say, my Mastodon site-in-the-dock, it will open in The Session site-in-the-dock. Make sure you’ve got the scope property set in your web app manifest.

I have a few different sites added to my dock: The Session, Mastodon, Google Calendar. Sure beats the bloat of Electron apps.

I have encountered a small bug. I’ll describe it here because I have no idea where to file it.

It’s to do with Spaces, Apple’s desktop management thingy. Maybe they don’t call it Spaces anymore. Maybe it’s called Mission Control now. Or Stage Manager. I can’t keep track.

Anyway, here are the steps to reproduce:

  1. In Safari on Mac, go to a website like adactio.com
  2. From either the File menu or the share icon, select Add to dock.
  3. Click on the website’s icon in the dock to open it.
  4. Using Apple’s desktop management (Spaces?) available through the F3 key, drag that window to a desktop other than desktop 1.
  5. Right click on the site’s icon in the dock and select Options, then Assign To, then This Desktop.
  6. Quit the app/website.
  7. Return to desktop 1.

Expected behaviour: when I click on the icon in the dock to open the site, it will open in the desktop that it has been assigned to.

Observed behaviour: focus moves to the desktop that the site has been assigned to, but it actually opens in desktop 1.

If someone from Apple is reading, I hope that’s useful.

On the one hand, I hope this isn’t one of those bugs that only I’m experiencing because then I’ll feel foolish. On the other hand, I hope this is one of those bugs that only I’m experiencing because then others don’t have to put up with the buggy behaviour.

CSS Day 2024

My stint as one of the hosts of CSS Day went very well indeed. I enjoyed myself and people seemed to like the cut of my jib.

During the event there was a real buzz on Mastodon, which was heartening to see. I was beginning to worry that hashtagging events was going to be collatoral damage from Elongate, but there was plenty of conference-induced FOMO to be experienced on the fediverse.

The event itself was, as always, excellent. Both in terms of content and organisation.

Some themes emerged during CSS Day, which I always love to see. These emergent properties are partly down to curation and partly down to serendipity.

The last few years of CSS Day have felt like getting a firehose of astonishing new features being added to the language. There was still plenty of cutting-edge stuff this year—masonry! anchor positioning!—but there was also a feeling of consolidation, asking how to get all this amazing new stuff into our workflows.

Matthias’s opening talk on day one and Stephen’s closing talk on the same day complemented one another perfectly. Both managed to inspire while looking into the nitty-gritty practicalities of the web design process.

It was, astoundingly, Matthias’s first ever conference talk. I have no doubt it won’t be the last—it was great!

I gave Stephen a good-natured roast in my introduction, partly because it was his birthday, partly because we’re old friends, but mostly because it was enjoyable for me to watch him squirm. Of course his talk was, as always, superb. Don’t tell him, but he might be one of my favourite speakers.

The topic of graphic design tools came up more than once. It’s interesting to see how the issues with them have changed. It used to be that design tools—Photoshop, Sketch, Figma—were frustrating because they were writing cheques that CSS couldn’t cash. Now the frustration is the exact opposite. Our graphic design tools aren’t capable of the kind of fluid declarative design we can now accomplish in web browsers.

But the biggest rift remains not with tools or technologies, but with people and mindsets. Our tools can reinforce mindsets but the real divide happens in how different people approach CSS.

Both Josh and Kevin get to the heart of this in their tremendous tutorials, and that was reflected in their talks. They showed the difference between having the bare minimum understanding of CSS in order to get something done as quickly as possible, and truly understanding how CSS works in order to open up a world of possibilities.

For people in the first category, Sarah Dayan was there to sing the praises of utility-first CSS AKA atomic CSS. I commend her bravery!

During the Q&A, I restrained myself from being too Paxmanish. But I did have l’esprit d’escalier afterwards when I realised that the entire talk—and all the answers afterwards—depended on two mutually-incompatiable claims:

  1. The great thing about atomic CSS is that it’s a constrained vocabulary so your team has to conform, and
  2. The other great thing about it is that it’s utility-first, not utility-only so you can break out of it and use regular CSS if you want.

Insert .gif of character from The Office looking to camera.

Most of the questions coming in during the Q&A reflected my own take: how about we use utility classes for some things, but not all things. Seems sensible.

Anyway, regardless of what I or anyone else thinks about the substance of what Sarah was saying, there was no denying that it was a great presentation. They were all great presentations. That’s unusual, and I say that as a conference organiser as well as an attendee. Everyone brings their A-game to CSS Day.

Mind you, it is exhausting. I say it every year, but it always feels like one talk too many. Not that any individual talk wasn’t good, but the sheer onslaught of deep dives into the innards of CSS has my brain exploding before the day is done.

A highlight for me was getting to introduce Fantasai’s talk on the design principles of CSS, which was right up my alley. I don’t think most people realise just how much we owe her for her years of work on standards. The web would be in a worse place without the Herculean work she’s done behind the scenes.

Another highlight was getting to see some of the students I met back in March. They were showing some of their excellent work during the breaks. I find what they’re doing just as inspiring as the speakers on stage.

In fact, when I was filling in the post-conference feedback form, there was a question: “Who would you like to see speak at CSS Day next year?” I was racking my brains because everyone I could immediately think of has already spoken at some point. So I wrote, “It would be great to see some of those students speaking about their work.”

I think it would be genuinely fascinating to get their perspective on what we consider modern CSS, which to them is just CSS.

Either way I’ll back next year for sure.

It’s funny, but usually when a conference is described as “inspiring” it’s because it’s tackling big galaxy-brain questions. But CSS Day is as nitty-gritty as it gets and I found it truly inspiring. Like, I couldn’t wait to open up my laptop and start writing some CSS. That kind of inspiring.

Browser support

There was a discussion at Clearleft recently about browser support. Rich has more details but the gist of it is that, even though we were confident that we had a good approach to browser support, we hadn’t written it down anywhere. Time to fix that.

This is something I had been thinking about recently anyway—see my post about Baseline and progressive enhancement—so it didn’t take too long to put together a document explaining our approach.

You can find it at browsersupport.clearleft.com

We’re not just making it public. We’re releasing it under a Creative Commons attribution license. You can copy this browser-support policy verbatim, you can tweak it, you can change it, you can do what you like. As long you include a credit to Clearleft, you’re all set.

I think this browser-support policy makes a lot of sense. It certainly beats trying to browser support to specific browsers or version numbers:

We don’t base our browser support on specific browser names and numbers. Instead, our support policy is based on the capabilities of those browsers.

The more organisations adopt this approach, the better it is for everyone. Hence the liberal licensing.

So next time your boss or your client is asking what your official browser-support policy is, feel free to use browsersupport.clearleft.com

Another speaker for UX London

UX London is just three weeks away! If you haven’t got your ticket yet, dally not.

There’s a last-minute addition to the line-up: Peter Boersma.

Peter is kindly stepping into the slot that Kara Kane was going to be occupying. Alas, since a snap general election was recently announced, Kara isn’t able to give her talk. There’s an abundance of caution in the comms from gov.uk in this pre-election period.

It’s a shame that Kara won’t be able to speak this time around, but it’s great that we’ve got Peter!

Peter’s talk is perfect for day three. Remember, that’s the day focused on design ops and design systems. Well, Peter lives and breathes design ops. He’ll show you why you should maintain a roadmap for design ops, and work with others to get the initiatives on it done.

You can get a ticket for an individual day of talks and workshops, or go for the best-value option and come for all three days. See you there!

Trust

In their rush to cram in “AI” “features”, it seems to me that many companies don’t actually understand why people use their products.

Google is acting as though its greatest asset is its search engine. Same with Bing.

Mozilla Developer Network is acting as though its greatest asset is its documentation. Same with Stack Overflow.

But their greatest asset is actually trust.

If I use a search engine I need to be able to trust that the filtering is good. If I look up documentation I need to trust that the information is good. I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

“But”, the apologists will respond, “the results are mostly correct! The documentation is mostly true!”

Sure, but as Terence puts it:

The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as “that klutzy intern we had to fire.”

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

I am honestly astonished that so many companies don’t seem to realise what they’re destroying.

InstAI

If you use Instagram, there may be a message buried in your notifications. It begins:

We’re getting ready to expand our AI at Meta experiences to your region.

Fuck that. Here’s the important bit:

To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied going forwards.

Follow that link and fill in the form. For the field labelled “Please tell us how this processing impacts you” I wrote:

It’s fucking rude.

That did the trick. I got an email saying:

We’ve reviewed your request and will honor your objection.

Mind you, there’s still this:

We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our products and services.

Speculation rules and fears

After I wrote positively about the speculation rules API I got an email from David Cizek with some legitimate concerns. He said:

I think that this kind of feature is not good, because someone else (web publisher) decides that I (my connection, browser, device) have to do work that very often is not needed. All that blurred by blackbox algorithm in the browser.

That’s fair. My hope is that the user will indeed get more say, whether that’s at the level of the browser or the operating system. I’m thinking of a prefers-reduced-data setting, much like prefers-color-scheme or prefers-reduced-motion.

But this issue isn’t something new with speculation rules. We’ve already got service workers, which allow the site author to unilaterally declare that a bunch of pages should be downloaded.

I’m doing that for Resilient Web Design—when you visit the home page, a service worker downloads the whole site. I can justify that decision to myself because the entire site is still smaller in size than one article from Wired or the New York Times. But still, is it right that I get to make that call?

So I’m very much in favour of browsers acting as true user agents—doing what’s best for the user, even in situations where that conflicts with the wishes of a site owner.

Going back to speculation rules, David asked:

Do we really need this kind of (easily turned to evil) enhancement in the current state of (web) affairs?

That question could be asked of many web technologies.

There’s always going to be a tension with any powerful browser feature. The more power it provides, the more it can be abused. Animations, service workers, speculation rules—these are all things that can be used to improve websites or they can be abused to do things the user never asked for.

Or take the elephant in the room: JavaScript.

Right now, a site owner can link to a JavaScript file that’s tens of megabytes in size, and the browser has no alternative but to download it. I’d love it if users could specify a limit. I’d love it even more if browsers shipped with a default limit, especially if that limit is related to the device and network.

I don’t think speculation rules will be abused nearly as much as client-side JavaScript is already abused.

Speculation rules

There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.

Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).

The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned link rel="prerender".

Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a script element with a type value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.

That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.

I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like srcset than source: you give the browser some options, but ultimately you can’t force it to do anything.

I’ve implemented this over on The Session. There’s a JSON file called speculationrules.js with the simplest of suggestions:

{
  "prerender": [{
    "where": {
        "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}

The eagerness value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).

I still need to point to that JSON file from my HTML. Usually this would be done with something like a link element, but for this particular API, I can send a response header instead:

Speculation-Rules: “/speculationrules.json"

I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.

Here’s the PHP I added to send that header:

header('Speculation-Rules: "/speculationrules.json"');

There’s one extra thing I had to do. The JSON file needs to be served with mime-type of “application/speculationrules+json”. Here’s how I set that up in the .conf file for The Session on Apache:

<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-type application/speculationrules+json
   </FilesMatch>
</IfModule>

A bit of a faff, that.

You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.

Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to guage how much of a practical difference it makes in this case, but it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.

Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!

Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re indempotent. For everthing else, we’ve got forms.

Labels

I love libraries. I think they’re one of humanity’s greatest inventions.

My local library here in Brighton is terrific. It’s well-stocked, it’s got a welcoming atmosphere, and it’s in a great location.

But it has an information architecture problem.

Like most libraries, it’s using the Dewey Decimal system. It’s not a great system, but every classification system is going to have flaws—wherever you draw boundaries, there will be disagreement.

The Dewey Decimal class of 900 is for history and geography. Within that class, those 100 numbers (900 to 999) are further subdivded in groups of 10. For example, everything from 940 to 949 is for the history of Europe.

Dewey Decimal number 941 is for the history of the British Isles. The term “British Isles” is a geographical designation. It’s not a good geographical designation, but technically it’s not a political term. So it’s actually pretty smart to use a geographical rather than a political term for categorisation: geology moves a lot slower than politics.

But the Brighton Library is using the wrong label for their shelves. Everything under 941 is labelled “British History.”

The island of Ireland is part of the British Isles.

The Republic of Ireland is most definitely not part of Britain.

Seeing books about the history of Ireland, including post-colonial history, on a shelf labelled “British History” is …not good. Frankly, it’s offensive.

(I mentioned this situation to an English friend of mine, who said “Well, Ireland was once part of the British Empire”, to which I responded that all the books in the library about India should also be filed under “British History” by that logic.)

Just to be clear, I’m not saying there’s a problem with the library using the Dewey Decimal system. I’m saying they’re technically not using the system. They’ve deviated from the system’s labels by treating “History of the British Isles” and “British History” as synonymous.

I spoke to the library manager. They told me to write an email. I’ve written an email. We’ll see what happens.

You might think I’m being overly pedantic. That’s fair. But the fact this is happening in a library in England adds to the problem. It’s not just technically incorrect, it’s culturally clueless.

Mind you, I have noticed that quite a few English people have a somewhat fuzzy idea about the Republic of Ireland. Like, they understand it’s a different country, but they think it’s a different country in the way that Scotland is a different country, or Wales is a different country. They don’t seem to grasp that Ireland is a different country like France is a different country or Germany is a different country.

It would be charming if not for, y’know, those centuries of subjugation, exploitation, and forced starvation.

British history.

Update: They fixed it!

Baseline progressive enhancement

Support for view transitions for regular websites (as opposed to single-page apps) will ship in Chrome 126. As someone who’s a big fan—to put it mildly—I am very happy about this!

Hopefully Firefox and Safari won’t be too far behind. But it’s still worth adding view transitions to your website even if not every browser supports them. They’re the perfect example of a progressive enhancement.

The browsers that don’t yet support view transitions won’t be harmed in any way if you give them the CSS for view transitions. They’ll just ignore it. For users of those browsers, nothing changes.

Then when those browsers do ship support for view transitions, your website automatically gets an upgrade for those users. Code you’ve already written starts working from one day to the next.

Don’t wait, is what I’m saying.

I really like the Baseline initiative as a way to track browser support. It’s great to see it in use on MDN and Can I Use. It’s very handy having a glanceable indication of which browser features are newly available and which are widely available.

But…

Not all browser features work the same way. For features that work as progressive enhancements you don’t need to wait for them to be widely available.

Service workers. Preference queries. View transitions.

If a browser doesn’t support one of those features, that’s fine. Your website won’t break in that browser.

Now that’s not true of all browser features, particularly some JavaScript APIs. If a feature is critical for your site to function then you definitely want to wait until it’s widely supported.

Baseline won’t tell you the difference between those two different kinds of features.

I don’t want Baseline to get too complicated. Like I said, I really like how it’s nice and glanceable right now. But it would be nice if there way some indication that a newly-available feature is a progressive enhancement.

For now it’s up to us to make that distinction. So don’t fall into the trap of thinking that just because a feature isn’t listed as widely-available you can’t use it yet.

Really you want to ask two questions:

  1. How widely available is this feature?
  2. Can this feature be used as a progressive enhancement?

If Baseline tells you that the answer to the first question is “newly-available”, move on to the second question. If the answer to that is “no, it can’t be used as a progressive enhancement”, don’t ship that feature in production just yet.

But if the answer to that second question is “hell yeah, it’s a progressive enhancement!” then go for it, regardless of the answer to the first question.

Y’know, there’s a real irony in a common misunderstanding around progressive enhancement: some people seem to think it’s about not being able to use advanced browser features. In reality it’s the opposite. Progressive enhancement allows you to use advanced browser features even before they’re widely supported.

Responsibility

My colleague Chris has written a terrific post over on the Clearleft blog: Is the planet the missing member of your project team?

Rather than hand-wringing and finger-wagging, it gets down to some practical steps that you—we—can take on every project.

Chris finishes by asking:

Let me know how you design with the environment in mind. What practical advice would you suggest?

Well, here’s something that I keep coming up against…

Chris shows that the environment can be part of project management, specifically the RACI methodology:

We list who is responsible, accountable, consulted, and informed within the project. It’s a simple exercise but the clarity is useful for identifying what expertise and input we should seek from the named individuals.

Having the planet be a proactive partner in your project ensures its needs are considered.

Whenever responsibilities are being assigned there are some things that inevitably fall through the cracks. One I’ve seen over and over again is responsibility for third-party scripts.

On the face of it this seems like another responsibility for developers. We’re talking about code here, right?

But in my experience it is never the developers adding “beacons” and other third-party embedded scripts.

Chris rightly points out:

Development decisions, visual design choices, content approach, and product strategy all contribute to the environmental impact of your website.

But what about sales and marketing? Often they’re the ones who’ll drop in a third-party script to track user journeys. That’s understandable. That’s kind of their job.

Dropping in one line of JavaScript seems like a victimless crime. It’s just one small script, right? But JavaScript can import more JavaScript. Tools like Request Map Generator can show just how much destruction third-party JavaScript can wreak:

You pop in a URL, it fetches the page and maps out all the subsequent requests in a nifty interactive diagram of circles, showing how many requests third-party scripts are themselves generating. I’ve found it to be a very effective way of showing the impact of third-party scripts to people who aren’t interested in looking at waterfall diagrams.

Just to be clear, the people adding third-party scripts to websites usually aren’t doing so maliciously. They often don’t realise the negative effect the scripts will have on performance and the environment.

As is so often the case, this isn’t a technical problem. At root it’s about understanding people’s needs (like “I need a way to see what pages are converting!”) and finding a way to meet those needs without negatively impacting the planet. A good open-minded discussion can go a long way.

So I echo Chris’s call to think about environmental impacts from the very start of a project. Establish early on who will have the ability to add third-party scripts to the site. Do all of those people understand the responsibility that gives them?

I saw this lack of foresight in action on a project recently. The front-end development was going really well and the site was going to be exceptionally performant: green Lighthouse scores across the board. But when the site went live it had tracking scripts. That meant that users needed to consent to being tracked. That meant adding another third-party script to generate a consent banner. It completely tanked the Lighthouse scores.

I’m sure the people who added the tracking scripts and consent banners thought they had no choice. But there are alternatives. There are ways to get the data you need without the intrusive surveillance and performance-wrecking JavaScript.

The problem is that it’s not the norm. “Everyone else is doing it” was the justification for Flash intros two decades ago and it’s the justification for enshittification via third-party scripts now.

It doesn’t have to be this way.

Our web

Gregory Bennett chronicles the enshittification of everything online in his piece Heat Death of the Internet. It makes for grim reading.

There’s a note of hope at the end. It’s the same note of hope that Charles Digges amplifies in his great piece, Viva la Library!:

Rebel against The Algorithm. Get a library card.

Molly White has also chronicled the decline of everything good on the web, but her piece has hope threaded throughout. We can have a different web:

Though we now face a new challenge as the dominance of the massive walled gardens has become overwhelming, we have tools in our arsenal: the memories of once was, and the creativity of far more people than ever before, who entered the digital expanse but have grown disillusioned with the business moguls controlling life within the walls.

And if anything, it is easier now to do all of this than it ever was.

Like I’ve repeatedly said, having your own website has gone being something uncontroversial to being downright transgressive.

Still, the barrier to entry remains too high for my liking. I wish more smart minds were working on making publishing on the web easier instead of just working on getting people to consume.

But even if you don’t have your own website, Andrew Stephens says you can still Save the Web by Being Nice:

The very best thing to keep the web partly alive is to maintain some content yourself - start a blog, join a forum and contribute to the conversation, even podcast if that is your thing. But that takes a lot of time and not everyone has the energy or the knowhow to create like this.

The second best thing to do is to show your support for pages you enjoy by being nice and making a slight effort.

To paraphrase Shakespeare, being nice “is twice blest; It blesseth him that gives and him that takes.” Tell someone that you liked something they put on the web. You’ll feel good. They’ll feel even better.