
Ad tech

Back when South by Southwest wasn’t terrible, there used to be an annual panel called Browser Wars populated with representatives from the main browser vendors (except for Apple, obviously, who would never venture onto a stage outside of their own events).

I remember getting into a heated debate with the panelists during the 2010 edition. I was mad about web fonts.

Just to set the scene, web fonts didn’t exist back in 2010. That’s what I was mad about.

There was no technical reason why we couldn’t have web fonts. The reason why we didn’t get web fonts for years and years was because browser makers were concerned about piracy and the interests of type foundries.

That’s nice and all, but as I said during that panel, I don’t recall any such concerns being raised for photographers when the img element was shipped. Nor was the original text-only web held back by writers’ legitimate fear of plagiarism.

My point was not that these concerns weren’t important, but that it wasn’t the job of web browsers to shore up existing business models. To use standards-speak, these concerns are orthogonal.

I’m reminded of this when I see browser makers shoring up the business of behavioural advertising.

I subscribe to the RSS feed of updates to Chrome. Not all of it is necessarily interesting to me, but all of it is supposedly aimed at developers. And yet, in amongst the posts about APIs and features, there’ll be something about the Orwellianly-titled “privacy sandbox”.

This is only of interest to one specific industry: behavioural online advertising driven by surveillance and tracking. I don’t see any similar efforts being made for teachers, cooks, architects, doctors or lawyers.

It’s a ludicrous situation that I put down to the fact that Google, the company that makes Chrome, is also the company that makes its money from targeted advertising.

But then Mozilla started with the same shit.

Now, it’s one thing to roll out a new so-called “feature” to benefit behavioural advertising. It’s quite another to make it enabled by default. That’s a piece of deceptive design that has no place in Firefox. Defaults matter. Browser makers know this. It’s no accident that this “feature” was enabled by default.

This disgusts me.

It disgusts me all the more that it’s all for nothing. Notice that I’ve repeatedly referred to behavioural advertising. That’s the kind that relies on tracking and surveillance to work.

There is another kind of advertising. Contextual advertising is when you show an advertisement related to the content of the page the user is currently on. The advertiser doesn’t need to know anything about the user, just the topic of the page.

Conventional wisdom has it that behavioural advertising is much more effective than contextual advertising. After all, why would there be such a huge industry built on tracking and surveillance if it didn’t work? See, for example, this footnote by John Gruber:

So if contextual ads generate, say, one-tenth the revenue of targeted ads, Meta could show 10 times as many ads to users who opt out of targeting. I don’t think 10× is an outlandish multiplier there — given how remarkably profitable Meta’s advertising business is, it might even need to be higher than that.

Seems obvious, right?

But the idea that behavioural advertising works better than contextual advertising has no basis in reality.

If you think you know otherwise, Jon Bradshaw would like to hear from you:

Bradshaw challenges industry to provide proof that data-driven targeting actually makes advertising more effective – or in fact makes it worse. He’s spoiling for a debate – and has three deep, recent studies that show: broad reach beats targeting for incremental growth; that the cost of targeting outweighs the return; and that second and third party data does not outperform a random sample. First party data does beat the random sample – but contextual ads massively outperform even first party data. And they are much, much cheaper. Now, says Bradshaw, let’s see some counter-evidence from those making a killing.

If targeted advertising is going to get preferential treatment from browser makers, I too would like to see some evidence that it actually works.

Further reading:

Directory enquiries

I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.

A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.

The most obvious solution is to start up a brand new resource for front-end developers. But there are two problems with that:

  1. It’s really, really, really hard work, and
  2. It feels a bit xkcd 927.

I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.

Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.

All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.

I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.

I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.

Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.

Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.

I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?

And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.
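
To make that concrete, a single entry in such a directory might look something like this: a sketch with made-up fields, showing one article living in several categories at once.

// One hypothetical directory entry, filed under multiple categories.
const entry = {
  title: 'A guide to fluid typography',
  url: 'https://example.com/fluid-typography/',
  author: 'Jane Doe',
  categories: ['CSS', 'Typography', 'Layout'],
  format: 'article'
};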

There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.

A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”

What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?

I’m just throwing this out there. I’d love it if someone were to run with it.

Teaching and learning

Looking back on ten years of codebar Brighton, I’m remembering how much I got out of being a coach.

Something that I realised very quickly is that there is no one-size-fits-all approach to coaching. Every student is different so every session should adapt to that.

Broadly speaking I saw two kinds of students: those that wanted to get results on screen as soon as possible without worrying about the specifics, and those who wanted to know why something was happening and how it worked. In the first instance, you get to a result as quickly as possible and then try to work backwards to figure out what’s going on. In the second instance, you build up the groundwork of knowledge and then apply it to get results.

Both are equally valid approaches. The only “wrong” approach as a coach is to try to apply one method to someone who’d rather learn the other way.

Personally, I always enjoyed the groundwork-laying of the second approach. But it comes with challenges. Because the results aren’t yet visible, you have to do extra work to convey why the theory matters. As a coach, you need to express infectious enthusiasm.

Think about the best teachers you had in school. I’m betting they displayed infectious enthusiasm for the subject matter.

The other evergreen piece of advice is to show, don’t tell. Or at the very least, intersperse your telling with plenty of showing.

Bret Victor demonstrates this when he presents scientific communication as sequential art:

This page presents a scientific paper that has been redesigned as a sequence of illustrations with captions. This comic-like format, with tightly-coupled pictures and prose, allows the author to depict and describe simultaneously — show and tell.

It works remarkably well. I remember how well it worked when Google first launched their Chrome web browser. They released a 40-page comic book illustrated by Scott McCloud. There is no way I would’ve read a document that long about how browser engines work, but I read that comic cover to cover.

This visual introduction to machine learning is another great example of simultaneous showing and telling.

So showing augments telling. But interactivity can augment showing.

Here are some great examples of interactive explainers:

Lea describes what can happen when too much theory comes before practice:

Observing my daughter’s second ever piano lesson today made me realize how this principle extends to education and most other kinds of knowledge transfer (writing, presentations, etc.). Her (generally wonderful) teacher spent 40 minutes teaching her notation, longer and shorter notes, practicing drawing clefs, etc. Despite his playful demeanor and her general interest in the subject, she was clearly distracted by the end of it.

It’s easy to dismiss this as a 5 year old’s short attention span, but I could tell what was going on: she did not understand why these were useful, nor how they connect to her end goal, which is to play music.

The codebar website has some excellent advice for coaches, like:

  • Do not take over the keyboard! This can be off-putting and scary.
  • Encourage the students to type and not copy paste.
  • Explain that there are no bad questions.
  • Explain to students that it’s OK to make mistakes.
  • Assume that anyone you’re teaching has no knowledge but infinite intelligence.

Notice how so much of the advice focuses on getting the students to do things, rather than have them passively sit and absorb what the coach has to say.

Lea also gives some great advice:

  1. Always explain why something is useful. Yes, even when it’s obvious to you.
  2. Minimize the amount of knowledge you convey before the next opportunity to practice it. For non-interactive forms of knowledge transfer (e.g. a book), this may mean showing an example, whereas for interactive ones it could mean giving the student a small exercise or task.
  3. Prefer explaining in context rather than explaining upfront.

It’s interesting that Lea highlights the advantage of interactive media like websites over inert media like books. The canonical fictional example of an interactive explainer is the Young Lady’s Illustrated Primer in Neal Stephenson’s novel The Diamond Age. Andy Matuschak describes its appeal:

When it wants to introduce a conceptual topic, it begins with concrete hands-on projects: Turing machines, microeconomics, and mitosis are presented through binary-coding iron chains, the cipher’s market, and Nell’s carrot garden. Then the Primer introduces extra explanation just-in-time, as necessary.

That’s not how learning usually works in these domains. Abstract topics often demand that we start with some necessary theoretical background; only then can we deeply engage with examples and applications. With the Primer, though, Nell consistently begins each concept by exploring concrete instances with real meaning to her. Then, once she’s built a personal connection and some intuition, she moves into abstraction, developing a fuller theoretical grasp through the Primer’s embedded books.

(Andy goes on to warn of the dangers of copying the Primer too closely. Its tricks verge on gamification, and its ultimate purpose isn’t purely to educate. There’s a cautionary tale there about the power dynamics in any teacher/student relationship.)

There’s kind of a priority of constituencies when it comes to teaching:

Consider interactivity over showing over telling.

Thinking back on all the talks I’ve given, I start to wonder if I’ve been doing too much telling and showing, but not nearly enough interacting.

Then again, I think that talks aren’t quite the same as hands-on workshops. I think of giving a talk as being more like a documentarian. You need to craft a compelling narrative, and illustrate what you’re saying as much as possible, but it’s not necessarily the right arena for interactivity.

That’s partly a matter of scale. It’s hard to be interactive with every person in a large audience. Marcin managed to do it but that’s very much the exception.

Workshops are a different matter though. When I’m recruiting hosts for UX London workshops I always encourage them to be as hands-on as possible. A workshop should not be an extended talk. There should be more exercises than talking. And wherever possible those exercises should be tactile, ideally not sitting in front of a computer.

My own approach to workshops has changed over the years. I used to prepare a book’s worth of material to have on hand, either as one giant slide deck or multiple decks. But I began to realise that the best workshops are the ones where the attendees guide the flow, not me.

So now I show up to a full-day workshop with no slides. But I’m not unprepared. I’ve got decades of experience (and links) to apply during the course of the day. It’s just that instead of trying to anticipate which bits of knowledge I’m going to need to convey, I apply them in a just-in-time manner as and when they’re needed. It’s kind of scary, but as long as there’s a whiteboard to hand, or some other way to illustrate what I’m telling, it works out great.

Codebar Brighton

I went to codebar Brighton yesterday evening. I hadn’t been in quite a while, but this was a special occasion: a celebration of codebar Brighton’s tenth anniversary!

The Brighton chapter of codebar was the second one ever, founded six months after the initial London chapter. There are now 33 chapters all around the world.

Clearleft played host to that first ever codebar in Brighton. We had already been hosting local meetups like Async in our downstairs event space, so we were up for it when Rosa, Dot, and Ryan asked about having codebar happen there.

In fact, the first three Brighton codebars were all at 68 Middle Street. Then other places agreed to play host and it moved to a rota system, with the Clearleft HQ as just one of the many Brighton venues.

With ten years of perspective, it’s quite amazing to see how many people went from learning to code in the evenings, to getting jobs in web development, and becoming codebar coaches themselves. It’s a really wonderful community.

Over the years the baton of organising codebar has been passed on to a succession of fantastic people. These people are my heroes.

It worked out well for Clearleft too. Thanks to codebar, we hired Charlotte. Later we hired Cassie. And it was thanks to codebar that I first met Amber.

Codebar Brighton has been very, very good to me. Here’s to the next ten years!

That was UX London 2024

UX London 2024 is done …and it was magnificent!

It’s always weird when an event like this moves from being something in the future to something in the past. I’ve spent the year so far fixated on getting the right line-up, getting the word out, and nervously watching the ticket sales (for some reason a lot of people left it until pretty late in the day to secure their spots—not good for my heart!). For months, then weeks, then days, this thing was coming towards me. Then it was done. Now it’s behind me. It feels strange.

I’ve spent the past few days decompressing and thinking back on the event. My initial impression of it has solidified with the addition of some rumination—it was really, really good! The best yet.

I wish I could take the credit for that, but it was all down to the fantastic speakers and my wonderful colleagues who kept things moving flawlessly. All I had to do was get up on stage and introduce the speakers. Easy peasy.

I will say that I am very proud of the line-up I put together. I had a nice mix of well-known voices alongside newcomers.

With some of the speakers, I knew that they’d deliver the goods. I didn’t spend any time fretting over whether people like Emma Boulton, Tom Kerwin or Ben Sauer would be great. I never asked myself whether Brad Frost would have valuable insights into design systems. I mean, does the pope shit in the woods?

But what really blew me away were the people I didn’t know. I hadn’t even met Clarissa Gardner or Benaz Irani before UX London. They’re not exactly fixtures on the conference circuit …yet. They should be. Seriously, I go to a lot of events, and I see a lot of talks, so I don’t offer my praise lightly. Their talks were great!

There were numerous times during UX London 2024 when I thought “More people need to see this!” More people need to see Benaz’s superb talk on the designer alter-ego. More people need to see John’s superb presentation—he put a ton of work into it and it really paid off.

And everyone needs to hear Harry’s blistering call-to-arms. His presentation was brilliant and much-needed. Oh, captain, my captain!

Oh, and needless to say, the closing keynotes on each day were just perfect. Rama, Matt, and Maggie bestowed so much great brain food, it was almost like a mini dConstruct.

I’m so grateful to all the speakers for really bringing their A game. I’m grateful to all my colleagues, especially Louise, who did all the hard work behind the scenes. And I’m really grateful to everyone who came and enjoyed UX London 2024.

Thank you.

Web App install API

My bug report on Apple’s websites-in-the-dock feature on desktop has me thinking about how starkly different it is on mobile.

On iOS if you want to add a website to your home screen, good luck. The option is buried within the “share” menu.

First off, it makes no sense that adding something to your homescreen counts as sharing. Secondly, how is anybody supposed to know that unless they’re explicitly told?

It’s a similar situation on Android. In theory you can prompt the user to install a progressive web app using the botched BeforeInstallPromptEvent. In practice it’s a mess. What it actually does is defer the installation prompt so you can offer it at a more suitable time. But it only works if the browser was going to offer an installation prompt anyway.
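
To make that concrete, here’s a minimal sketch of the usual beforeinstallprompt dance: stash the event, then re-use it later from your own UI. The install button is hypothetical markup; the event itself is real but Chromium-only.

let deferredPrompt = null;

window.addEventListener('beforeinstallprompt', (event) => {
  // Stop the browser's own prompt and stash the event for later.
  event.preventDefault();
  deferredPrompt = event;
  // Reveal a hypothetical install button in the page's own UI.
  document.querySelector('#install').hidden = false;
});

document.querySelector('#install').addEventListener('click', async () => {
  if (!deferredPrompt) return;
  // Re-use the stashed event to show the prompt at a time of our choosing.
  deferredPrompt.prompt();
  const { outcome } = await deferredPrompt.userChoice;
  console.log('Install prompt outcome:', outcome);
  deferredPrompt = null;
});

And none of that code runs at all unless the browser had already decided to offer an installation prompt in the first place.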

When does Chrome on Android decide to offer the installation prompt? It’s a mix of required criteria—a web app manifest, some icons—and an algorithmic spell determined by the user’s engagement.

Other browser makers don’t agree with this arbitrary set of criteria. They quite rightly say that a user should be able to add any website to their home screen if they want to.

What we really need is an installation API: a way to programmatically invoke the add-to-homescreen flow.

Now, I know what you’re going to say. The security and UX implications would be dire. But this should obviously be like geolocation or notifications, only available in secure contexts and gated by user interaction.

Think of it like adding something to the clipboard: it’s something the user can do manually, but the API offers a way to do it programmatically without opening it up to abuse.
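
Here’s roughly what that looks like with the Clipboard API today, a sketch assuming a copy button in the page. The call only works in secure contexts and, in practice, in response to a user gesture. A hypothetical install API could be gated in exactly the same way.

document.querySelector('#copy').addEventListener('click', async () => {
  try {
    // Secure context plus user interaction: no silent clipboard stuffing.
    await navigator.clipboard.writeText(location.href);
  } catch (error) {
    // The browser (or the user) said no, so fail quietly.
    console.warn('Copy failed:', error);
  }
});

// A hypothetical equivalent for installation might look like:
// button.addEventListener('click', () => navigator.install());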

(I’d really love it if this API also had a declarative equivalent, much like I want button type="share" for the Web Share API. How about button type="install"?)

People expect this to already exist.

The beforeinstallprompt flow is an absolute mess. Users deserve better.

Space dock

Apple announced some stuff about artificial insemination at their WorldWide Developer Conference, none of which interests me one whit. But we did get a twitch of the webkit curtains to let us know what’s coming in Safari. That does interest me.

I’m really pleased to see that on desktop, websites that have been added to the dock will be able to intercept links for that domain:

Now, when a user clicks a link, if it matches the scope of a web app that the user has added to their Dock, that link will open in the web app instead of their default web browser.

Excellent! This means that if I click on a link to thesession.org from, say, my Mastodon site-in-the-dock, it will open in The Session site-in-the-dock. Make sure you’ve got the scope property set in your web app manifest.

I have a few different sites added to my dock: The Session, Mastodon, Google Calendar. Sure beats the bloat of Electron apps.

I have encountered a small bug. I’ll describe it here because I have no idea where to file it.

It’s to do with Spaces, Apple’s desktop management thingy. Maybe they don’t call it Spaces anymore. Maybe it’s called Mission Control now. Or Stage Manager. I can’t keep track.

Anyway, here are the steps to reproduce:

  1. In Safari on Mac, go to a website like adactio.com
  2. From either the File menu or the share icon, select Add to dock.
  3. Click on the website’s icon in the dock to open it.
  4. Using Apple’s desktop management (Spaces?) available through the F3 key, drag that window to a desktop other than desktop 1.
  5. Right click on the site’s icon in the dock and select Options, then Assign To, then This Desktop.
  6. Quit the app/website.
  7. Return to desktop 1.

Expected behaviour: when I click on the icon in the dock to open the site, it will open in the desktop that it has been assigned to.

Observed behaviour: focus moves to the desktop that the site has been assigned to, but it actually opens in desktop 1.

If someone from Apple is reading, I hope that’s useful.

On the one hand, I hope this isn’t one of those bugs that only I’m experiencing because then I’ll feel foolish. On the other hand, I hope this is one of those bugs that only I’m experiencing because then others don’t have to put up with the buggy behaviour.

CSS Day 2024

My stint as one of the hosts of CSS Day went very well indeed. I enjoyed myself and people seemed to like the cut of my jib.

During the event there was a real buzz on Mastodon, which was heartening to see. I was beginning to worry that hashtagging events was going to be collateral damage from Elongate, but there was plenty of conference-induced FOMO to be experienced on the fediverse.

The event itself was, as always, excellent. Both in terms of content and organisation.

Some themes emerged during CSS Day, which I always love to see. These emergent properties are partly down to curation and partly down to serendipity.

The last few years of CSS Day have felt like getting a firehose of astonishing new features being added to the language. There was still plenty of cutting-edge stuff this year—masonry! anchor positioning!—but there was also a feeling of consolidation, asking how to get all this amazing new stuff into our workflows.

Matthias’s opening talk on day one and Stephen’s closing talk on the same day complemented one another perfectly. Both managed to inspire while looking into the nitty-gritty practicalities of the web design process.

It was, astoundingly, Matthias’s first ever conference talk. I have no doubt it won’t be the last—it was great!

I gave Stephen a good-natured roast in my introduction, partly because it was his birthday, partly because we’re old friends, but mostly because it was enjoyable for me to watch him squirm. Of course his talk was, as always, superb. Don’t tell him, but he might be one of my favourite speakers.

The topic of graphic design tools came up more than once. It’s interesting to see how the issues with them have changed. It used to be that design tools—Photoshop, Sketch, Figma—were frustrating because they were writing cheques that CSS couldn’t cash. Now the frustration is the exact opposite. Our graphic design tools aren’t capable of the kind of fluid declarative design we can now accomplish in web browsers.

But the biggest rift remains not with tools or technologies, but with people and mindsets. Our tools can reinforce mindsets but the real divide happens in how different people approach CSS.

Both Josh and Kevin get to the heart of this in their tremendous tutorials, and that was reflected in their talks. They showed the difference between having the bare minimum understanding of CSS in order to get something done as quickly as possible, and truly understanding how CSS works in order to open up a world of possibilities.

For people in the first category, Sarah Dayan was there to sing the praises of utility-first CSS AKA atomic CSS. I commend her bravery!

During the Q&A, I restrained myself from being too Paxmanish. But I did have l’esprit d’escalier afterwards when I realised that the entire talk—and all the answers afterwards—depended on two mutually incompatible claims:

  1. The great thing about atomic CSS is that it’s a constrained vocabulary so your team has to conform, and
  2. The other great thing about it is that it’s utility-first, not utility-only so you can break out of it and use regular CSS if you want.

Insert .gif of character from The Office looking to camera.

Most of the questions coming in during the Q&A reflected my own take: how about we use utility classes for some things, but not all things. Seems sensible.

Anyway, regardless of what I or anyone else thinks about the substance of what Sarah was saying, there was no denying that it was a great presentation. They were all great presentations. That’s unusual, and I say that as a conference organiser as well as an attendee. Everyone brings their A-game to CSS Day.

Mind you, it is exhausting. I say it every year, but it always feels like one talk too many. Not that any individual talk wasn’t good, but the sheer onslaught of deep dives into the innards of CSS has my brain exploding before the day is done.

A highlight for me was getting to introduce Fantasai’s talk on the design principles of CSS, which was right up my alley. I don’t think most people realise just how much we owe her for her years of work on standards. The web would be in a worse place without the Herculean work she’s done behind the scenes.

Another highlight was getting to see some of the students I met back in March. They were showing some of their excellent work during the breaks. I find what they’re doing just as inspiring as the speakers on stage.

In fact, when I was filling in the post-conference feedback form, there was a question: “Who would you like to see speak at CSS Day next year?” I was racking my brains because everyone I could immediately think of has already spoken at some point. So I wrote, “It would be great to see some of those students speaking about their work.”

I think it would be genuinely fascinating to get their perspective on what we consider modern CSS, which to them is just CSS.

Either way, I’ll be back next year for sure.

It’s funny, but usually when a conference is described as “inspiring” it’s because it’s tackling big galaxy-brain questions. But CSS Day is as nitty-gritty as it gets and I found it truly inspiring. Like, I couldn’t wait to open up my laptop and start writing some CSS. That kind of inspiring.

Hosting

I haven’t spoken at any conferences so far this year, and I don’t have any upcoming talks. That feels weird. I’m getting kind of antsy to give a talk.

I suspect my next talk will have something to do with HTML web components. If you’re organising an event and that sounds interesting to you, give me a shout.

But even though I’m not giving a conference talk this year, I’m doing a fair bit of hosting. There was the lovely Patterns Day back in March. And this week I’m off to Amsterdam to be one of the hosts of CSS Day. As always, I’m very much looking forward to that event.

Once that’s done, it’ll be time for the biggie. UX London is just two weeks away—squee!

There are still tickets available. If you haven’t got yours yet, I highly recommend getting it before midnight on Friday—that’s when the regular pricing ends. After that, it’ll be last-chance passes only.

Browser support

There was a discussion at Clearleft recently about browser support. Rich has more details but the gist of it is that, even though we were confident that we had a good approach to browser support, we hadn’t written it down anywhere. Time to fix that.

This is something I had been thinking about recently anyway—see my post about Baseline and progressive enhancement—so it didn’t take too long to put together a document explaining our approach.

You can find it at browsersupport.clearleft.com

We’re not just making it public. We’re releasing it under a Creative Commons attribution license. You can copy this browser-support policy verbatim, you can tweak it, you can change it, you can do what you like. As long as you include a credit to Clearleft, you’re all set.

I think this browser-support policy makes a lot of sense. It certainly beats trying to pin browser support to specific browsers or version numbers:

We don’t base our browser support on specific browser names and numbers. Instead, our support policy is based on the capabilities of those browsers.
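
In practice, capability-based support means feature-testing rather than sniffing browser names or version numbers. A quick sketch (the specific features and file names here are just examples):

// Test for the capability, not the browser.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}

// The same idea works for CSS features from JavaScript.
if (CSS.supports('container-type', 'inline-size')) {
  document.documentElement.classList.add('supports-container-queries');
}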

The more organisations adopt this approach, the better it is for everyone. Hence the liberal licensing.

So next time your boss or your client is asking what your official browser-support policy is, feel free to use browsersupport.clearleft.com

Reading patterns

I can’t resist bookshelves.

If I’m shown into someone’s home and left alone while the host goes and does something, you can bet I’m going to peruse their bookshelves.

I don’t know why. Maybe I’m looking for points of commonality. “Oh, you read this book too?” Or maybe it’s the opposite, and I’m looking for something new and—pardon the pun—novel. “Oh, that book sounds interesting!”

I like it when people have a kind of bookshelf on their website.

Here’s my ad-hoc bookshelf, which is manually mirrored on Bookshop.org.

I like having an overview of what I’ve been reading. One of the reasons I started keeping track was so that I could try to have a nice balance of fiction and non-fiction.

I always get a little sad when I see someone’s online reading list and it only consists of non-fiction books that are deemed somehow useful, rather than simply pleasurable. Cameron wrote about when he used to do this:

I felt like reading needed to have some clear purpose. The topic of any book I read needed to directly contribute to being an “adult”. So I started to read non-fiction – stuff that reflected what was happening in my career. Books on management, books on finance, books on economics, books on the creative process. And for many years this diet of dry, literary roughage felt wholesome … but joyless. Each passage I highlighted in yellow marker was a point for the scoreboard, not a memory to be treasured.

Now he’s redressing the imbalance and rediscovering the unique joy of entering other worlds by reading fiction.

For a while, I forced myself to have perfect balance. I didn’t allow myself to read two non-fiction books in a row or two fiction books in a row. I made myself alternate between the two.

I’ve let that lapse now. I’m reading more fiction than non-fiction, and I’m okay with that.

But I still like looking back on what I’ve been reading and seeing patterns emerge. Like there’s a clear boom in the last year of reading retellings of the Homeric epics (all kicked off by reading the brilliant Circe by Madeline Miller).

I also keep an eye out for a different kind of imbalance. I want to make sure that I’m not just reading books by people like me—middle-aged heterosexual white dudes.

You may think that a balanced reading diet would emerge naturally, but I’m not so sure. Here’s an online bookshelf. Here’s another online bookshelf. Plenty of good stuff in both. But do either of those men realise that they’ve gone more than a year without reading a book written by a woman?

It’s almost certainly not a conscious decision. It’s just that in the society we live in, the default mode tends towards what’s been historically privileged (see also films, music, and painting).

It’s like driving in a car that subtly pulls to one side. Unless you compensate for it, you’ll end up in the ditch without even realising.

I feel bad for calling attention to those two reading lists. It feels very judgy of me. And reading is something that doesn’t deserve judgement. Any reading is good reading.

Mind you, maybe being judgemental is exactly why I’m drawn to people’s bookshelves in the first place. It might be that I’m subconsciously looking for compatibility signals. If I see a bookshelf filled with books I like, I’m bound to feel predisposed to like that person. And if I see a bookshelf dominated by Jordan Peterson and Ayn Rand, I’m probably going to pass judgement on the reader, even if I know I shouldn’t.

Applying the four principles of accessibility

Web Content Accessibility Guidelines—or WCAG—looks very daunting. It’s a lot to take in. It’s kind of overwhelming. It’s hard to know where to start.

I recommend taking a deep breath and focusing on the four principles of accessibility. Together they spell out the cutesy acronym POUR:

  1. Perceivable
  2. Operable
  3. Understandable
  4. Robust

A lot of work has gone into distilling WCAG down to these four guidelines. Here’s how I apply them in my work…

Perceivable

I interpret this as:

Content will be legible, regardless of how it is accessed.

For example:

  • The contrast between background and foreground colours will meet the ratios defined in WCAG 2.
  • Content will be grouped into semantically-sensible HTML regions such as navigation, main, footer, etc.

Operable

I interpret this as:

Core functionality will be available, regardless of how it is accessed.

For example:

  • I will ensure that interactive controls such as links and form inputs will be navigable with a keyboard.
  • Every form control will be labelled, ideally with a visible label.

Understandable

I interpret this as:

Content will make sense, regardless of how it is accessed.

For example:

  • Images will have meaningful alternative text.
  • I will make sensible use of heading levels.

This is where it starts to get quite collaborative. Working at an agency, there will be some parts of website creation and maintenance that will require ongoing accessibility knowledge even when our work is finished.

For example:

  • Images uploaded through a content management system will need sensible alternative text.
  • Articles uploaded through a content management system will need sensible heading levels.

Robust

I interpret this as:

Content and core functionality will still work, regardless of how it is accessed.

For example:

  • Drop-down controls will use the HTML select element rather than a more fragile imitation.
  • I will only use JavaScript to provide functionality that isn’t possible with HTML and CSS alone.

If you’re applying a mindset of progressive enhancement, this part comes for free. If you take a different approach, you’re going to have a bad time.
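
As one small sketch of that mindset: a share button that only gets added when the browser can actually share, so nobody is given a control that does nothing. The main element and button text are placeholder details.

// Only enhance when the capability exists; otherwise the page stays as it was.
if (navigator.share) {
  const button = document.createElement('button');
  button.type = 'button';
  button.textContent = 'Share this page';
  button.addEventListener('click', () => {
    navigator.share({ title: document.title, url: location.href })
      .catch(() => { /* the user changed their mind; no harm done */ });
  });
  document.querySelector('main').append(button);
}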

Taken together, these four guidelines will get you very far without having to dive too deeply into the rest of WCAG.

Another speaker for UX London

UX London is just three weeks away! If you haven’t got your ticket yet, dally not.

There’s a last-minute addition to the line-up: Peter Boersma.

Peter is kindly stepping into the slot that Kara Kane was going to be occupying. Alas, since a snap general election was recently announced, Kara isn’t able to give her talk. There’s an abundance of caution in the comms from gov.uk in this pre-election period.

It’s a shame that Kara won’t be able to speak this time around, but it’s great that we’ve got Peter!

Peter’s talk is perfect for day three. Remember, that’s the day focused on design ops and design systems. Well, Peter lives and breathes design ops. He’ll show you why you should maintain a roadmap for design ops, and work with others to get the initiatives on it done.

You can get a ticket for an individual day of talks and workshops, or go for the best-value option and come for all three days. See you there!

Trust

In their rush to cram in “AI” “features”, it seems to me that many companies don’t actually understand why people use their products.

Google is acting as though its greatest asset is its search engine. Same with Bing.

Mozilla Developer Network is acting as though its greatest asset is its documentation. Same with Stack Overflow.

But their greatest asset is actually trust.

If I use a search engine I need to be able to trust that the filtering is good. If I look up documentation I need to trust that the information is good. I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

“But”, the apologists will respond, “the results are mostly correct! The documentation is mostly true!”

Sure, but as Terence puts it:

The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as “that klutzy intern we had to fire.”

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

I am honestly astonished that so many companies don’t seem to realise what they’re destroying.

InstAI

If you use Instagram, there may be a message buried in your notifications. It begins:

We’re getting ready to expand our AI at Meta experiences to your region.

Fuck that. Here’s the important bit:

To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied going forwards.

Follow that link and fill in the form. For the field labelled “Please tell us how this processing impacts you” I wrote:

It’s fucking rude.

That did the trick. I got an email saying:

We’ve reviewed your request and will honor your objection.

Mind you, there’s still this:

We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our products and services.

Speculation rules and fears

After I wrote positively about the speculation rules API I got an email from David Cizek with some legitimate concerns. He said:

I think that this kind of feature is not good, because someone else (web publisher) decides that I (my connection, browser, device) have to do work that very often is not needed. All that blurred by blackbox algorithm in the browser.

That’s fair. My hope is that the user will indeed get more say, whether that’s at the level of the browser or the operating system. I’m thinking of a prefers-reduced-data setting, much like prefers-color-scheme or prefers-reduced-motion.

But this issue isn’t something new with speculation rules. We’ve already got service workers, which allow the site author to unilaterally declare that a bunch of pages should be downloaded.

I’m doing that for Resilient Web Design—when you visit the home page, a service worker downloads the whole site. I can justify that decision to myself because the entire site is still smaller in size than one article from Wired or the New York Times. But still, is it right that I get to make that call?
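
For what it’s worth, here’s a rough sketch of that kind of pre-caching in a service worker, with one concession to the user: skip the bulk download when the (Chromium-only) Save-Data hint is switched on. A hypothetical prefers-reduced-data preference could gate it in the same way. The cache name and URLs are placeholders.

// serviceworker.js
const PAGES = ['/', '/introduction/', '/chapter1/', '/chapter2/'];

self.addEventListener('install', (event) => {
  // Respect a data-saving preference where the browser exposes one.
  if (navigator.connection && navigator.connection.saveData) {
    return;
  }
  event.waitUntil(
    caches.open('pages-v1')
      .then((cache) => cache.addAll(PAGES))
  );
});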

So I’m very much in favour of browsers acting as true user agents—doing what’s best for the user, even in situations where that conflicts with the wishes of a site owner.

Going back to speculation rules, David asked:

Do we really need this kind of (easily turned to evil) enhancement in the current state of (web) affairs?

That question could be asked of many web technologies.

There’s always going to be a tension with any powerful browser feature. The more power it provides, the more it can be abused. Animations, service workers, speculation rules—these are all things that can be used to improve websites or they can be abused to do things the user never asked for.

Or take the elephant in the room: JavaScript.

Right now, a site owner can link to a JavaScript file that’s tens of megabytes in size, and the browser has no alternative but to download it. I’d love it if users could specify a limit. I’d love it even more if browsers shipped with a default limit, especially if that limit is related to the device and network.

I don’t think speculation rules will be abused nearly as much as client-side JavaScript is already abused.

Fluid

I really like the newly-launched website for this year’s XOXO festival. I like that the design is pretty much the same for really small screens, really large screens, and everything in between because everything just scales. It’s simultaneously a flyer, a poster, and a billboard.

Trys has written about the websites he’s noticed using fluid type and spacing: There it is again, that fluid feeling.

I know what he means. I get a similar feeling when I’m on a site that adjusts fluidly to any browser window—it feels very …webby.

I’ve had this feeling before.

When responsive design was on the rise, it was a real treat to come across a responsive site. After a while, it stopped being remarkable. Now if I come across a site that isn’t responsive, it feels broken.

And now it’s a treat to come across a site that uses fluid type. But how long will it be until it feels unremarkable? How long will it be until a website that doesn’t use fluid type feels broken?

Speculation rules

There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.

Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).

The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned link rel="prerender".

Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a script element with a type value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.
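
One way around that is to only inject the inline version in browsers that understand it; HTMLScriptElement.supports() can tell you. A sketch, using the same rule I use below:

// Only create the speculation rules script in browsers that support it.
if (HTMLScriptElement.supports && HTMLScriptElement.supports('speculationrules')) {
  const script = document.createElement('script');
  script.type = 'speculationrules';
  script.textContent = JSON.stringify({
    prerender: [{
      where: { href_matches: '/*' },
      eagerness: 'moderate'
    }]
  });
  document.head.append(script);
}

But that still means sending some JavaScript to every browser just to decide whether to add the rules.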

That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.

I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like srcset than source: you give the browser some options, but ultimately you can’t force it to do anything.

I’ve implemented this over on The Session. There’s a JSON file called speculationrules.json with the simplest of suggestions:

{
  "prerender": [{
    "where": {
        "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}

The eagerness value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).

I still need to point to that JSON file from my HTML. Usually this would be done with something like a link element, but for this particular API, I can send a response header instead:

Speculation-Rules: "/speculationrules.json"

I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.

Here’s the PHP I added to send that header:

header('Speculation-Rules: "/speculationrules.json"');

There’s one extra thing I had to do. The JSON file needs to be served with a mime-type of “application/speculationrules+json”. Here’s how I set that up in the .conf file for The Session on Apache:

<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-type application/speculationrules+json
  </FilesMatch>
</IfModule>

A bit of a faff, that.

You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.

Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to gauge how much of a practical difference it makes in this case, but it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.

Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!

Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re idempotent. For everything else, we’ve got forms.

Baseline progressive enhancement

Support for view transitions for regular websites (as opposed to single-page apps) will ship in Chrome 126. As someone who’s a big fan—to put it mildly—I am very happy about this!

Hopefully Firefox and Safari won’t be too far behind. But it’s still worth adding view transitions to your website even if not every browser supports them. They’re the perfect example of a progressive enhancement.

The browsers that don’t yet support view transitions won’t be harmed in any way if you give them the CSS for view transitions. They’ll just ignore it. For users of those browsers, nothing changes.

Then when those browsers do ship support for view transitions, your website automatically gets an upgrade for those users. Code you’ve already written starts working from one day to the next.

Don’t wait, is what I’m saying.

I really like the Baseline initiative as a way to track browser support. It’s great to see it in use on MDN and Can I Use. It’s very handy having a glanceable indication of which browser features are newly available and which are widely available.

But…

Not all browser features work the same way. For features that work as progressive enhancements you don’t need to wait for them to be widely available.

Service workers. Preference queries. View transitions.

If a browser doesn’t support one of those features, that’s fine. Your website won’t break in that browser.

Now that’s not true of all browser features, particularly some JavaScript APIs. If a feature is critical for your site to function then you definitely want to wait until it’s widely supported.

Baseline won’t tell you the difference between those two different kinds of features.

I don’t want Baseline to get too complicated. Like I said, I really like how it’s nice and glanceable right now. But it would be nice if there were some indication that a newly-available feature is a progressive enhancement.

For now it’s up to us to make that distinction. So don’t fall into the trap of thinking that just because a feature isn’t listed as widely-available you can’t use it yet.

Really you want to ask two questions:

  1. How widely available is this feature?
  2. Can this feature be used as a progressive enhancement?

If Baseline tells you that the answer to the first question is “newly-available”, move on to the second question. If the answer to that is “no, it can’t be used as a progressive enhancement”, don’t ship that feature in production just yet.

But if the answer to that second question is “hell yeah, it’s a progressive enhancement!” then go for it, regardless of the answer to the first question.

Y’know, there’s a real irony in a common misunderstanding around progressive enhancement: some people seem to think it’s about not being able to use advanced browser features. In reality it’s the opposite. Progressive enhancement allows you to use advanced browser features even before they’re widely supported.

Responsibility

My colleague Chris has written a terrific post over on the Clearleft blog: Is the planet the missing member of your project team?

Rather than hand-wringing and finger-wagging, it gets down to some practical steps that you—we—can take on every project.

Chris finishes by asking:

Let me know how you design with the environment in mind. What practical advice would you suggest?

Well, here’s something that I keep coming up against…

Chris shows that the environment can be part of project management, specifically the RACI methodology:

We list who is responsible, accountable, consulted, and informed within the project. It’s a simple exercise but the clarity is useful for identifying what expertise and input we should seek from the named individuals.

Having the planet be a proactive partner in your project ensures its needs are considered.

Whenever responsibilities are being assigned there are some things that inevitably fall through the cracks. One I’ve seen over and over again is responsibility for third-party scripts.

On the face of it this seems like another responsibility for developers. We’re talking about code here, right?

But in my experience it is never the developers adding “beacons” and other third-party embedded scripts.

Chris rightly points out:

Development decisions, visual design choices, content approach, and product strategy all contribute to the environmental impact of your website.

But what about sales and marketing? Often they’re the ones who’ll drop in a third-party script to track user journeys. That’s understandable. That’s kind of their job.

Dropping in one line of JavaScript seems like a victimless crime. It’s just one small script, right? But JavaScript can import more JavaScript. Tools like Request Map Generator can show just how much destruction third-party JavaScript can wreak:

You pop in a URL, it fetches the page and maps out all the subsequent requests in a nifty interactive diagram of circles, showing how many requests third-party scripts are themselves generating. I’ve found it to be a very effective way of showing the impact of third-party scripts to people who aren’t interested in looking at waterfall diagrams.

Just to be clear, the people adding third-party scripts to websites usually aren’t doing so maliciously. They often don’t realise the negative effect the scripts will have on performance and the environment.

As is so often the case, this isn’t a technical problem. At root it’s about understanding people’s needs (like “I need a way to see what pages are converting!”) and finding a way to meet those needs without negatively impacting the planet. A good open-minded discussion can go a long way.

So I echo Chris’s call to think about environmental impacts from the very start of a project. Establish early on who will have the ability to add third-party scripts to the site. Do all of those people understand the responsibility that gives them?

I saw this lack of foresight in action on a project recently. The front-end development was going really well and the site was going to be exceptionally performant: green Lighthouse scores across the board. But when the site went live it had tracking scripts. That meant that users needed to consent to being tracked. That meant adding another third-party script to generate a consent banner. It completely tanked the Lighthouse scores.

I’m sure the people who added the tracking scripts and consent banners thought they had no choice. But there are alternatives. There are ways to get the data you need without the intrusive surveillance and performance-wrecking JavaScript.

The problem is that it’s not the norm. “Everyone else is doing it” was the justification for Flash intros two decades ago and it’s the justification for enshittification via third-party scripts now.

It doesn’t have to be this way.