Journal tags: standards

Web App install API

My bug report on Apple’s websites-in-the-dock feature on desktop has me thinking about how starkly different it is on mobile.

On iOS if you want to add a website to your home screen, good luck. The option is buried within the “share” menu.

First off, it makes no sense that adding something to your homescreen counts as sharing. Secondly, how is anybody supposed to know that unless they’re explicitly told?

It’s a similar situation on Android. In theory you can prompt the user to install a progressive web app using the botched BeforeInstallPromptEvent. In practice it’s a mess. What it actually does is defer the installation prompt so you can offer it at a more suitable time. But it only works if the browser was going to offer an installation prompt anyway.
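
For the record, here’s roughly what that deferral dance looks like (a sketch, with installButton standing in for a hypothetical button of your own making):

let deferredPrompt;
window.addEventListener('beforeinstallprompt', (event) => {
  // Stop the browser from showing its own mini-infobar
  event.preventDefault();
  // Stash the event so it can be triggered later
  deferredPrompt = event;
});

// Later, in response to the user pressing your (hypothetical) install button
installButton.addEventListener('click', async () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();
  await deferredPrompt.userChoice;
  deferredPrompt = null;
});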

When does Chrome on Android decide to offer the installation prompt? It’s a mix of required criteria—a web app manifest, some icons—and an algorithmic spell determined by the user’s engagement.
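
The required criteria, at least, are documented. A minimal web app manifest that satisfies them might look something like this (the name and icon paths are placeholders):

{
  "name": "Example App",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}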

Other browser makers don’t agree with this arbitrary set of criteria. They quite rightly say that a user should be able to add any website to their home screen if they want to.

What we really need is an installation API: a way to programmatically invoke the add-to-homescreen flow.

Now, I know what you’re going to say. The security and UX implications would be dire. But this should obviously be like geolocation or notifications, only available in secure contexts and gated by user interaction.

Think of it like adding something to the clipboard: it’s something the user can do manually, but the API offers a way to do it programmatically without opening it up to abuse.
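
No such API exists today, so this is wishful thinking, but the shape of it might be something like this (navigator.install is entirely hypothetical):

installButton.addEventListener('click', async () => {
  // Hypothetical API: only available in secure contexts,
  // and only callable in response to a user gesture
  if ('install' in navigator) {
    await navigator.install();
  }
});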

(I’d really love it if this API also had a declarative equivalent, much like I want button type="share" for the Web Share API. How about button type="install"?)

People expect this to already exist.

The beforeinstallprompt flow is an absolute mess. Users deserve better.

CSS Day 2024

My stint as one of the hosts of CSS Day went very well indeed. I enjoyed myself and people seemed to like the cut of my jib.

During the event there was a real buzz on Mastodon, which was heartening to see. I was beginning to worry that hashtagging events was going to be collateral damage from Elongate, but there was plenty of conference-induced FOMO to be experienced on the fediverse.

The event itself was, as always, excellent. Both in terms of content and organisation.

Some themes emerged during CSS Day, which I always love to see. These emergent properties are partly down to curation and partly down to serendipity.

The last few years of CSS Day have felt like getting a firehose of astonishing new features being added to the language. There was still plenty of cutting-edge stuff this year—masonry! anchor positioning!—but there was also a feeling of consolidation, asking how to get all this amazing new stuff into our workflows.

Matthias’s opening talk on day one and Stephen’s closing talk on the same day complemented one another perfectly. Both managed to inspire while looking into the nitty-gritty practicalities of the web design process.

It was, astoundingly, Matthias’s first ever conference talk. I have no doubt it won’t be the last—it was great!

I gave Stephen a good-natured roast in my introduction, partly because it was his birthday, partly because we’re old friends, but mostly because it was enjoyable for me to watch him squirm. Of course his talk was, as always, superb. Don’t tell him, but he might be one of my favourite speakers.

The topic of graphic design tools came up more than once. It’s interesting to see how the issues with them have changed. It used to be that design tools—Photoshop, Sketch, Figma—were frustrating because they were writing cheques that CSS couldn’t cash. Now the frustration is the exact opposite. Our graphic design tools aren’t capable of the kind of fluid declarative design we can now accomplish in web browsers.

But the biggest rift remains not with tools or technologies, but with people and mindsets. Our tools can reinforce mindsets but the real divide happens in how different people approach CSS.

Both Josh and Kevin get to the heart of this in their tremendous tutorials, and that was reflected in their talks. They showed the difference between having the bare minimum understanding of CSS in order to get something done as quickly as possible, and truly understanding how CSS works in order to open up a world of possibilities.

For people in the first category, Sarah Dayan was there to sing the praises of utility-first CSS AKA atomic CSS. I commend her bravery!

During the Q&A, I restrained myself from being too Paxmanish. But I did have l’esprit d’escalier afterwards when I realised that the entire talk—and all the answers afterwards—depended on two mutually incompatible claims:

  1. The great thing about atomic CSS is that it’s a constrained vocabulary so your team has to conform, and
  2. The other great thing about it is that it’s utility-first, not utility-only, so you can break out of it and use regular CSS if you want.

Insert .gif of character from The Office looking to camera.

Most of the questions coming in during the Q&A reflected my own take: how about we use utility classes for some things, but not all things. Seems sensible.

Anyway, regardless of what I or anyone else thinks about the substance of what Sarah was saying, there was no denying that it was a great presentation. They were all great presentations. That’s unusual, and I say that as a conference organiser as well as an attendee. Everyone brings their A-game to CSS Day.

Mind you, it is exhausting. I say it every year, but it always feels like one talk too many. Not that any individual talk wasn’t good, but the sheer onslaught of deep dives into the innards of CSS has my brain exploding before the day is done.

A highlight for me was getting to introduce Fantasai’s talk on the design principles of CSS, which was right up my alley. I don’t think most people realise just how much we owe her for her years of work on standards. The web would be in a worse place without the Herculean work she’s done behind the scenes.

Another highlight was getting to see some of the students I met back in March. They were showing some of their excellent work during the breaks. I find what they’re doing just as inspiring as the speakers on stage.

In fact, when I was filling in the post-conference feedback form, there was a question: “Who would you like to see speak at CSS Day next year?” I was racking my brains because everyone I could immediately think of has already spoken at some point. So I wrote, “It would be great to see some of those students speaking about their work.”

I think it would be genuinely fascinating to get their perspective on what we consider modern CSS, which to them is just CSS.

Either way, I’ll be back next year for sure.

It’s funny, but usually when a conference is described as “inspiring” it’s because it’s tackling big galaxy-brain questions. But CSS Day is as nitty-gritty as it gets and I found it truly inspiring. Like, I couldn’t wait to open up my laptop and start writing some CSS. That kind of inspiring.

Browser support

There was a discussion at Clearleft recently about browser support. Rich has more details but the gist of it is that, even though we were confident that we had a good approach to browser support, we hadn’t written it down anywhere. Time to fix that.

This is something I had been thinking about recently anyway—see my post about Baseline and progressive enhancement—so it didn’t take too long to put together a document explaining our approach.

You can find it at browsersupport.clearleft.com

We’re not just making it public. We’re releasing it under a Creative Commons attribution license. You can copy this browser-support policy verbatim, you can tweak it, you can change it, you can do what you like. As long as you include a credit to Clearleft, you’re all set.

I think this browser-support policy makes a lot of sense. It certainly beats trying to pin browser support to specific browsers or version numbers:

We don’t base our browser support on specific browser names and numbers. Instead, our support policy is based on the capabilities of those browsers.

The more organisations adopt this approach, the better it is for everyone. Hence the liberal licensing.

So next time your boss or your client is asking what your official browser-support policy is, feel free to use browsersupport.clearleft.com

Speculation rules and fears

After I wrote positively about the speculation rules API I got an email from David Cizek with some legitimate concerns. He said:

I think that this kind of feature is not good, because someone else (web publisher) decides that I (my connection, browser, device) have to do work that very often is not needed. All that blurred by blackbox algorithm in the browser.

That’s fair. My hope is that the user will indeed get more say, whether that’s at the level of the browser or the operating system. I’m thinking of a prefers-reduced-data setting, much like prefers-color-scheme or prefers-reduced-motion.
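
The query itself already exists on paper. It’s in the Media Queries Level 5 draft, though barely implemented anywhere yet:

@media (prefers-reduced-data: reduce) {
  /* swap out heavy background images, decorative fonts, and so on */
}

Browsers could factor that same preference into their speculative loading decisions.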

But this issue isn’t something new with speculation rules. We’ve already got service workers, which allow the site author to unilaterally declare that a bunch of pages should be downloaded.
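
In a service worker’s install event, that only takes a few lines. A sketch, with made-up URLs:

self.addEventListener('install', (event) => {
  event.waitUntil(
    // The cache name and URLs here are placeholders
    caches.open('pages').then((cache) => cache.addAll([
      '/',
      '/chapter1/',
      '/chapter2/'
    ]))
  );
});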

I’m doing that for Resilient Web Design—when you visit the home page, a service worker downloads the whole site. I can justify that decision to myself because the entire site is still smaller in size than one article from Wired or the New York Times. But still, is it right that I get to make that call?

So I’m very much in favour of browsers acting as true user agents—doing what’s best for the user, even in situations where that conflicts with the wishes of a site owner.

Going back to speculation rules, David asked:

Do we really need this kind of (easily turned to evil) enhancement in the current state of (web) affairs?

That question could be asked of many web technologies.

There’s always going to be a tension with any powerful browser feature. The more power it provides, the more it can be abused. Animations, service workers, speculation rules—these are all things that can be used to improve websites or they can be abused to do things the user never asked for.

Or take the elephant in the room: JavaScript.

Right now, a site owner can link to a JavaScript file that’s tens of megabytes in size, and the browser has no alternative but to download it. I’d love it if users could specify a limit. I’d love it even more if browsers shipped with a default limit, especially if that limit is related to the device and network.

I don’t think speculation rules will be abused nearly as much as client-side JavaScript is already abused.

Speculation rules

There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.

Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).

The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned link rel="prerender".

Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a script element with a type value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.

That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.

I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like srcset than source: you give the browser some options, but ultimately you can’t force it to do anything.

I’ve implemented this over on The Session. There’s a JSON file called speculationrules.json with the simplest of suggestions:

{
  "prerender": [{
    "where": {
        "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}

The eagerness value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).

I still need to point to that JSON file from my HTML. Usually this would be done with something like a link element, but for this particular API, I can send a response header instead:

Speculation-Rules: "/speculationrules.json"

I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.

Here’s the PHP I added to send that header:

header('Speculation-Rules: "/speculationrules.json"');

There’s one extra thing I had to do. The JSON file needs to be served with a mime-type of “application/speculationrules+json”. Here’s how I set that up in the .conf file for The Session on Apache:

<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-Type application/speculationrules+json
  </FilesMatch>
</IfModule>

A bit of a faff, that.

You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.

Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to gauge how much of a practical difference it makes in this case, but it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.

Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!

Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re idempotent. For everything else, we’ve got forms.
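
That said, if you do need an escape hatch while you refactor those links, the rules syntax supports exclusions. Something like this (the /logout path is hypothetical) would keep that URL out of the speculative loading:

{
  "prerender": [{
    "where": {
      "and": [
        { "href_matches": "/*" },
        { "not": { "href_matches": "/logout" } }
      ]
    },
    "eagerness": "moderate"
  }]
}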

Baseline progressive enhancement

Support for view transitions for regular websites (as opposed to single-page apps) will ship in Chrome 126. As someone who’s a big fan—to put it mildly—I am very happy about this!

Hopefully Firefox and Safari won’t be too far behind. But it’s still worth adding view transitions to your website even if not every browser supports them. They’re the perfect example of a progressive enhancement.

The browsers that don’t yet support view transitions won’t be harmed in any way if you give them the CSS for view transitions. They’ll just ignore it. For users of those browsers, nothing changes.
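
If I’ve understood the implementation correctly, the opt-in for cross-document view transitions is a single at-rule:

@view-transition {
  navigation: auto;
}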

Then when those browsers do ship support for view transitions, your website automatically gets an upgrade for those users. Code you’ve already written starts working from one day to the next.

Don’t wait, is what I’m saying.

I really like the Baseline initiative as a way to track browser support. It’s great to see it in use on MDN and Can I Use. It’s very handy having a glanceable indication of which browser features are newly available and which are widely available.

But…

Not all browser features work the same way. For features that work as progressive enhancements you don’t need to wait for them to be widely available.

Service workers. Preference queries. View transitions.

If a browser doesn’t support one of those features, that’s fine. Your website won’t break in that browser.

Now that’s not true of all browser features, particularly some JavaScript APIs. If a feature is critical for your site to function then you definitely want to wait until it’s widely supported.

Baseline won’t tell you the difference between those two different kinds of features.

I don’t want Baseline to get too complicated. Like I said, I really like how it’s nice and glanceable right now. But it would be nice if there were some indication that a newly-available feature is a progressive enhancement.

For now it’s up to us to make that distinction. So don’t fall into the trap of thinking that just because a feature isn’t listed as widely-available you can’t use it yet.

Really you want to ask two questions:

  1. How widely available is this feature?
  2. Can this feature be used as a progressive enhancement?

If Baseline tells you that the answer to the first question is “newly-available”, move on to the second question. If the answer to that is “no, it can’t be used as a progressive enhancement”, don’t ship that feature in production just yet.

But if the answer to that second question is “hell yeah, it’s a progressive enhancement!” then go for it, regardless of the answer to the first question.

Y’know, there’s a real irony in a common misunderstanding around progressive enhancement: some people seem to think it’s about not being able to use advanced browser features. In reality it’s the opposite. Progressive enhancement allows you to use advanced browser features even before they’re widely supported.

Displaying HTML web components

Those HTML web components I made for date inputs are very simple. All they do is slightly extend the behaviour of the existing input elements.

This would be the ideal use-case for the is attribute:

<input is="input-date-future" type="date">

Alas, Apple have gone on record to say that they will never ship support for customized built-in elements.

So instead we have to make HTML web components by wrapping existing elements in new custom elements:

<input-date-future>
  <input type="date">
</input-date-future>
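
Registering the wrapper is the usual custom element dance. Here’s a minimal sketch, assuming for illustration that the component’s one job is to disallow dates in the past (not necessarily how my actual component works):

customElements.define('input-date-future', class extends HTMLElement {
  connectedCallback() {
    const input = this.querySelector('input[type="date"]');
    if (!input) return;
    // For illustration: restrict the input to today or later
    input.min = new Date().toISOString().split('T')[0];
  }
});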

The end result is the same. Mostly.

Because there’s now an additional element in the DOM, there could be unexpected styling implications. Like, suppose the original element was a direct child of a flex or grid container. Now that will no longer be true.

So something I’ve started doing with HTML web components like these is adding something like this inside the connectedCallback method:

connectedCallback() {
    this.style.display = 'contents';
  …
}

This tells the browser that, as far as styling is concerned, there’s nothing to see here. Move along.

Or you could (and probably should) do it in your stylesheet instead:

input-date-future {
  display: contents;
}

Just to be clear, you should only use display: contents if your HTML web component is augmenting what’s within it. If you add any behaviours or styling to the custom element itself, then don’t add this style declaration.

It’s a bit of a hack to work around the lack of universal support for the is attribute, but it’ll do.

button invoketarget="share"

I’ve written quite a bit about how I’d like to see a declarative version of the Web Share API. My current proposal involves extending the type attribute on the button element to support a value of “share”.

Well, maybe a different attribute will end up accomplishing what I want! Check out the explainer for the “invokers” proposal over on Open UI. The idea is to extend the button element with a few more attributes.

The initial work revolves around declaratively opening and closing a dialog, which is an excellent use case and definitely where the effort should be focused to begin with.

But there’s also investigation underway into how this could provide a declarative way of invoking JavaScript APIs. The initial proposal is already discussing the fullscreen API, and how it might be invoked like this:

button invoketarget="toggleFullscreen"

They’re also looking into a copy-to-clipboard action. It’s not much of a stretch to go from that to:

button invoketarget="share"

Remember, these are APIs that require a user interaction anyway so extending the button element makes a lot of sense.

I’ll be watching this proposal keenly. I’d love to see some of these JavaScript cowpaths paved with a nice smooth coating of declarative goodness.

Lost in calculation

As well as her personal site, wordridden.com, Jessica also has a professional site, lostintranslation.com.

Both have been online for a very long time. Jessica’s professional site pre-dates the Sofia Coppola film of the same name, which explains how she was able to get that domain name.

Thanks to the internet archive, you can see what lostintranslation.com looked like more than twenty years ago. The current iteration of the site still shares some of that original design DNA.

The most recent addition to the site is a collection of images on the front page: the covers of books that Jessica has translated during her illustrious career. It’s quite an impressive spread!

I used a combination of CSS grid and responsive images to keep the site extremely performant. That meant using a combination of the picture element, source elements, srcset attributes, and the sizes attribute.

That last part always feels weird. I have to tell the browser what sizes the images will be displayed at, which can change depending on the viewport width. But I’ve already given that information in the CSS! It feels weird to have to repeat that information in the HTML.
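
Here’s the kind of repetition I’m talking about (the file names and breakpoints are made up). The sizes attribute restates layout information that already lives in the CSS:

<!-- file names and breakpoints are made up -->
<img
  src="cover-small.jpg"
  srcset="cover-small.jpg 400w,
          cover-medium.jpg 800w,
          cover-large.jpg 1200w"
  sizes="(min-width: 60em) 12rem, 28vw"
  alt="Book cover">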

It’s not just about the theoretical niceties of DRY: Don’t Repeat Yourself. There’s the very practical knock-on effects of having to update the same information in two places. If I update the CSS, I need to remember to update the HTML too. Those concerns no longer feel all that separate.

But I get it. Browsers use a look-ahead parser to start downloading images as soon as possible, so I understand why I need to explicitly state what size the images will be displayed at. Still, it feels like something that a computer should be calculating, not something for a human to list out by hand.

But wait! Most of the images on that page also have a loading attribute with a value of “lazy”. That tells browsers they don’t have to download the images immediately. That sort of negates the look-ahead parser.

That’s why the HTML spec now includes a value for the sizes attribute of “auto”. It’s only supposed to be used in conjunction with loading="lazy" (otherwise it means 100vw).
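
In theory, the made-up example above could then become:

<img
  src="cover-small.jpg"
  srcset="cover-small.jpg 400w,
          cover-medium.jpg 800w,
          cover-large.jpg 1200w"
  sizes="auto"
  loading="lazy"
  alt="Book cover">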

Browser makers are on board with this. You can track the implementation progress for Chromium, WebKit, and Firefox.

I would very much like to see this become a reality!

HTML web components

Web components have been around for quite a while, but it feels like they’re having a bit of a moment right now.

It turns out that the best selling point for web components was “wait and see.” For everyone who didn’t see the benefit of web components over being locked into a specific framework, time is proving to be a great teacher.

It’s not just that web components are portable. They’re also web standards, which means they’ll be around as long as web browsers. No framework can make that claim. As Jake Lazaroff puts it, web components will outlive your JavaScript framework.

At this point React is legacy technology, like Angular. Lots of people are still using it, but nobody can quite remember why. The decision-makers in organisations who chose to build everything with React have long since left. People starting new projects who still decide to build on React are doing it largely out of habit.

Others are making more sensible judgements and, having been bitten by lock-in in the past, are now giving web components a go.

If you’re one of those people making the move from React to web components, there’ll certainly be a bit of a learning curve, but that would be true of any technology change.

I have a suggestion for you if you find yourself in this position. Try not to bring React’s mindset with you.

I’m talking about the way React components are composed. There’s often lots of props doing heavy lifting. The actual component element itself might be empty.

If you want to apply that model to web components, you can. Lots of people do. It’s not unusual to see web components in the wild that look like this:

<my-component></my-component>

The custom element is just a shell. All the actual power is elsewhere. It’s in the JavaScript that does all kinds of clever things with the shadow DOM, templates, and slots.

There is another way. Ask, as Robin does, “what would HTML do?”

Think about composability with existing materials. Do you really need to invent an entirely new component from scratch? Or can you use HTML up until it reaches its limit and then enhance the markup?

Robin writes:

I don’t think we should see web components like the ones you might find in a huge monolithic React app: your Button or Table or Input components. Instead, I’ve started to come around and see Web Components as filling in the blanks of what we can do with hypertext: they’re really just small, reusable chunks of code that extends the language of HTML.

Dave talks about how web components can be HTML with superpowers. I think that’s a good attitude to have. Instead of all-singing, all-dancing web components, it feels a lot more elegant to use web components to augment your existing markup with just enough extra behaviour.

Where does the shadow DOM come into all of this? It doesn’t. And that’s okay. I’m not saying it should be avoided completely, but it should be a last resort. See how far you can get with the composability of regular HTML first.

Eric described his recent epiphany with web components. He created a super-slider custom element that wraps around an existing label and input type="range":

You just take some normal HTML markup, wrap it with a custom element, and then write some JS to add capabilities which you can then style with regular CSS!  Everything’s of the Light Side of the Web.  No need to pierce the Vale of Shadows or whatever.

When you wrap some existing markup in a custom element and then apply some new behaviour with JavaScript, technically you’re not doing anything you couldn’t have done before with some DOM traversal and event handling. But it’s less fragile to do it with a web component. It’s portable. It obeys the single responsibility principle. It only does one thing but it does it well.

Jim created an icon-list custom element that wraps around a regular ul populated with li elements. But he feels almost bashful about even calling it a web component:

Maybe I shouldn’t be using the term “web component” for what I’ve done here. I’m not using shadow DOM. I’m not using the templates or slots. I’m really only using custom elements to attach functionality to a specific kind of component.

I think what Eric and Jim are doing is exemplary. See also Zach’s web components.

At the end of his post, Eric says he’d like a nice catchy term for these kinds of web components. In Dave’s catalogue of web components, they’re called “element extensions.” I like that. It’s pretty catchy.

Or we could call them “HTML web components.” If your custom element is empty, it’s not an HTML web component. But if you’re using a custom element to extend existing markup, that’s an HTML web component.

React encouraged a mindset of replacement: “forget what browsers can do; do everything in a React component instead, even if you’re reinventing the wheel.”

HTML web components encourage a mindset of augmentation instead.

Making the Patterns Day website

I had a lot of fun making the website for Patterns Day.

If you’re interested in the tech stack, here’s what I used:

  1. HTML
  2. CSS

Actually, technically it’s all HTML because the styles are inside a style element rather than a separate style sheet, but you know what I mean. Also, there is technically some JavaScript but all it does is register a service worker that takes care of caching and going offline.
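
That registration is just a couple of lines (the file name here is a placeholder):

if ('serviceWorker' in navigator) {
  // Placeholder path to the service worker script
  navigator.serviceWorker.register('/serviceworker.js');
}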

I didn’t use any build tools. There was no pipeline. There is no node_modules folder filling up my hard drive. Nothing was automated. The website was hand-crafted the long hard stupid way.

I started with the content. I wrote out the words and marked them up with the most appropriate HTML elements.

A screenshot of an unstyled web page for Patterns Day.

Time to layer on the presentation.

For the design, I turned to Michelle for help. I gave her a brief, describing the vibe of the conference, and asked her to come up with an appropriate visual language.

Crucially, I asked her not to design a website. Instead I asked her to think about other places where this design language might be used: a poster, social media, anything but a website.

Partly I was doing this for my own benefit. If you give me a pixel-perfect design for a web page and tell me to code it up, either I won’t do it or I won’t enjoy it. I just don’t get any motivation out of that kind of direct one-to-one translation.

But give me guardrails, give me constraints, give me boundary conditions, and off I go!

Michelle was very gracious in dealing with such a finicky client as myself (“Can you try this other direction?”, “Hmm… I think I preferred the first one after all!”) She delivered a colour palette, a type scale, typeface choices, and some wonderful tiling patterns …it is Patterns Day after all!

With just a few extra lines of CSS, the basic typography was in place.

A screenshot of the web page for Patterns Day with web fonts applied.

I started layering on the colours. Even though this was a one-page site, I still made liberal use of custom properties in the CSS. It just feels good to be able to update one value and see the results, well …cascade.

A screenshot of the web page for Patterns Day with colours added.

I had a lot of fun with the tiling background images. SVG was the perfect format for these. And because the tiles were so small in file size, I just inlined them straight into the CSS.

By this point, I felt like I was truly designing in the browser. Adjusting spacing, playing around with layout, and all that squishy stuff. Some of the best results came from happy accidents—the way that certain elements behaved at certain screen sizes would lead me into little experiments that yielded interesting results.

I’m not sure it’s possible to engineer that kind of serendipity in Figma. Figma was the perfect tool for exploring ideas around the visual vocabulary, and for handing over design decisions around colour, typography, and texture. But when it comes to how the content is going to behave on the World Wide Web, nothing beats a browser for fidelity.

A screenshot of the web page for Patterns Day with some changes applied.

By this point I was really sweating the details, like getting the logo just right and adjusting the type scale for different screen sizes. Needless to say, Utopia was a godsend for that.

I was also checking back in with Michelle to get her take on design decisions I was making.

I could’ve kept tinkering but the diminishing returns were a sign that it was time to put this out into the world.

A screenshot of the web page for Patterns Day with the logo in place.

It felt really good to work on a web page like this. It felt like I was getting my hands into the soil of the web. I don’t think it’s an accident that the result turned out to be very performant.

Getting hands-on like this stops me from getting rusty. And honestly, working with CSS these days is a joy. There’s such power to be had from using var() in combination with functions like calc() and clamp(). Layout is a breeze with flexbox and grid. Browser differences are practically non-existent. We’ve never had it so good.

Here’s something I noticed about my relationship to CSS: my brain has finally made the switch to logical properties. Now if I’m looking at some CSS and I see left, right, top, or bottom, it looks like a bug to me. Those directional properties feel loaded with assumptions whereas logical properties feel much more like working with the grain of the web.
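
To illustrate, the translation usually looks something like this:

/* directional: carries assumptions about the writing direction */
margin-left: 1em;
text-align: right;

/* logical: adapts to the writing mode */
margin-inline-start: 1em;
text-align: end;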

Button types

I’ve been banging the drum for a button type="share" for a while now.

I’ve also written about other potential button types. The pattern I noticed was that, if a JavaScript API first requires a user interaction—like the Web Share API—then that’s a good hint that a declarative option would be useful:

The Fullscreen API has the same restriction. You can’t make the browser go fullscreen unless you’re responding to a user gesture, like a click. So why not have button type="fullscreen" in HTML to encapsulate that? And again, the fallback in non-supporting browsers is predictable—it behaves like a regular button—so this is trivial to polyfill.
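
For comparison, the imperative version that such a button would encapsulate is short, but it still requires script (fullscreenButton being a hypothetical element reference):

// fullscreenButton is a hypothetical element reference
fullscreenButton.addEventListener('click', () => {
  document.documentElement.requestFullscreen();
});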

There’s another “smell” that points to some potential button types: what functionality do browsers provide in their interfaces?

Some browsers provide a print button. So how about button type="print"? The functionality is currently doable with button onclick="window.print()" so this would be a nicer, more declarative way of doing something that’s already possible.

It’s the same with back buttons, forward buttons, and refresh buttons. The functionality is available through a browser interface, and it’s also scriptable, so why not have a declarative equivalent?
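
Each of those is a one-liner today, which makes the case for declarative sugar even stronger. The button references here are hypothetical:

backButton.addEventListener('click', () => history.back());
forwardButton.addEventListener('click', () => history.forward());
refreshButton.addEventListener('click', () => location.reload());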

How about bookmarking?

And remember, the browser interface isn’t always visible: progressive web apps that launch with minimal browser UI need to provide this functionality.

Šime Vidas was wondering about button type="copy" for copying to clipboard. Again, it’s something that’s currently scriptable and requires a user gesture. It’s a little more complex than the other actions because there needs to be some way of providing the text to be copied, but it’s definitely a valid use case.
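
The imperative version today uses the asynchronous Clipboard API. A sketch, with a hypothetical element holding the text to be copied:

copyButton.addEventListener('click', async () => {
  // #excerpt is a hypothetical element holding the text
  const text = document.querySelector('#excerpt').textContent;
  await navigator.clipboard.writeText(text);
});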

  • button type="share"
  • button type="fullscreen"
  • button type="print"
  • button type="bookmark"
  • button type="back"
  • button type="forward"
  • button type="refresh"
  • button type="copy"

Any more?

Days of style and standards

I first spoke at CSS Day in Amsterdam back in 2016. Well, technically it was the HTML Day preceding CSS Day, when I talked about the A element. I spoke at CSS Day again last year, when I gave a presentation about alternative histories of styling.

One of the advantages to having spoken at the event in the past is that I’m offered a complimentary ticket to the event every year. That’s an offer I’ve made the most of.

I’ve just returned from the latest iteration of CSS Day. It was, as always, excellent. I’ve said it before and I’ll say it again, but I just love the way that this event treats CSS with the respect it deserves. I always attend thinking “I know CSS”, but I always leave thinking “I learned a lot about CSS!”

The past few years have been incredibly exciting for the language. We’ve been handed feature after feature, including capabilities we were told just weren’t possible: container queries; :has; cascade layers; view transitions!

As Paul points out in his write-up, there’s been a shift in how these features feel too. In the past, the feeling was “there’s some great stuff arriving and it’ll be so cool once we’ve got browser support.” Now the feeling is finally catching up to the reality: these features are here now. If browser support for an exciting feature is still an issue, wait a few weeks.

Mind you, as Paul also points out, maybe that’s down to the decreased diversity in rendering engines. If a feature ships in Chromium, WebKit, and Gecko, then it’s universally supported. On the one hand, that’s great for developers. But on the other hand, it’s not ideal for the ecosystem of the web.

Anyway, as expected, there was a ton of mind-blowing stuff at CSS Day 2023. Most of the talks were deep dives into specific features. Those deep dives were bookended by big-picture opening and closing talks.

Manuel closed out the show by talking about how he’s changing the way he writes and thinks about CSS. I think that’s a harbinger of what’s to come in the next year or so. We’ve had this wonderful burst of powerful new features over the past couple of years; I think what we’ll see next is consolidation. Understanding how these separate pieces play well together is going to be very powerful.

Heck, just exploring all the possibilities of custom properties and :has could be revolutionary. When you add in the architectural implications of cascade layers and container queries, it feels like a whole new paradigm waiting to happen.

That was the vibe of Una’s opening talk too. It was a whistle-stop tour of all the amazing features that have already landed, and some that will be with us very soon.

But Una also highlighted the heartbreaking disparity between the brilliant reality of CSS in browsers today versus how the language is perceived.

Look at almost any job posting for front-end development and you’ll see that CSS still isn’t valued as its own skill. Never mind that you could specialise in a subset of CSS—layout, animation, architecture—and provide 10× value to an organisation, the recruiters are going to play it safe and ask you if you know React.

Rachel Nabors and I were chatting about this gap between the real and perceived value of modern CSS. She astutely pointed out that CSS is kind of a victim of its own resilience. The way you wrote CSS ten years ago still works, and will continue to work. That’s by design. Yes, you can write much better, more resilient CSS today, but if those qualities aren’t valued by an organisation, then you’re casting your pearls before swine.

That said, it’s also true that the JavaScript you wrote ten years ago continues to work today and will continue to work in the future. So why is it that devs seem downright eager to try the latest JavaScript hotness but are reluctant to use CSS that’s been stable for years?

Or perhaps that’s not an accurate representation of the JavaScript ecosystem. It may well be that the eagerness only extends to libraries and frameworks. There’s reluctance to embrace native JavaScript APIs like Proxy or web components. There’s a weird lack of trust in web standards, and an undeserved faith in third-party libraries.

Una speculated that CSS needs a rebranding, like we did back in the days of CSS3, a term which didn’t have any technical meaning but helped galvanise excitement.

I’m not so sure. A successful rebranding today becomes a millstone tomorrow. Again, see CSS3.

Una finished with a call-to-action. Let’s work on building the CSS community.

She compared the number of “front-end” conferences dedicated to JavaScript—over 50 listed on one website—to the number of conferences dedicated to CSS. There’s just one. CSS Day.

Heydon wrote:

It occurs to me there are two types of web conferences: know-your-craft conferences and get-ahead conferences. It’s no coincidence there are simultaneously more get-ahead conferences and more JS-framework conferences.

Una encouraged us to organise more gatherings. It doesn’t need to be a conference. It could just be a local meet-up.

I think that’s an excellent suggestion. As Manuel puts it:

My biggest takeaway: The CSS community needs you!

For me, the value of CSS Day was partly in the excellent content being presented, but it was also in the opportunity to gather with like-minded individuals and realise I’m not alone. It’s all too easy to get gaslit by the grift of “modern web development”, which seems to be built on a foundation of user-hostile priorities that don’t make sense to me—over-engineering, intertwingling of concerns, and developer experience über alles. CSS Day was a welcome reminder to fuck that noise.

Add view transitions to your website

I must admit, when Jake told me he was leaving Google, I got very worried about the future of the View Transitions API.

To recap: Chrome shipped support for the API, but only for single page apps. That had me worried:

If the View Transitions API works across page navigations, it could be the single best thing to happen to the web in years.

If the View Transitions API only works for single page apps, it could be the single worst thing to happen to the web in years.

Well, the multi-page version still hasn’t yet shipped in Chrome stable, but it is available in Chrome Canary behind a flag, so it looks like it’s almost here!

Robin took the words out of my mouth:

Anyway, even this cynical jerk is excited about this thing.

Are you the kind of person who flips feature flags on in nightly builds to test new APIs?

Me neither.

But I made an exception for the View Transitions API. So did Dave:

I think the most telling predictor for the success of the multi-page View Transitions API – compared to all other proposals and solutions that have come before it – is that I actually implemented this one. Despite animations being my bread and butter for many years, I couldn’t be arsed to even try any of the previous generation of tools.

Dave’s post is an excellent step-by-step introduction to using view transitions on your website. To recap:

Enable these two flags in Chrome Canary:

chrome://flags#view-transition
chrome://flags#view-transition-on-navigation

Then add this meta element to the head of your website:

<meta name="view-transition" content="same-origin">

You could stop there. If you navigate around your site, you’ll see that the navigations now fade in and out nicely from one page to another.

But the real power comes with transitioning page elements. Basically, you want to say “this element on this page should morph into that element on that page.” And when I say morph, I mean morph. As Dave puts it:

Behind the scenes the browser is rasterizing (read: making an image of) the before and after states of the DOM elements you’re transitioning. The browser figures out the differences between those two snapshots and tweens between them similar to Apple Keynote’s “Magic Morph” feature, the liquid metal T-1000 from Terminator 2: Judgement Day, or the 1980s cartoon series Turbo Teen.

If those references are lost on you, how about the popular kids’ book series Animorphs?

Some classic examples would be:

  • A thumbnail of a video on one page morphs into the full-size video on the next page.
  • A headline and snippet of an article on one page morphs into the full article on the next page.

I’ve added view transitions to The Session. Where I’ve got index pages with lists of titles, each title morphs into the heading on the next page.

Again, Dave’s post was really useful here. Each transition needs a unique name, so I used Dave’s trick of naming each transition with the ID of the individual item being linked to.

In the recordings section, for example, there might be a link like this on the index page:

<a href="/recordings/7812" style="view-transition-name: recording-7812">The Banks Of The Moy</a>

Which, if you click on it, takes you to the page with this heading:

<h1><span style="view-transition-name: recording-7812">The Banks Of The Moy</span></h1>

Why the span? Well, like Dave, I noticed some weird tweening happening between block and inline elements. Dave solved the problem with width: fit-content on the block-level element. I just stuck in an extra inline element.

Anyway, the important thing is that the name of the view transition matches: recording-7812.

I also added a view transition to pages that have maps. The position of the map might change from page to page. Now there’s a nice little animation as you move from one page with a map to another page with a map.

thesession.org View Transitions

That’s all good, but I found myself wishing that I could just have those enhancements. Every single navigation on the site was triggering a fade in and out—the default animation. I wondered if there was a way to switch off the default fading.

There is! That default animation is happening on a view transition named root. You can get rid of it with this snippet of CSS:

::view-transition-image-pair(root) {
  isolation: auto;
}
::view-transition-old(root),
::view-transition-new(root) {
  animation: none;
  mix-blend-mode: normal;
  display: block;
}

Voila! Now only the view transitions that you name yourself will get applied.

You can adjust the timing, the easing, and the animation properties of your view transitions. Personally, I was happy with the default morph.
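
If you do want to override the defaults, you can target the generated pseudo-elements by transition name. A sketch using the recording-7812 example from earlier:

::view-transition-old(recording-7812),
::view-transition-new(recording-7812) {
  animation-duration: 0.5s;
  animation-timing-function: ease-in-out;
}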

In fact, that’s one of the things I like about this API. It’s another good example of declarative design. I say what I want to happen, but I don’t need to specify the details. I’ll let the browser figure all that out.

That’s what’s got me so excited about this API. Yes, it’s powerful. But just as important, it’s got a very low barrier to entry.

Chris has gathered a bunch of examples together in his post Early Days Examples of View Transitions. Have a look around to get some ideas.

If you like what you see, I highly encourage you to add view transitions to your website now.

“But wait,” I hear you cry, “this isn’t supported in any public-facing browser yet!”

To which, I respond “So what?” It’s a perfect example of progressive enhancement. Adding one meta element and a smidgen of CSS will do absolutely no harm to your website. And while no-one will see your lovely view transitions yet, once browsers do start shipping with support for the API, your site will automatically get better.

Your website will be enhanced. Progressively.

Update: Simon Pieters quite rightly warns against adding view transitions to live sites before the API is done:

in general, using features before they ship in a browser isn’t a great idea since it can poison the feature with legacy content that might break when the feature is enabled. This has happened several times and renames or so were needed.

Good point. I must temper my excitement with pragmatism. Let me amend my advice:

I highly encourage you to experiment with view transitions on your website now.

Innovation

I did an episode of the Clearleft podcast on innovation a while back:

Everyone wants to be innovative …but no one wants to take risks.

The word innovation is often bandied about in an unquestioned positive way. But if we acknowledge that innovation is—by definition—risky, then the exhortations sound less positive.

“We provide innovative solutions for businesses!” becomes “We provide risky solutions for businesses!”

I was reminded of this when I saw the website for the Podcast Standards Project. The original text on the website described the project as:

…a grassroots coalition working to establish modern, open standards, to enable innovation in the podcast industry.

I pushed back on that wording (partly because I’ve seen the word “innovation” used as a smoke screen for user-hostile practices like tracking and surveillance). The wording has since changed to:

…a grassroots coalition dedicated to creating standards and practices that improve the open podcasting ecosystem for both listeners and creators.

That’s better. It’s more precise.

Am I nitpicking? Only if you think that “innovation” and “improvement” are synonyms. I don’t think they are.

Innovation implies change. Improvement implies positive change.

Not all change is positive. Not all innovation is positive.

Innovation goes hand in hand with disruption. Again, disruption involves change. But not necessarily positive change.

Think about the antonyms of change and disruption: stasis and stability. Those words don’t sound very exciting, but in some arenas they’re exactly what you should be aiming for; arenas like infrastructure or standards.

Not to get all pace layers-y here, but it seems to me that every endeavour has a sweet spot for innovation. For some projects, too little innovation is bad. For others, too much innovation is worse.

The trick is knowing which kind of project you’re working on.

(As a side note, I think some people use the word innovation to describe the generative, divergent phase of a design project: “how might we come up with innovative new approaches?” But we already have a word to describe the practice of generating novel and interesting ideas. That word isn’t innovation. It’s creativity.)

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.

I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the Memex, was a direct inspiration for Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.

Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.

Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words, they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

Negativity bias

When I wrote about my hopes and fears for the View Transitions API, a few people latched on to this sentiment:

If the View Transitions API only works for single page apps, it could be the single worst thing to happen to the web in years.

But I also wrote:

If the View Transitions API works across page navigations, it could be the single best thing to happen to the web in years.

I think it’s worth focusing on that.

Part of the problem is that I gave my hopes and fears an equal airing. But they’re not equally likely.

Take the possibility that the View Transitions API only ships for single page apps, but never ships for regular page transitions. The consequences of that would be big—the API would act as an incentive to build single page apps. But the likelihood of that happening is small. In fact, according to Jake, there’s already an implementation for page transitions in the works at Chrome.

Now what if the View Transitions API ships for pages? The consequences would be equally big—the API would act as an incentive to ditch single page apps and build in a more performant, resilient way. Best of all, the chances of that happening are very large indeed (pretty much a certainty now, given Jake’s update).

So I made a comparison between both of the consequences, which are equally large, but I didn’t make a corresponding comparison of the likelihoods, which are not equally large. Mea culpa!

I should’ve made it clearer that, although the consequences would be really bad if the View Transitions API only supports single page apps, the actual likelihood of that is pretty slim.

That’s probably my negativity bias showing through. (The reason I have a negativity bias is because I am a human. Like, have you ever noticed that if you get feedback on something and 98% of it is positive, you inevitably fixate on the 2%?)

Anyway, the real takeaway here is that if the View Transitions API ships for pages, then the consequences will be really, really good! It would be another nail in the coffin for monolithic JavaScript frameworks slowing down the web. And best of all, the likelihood of this happening is very high!

So let me amend my closing sentences from my previous post:

If the View Transitions API only works for single page apps—which is very unlikely—it could be the single worst thing to happen to the web in years.

If the View Transitions API works across page navigations—which is very, very likely—it could be the single best thing to happen to the web in years.

The glass is half full and it’s only going to get fuller. Time to start planning for a turbo-charged web now.

If you’ve got a website with full page navigations, start thinking about how you’ll be able to apply the View Transitions API as a progressive enhancement to improve the user experience.
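For a site with full page navigations, that enhancement could end up being remarkably small. Here’s a sketch using the @view-transition at-rule proposed for cross-document transitions (the exact opt-in syntax may still change):

@view-transition {
  navigation: auto;
}

/* Optionally adjust the default cross-fade */
::view-transition-old(root),
::view-transition-new(root) {
  animation-duration: 0.25s;
}

Browsers that don’t support it will simply ignore it and carry on with regular page loads, which is exactly what you want from a progressive enhancement.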

If you’ve got a single page app, start thinking about how to ditch a whole bunch of unnecessary dependencies to make a more lightweight foundation of HTML instead of JavaScript, and still get all those slick transitions you get in a single page app!

Time for transitions

I am simultaneously very excited and very nervous about the View Transitions API.

You may know it by its former name—Shared Element Transitions. The name change is very recent.

I’ve been saying for years that some kind of API like this would be brilliant:

I honestly think if browsers implemented this, 80% of client-rendered Single Page Apps could be done as regular good ol’-fashioned websites.

Miriam Suzanne describes the theory of View Transitions succinctly:

Shared-element transitions are designed to work with standard web navigation across multiple page loads, as well as page transitions in ‘single-page’ apps (often called SPAs).

This all sounds brilliant. But the devil is in the implementation details. Right now, the API only works for single page apps. This is totally understandable. For purely pragmatic reasons, single page apps are a simple use case to solve for. It’s going to take a lot more work to get this API to work for multi-page apps (or as we used to call them, websites).
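To give an idea of the single page app flavour, here’s a minimal sketch; the updateTheDOM function is a hypothetical stand-in for however your app swaps content in and out:

function navigate(url) {
  // Fallback for browsers without the API: update the DOM with no transition
  if (!document.startViewTransition) {
    updateTheDOM(url);
    return;
  }
  // The browser snapshots the old state, runs the callback,
  // then animates between the old and new states
  document.startViewTransition(() => updateTheDOM(url));
}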

If we get a View Transitions API that works across page navigations, it could potentially turbo-charge the web. It will act as a disincentive to building single page apps—you’d be able to provide swish transitions without sacrificing performance or resilience at the altar of a heavy-handed JavaScript-only architecture.

But if the API only ever works for single page apps (which is the current situation), then it will act as an incentive to make any kind of website into a single page app, regardless of whether it’s actually the appropriate architecture.

That prospect has me very worried indeed.

I’m making my feelings on this known just in case any of the implementors out there are thinking, “Hey, maybe it’s fine that this API only works for single page apps—I’m sure most people would be happy with that.”

If the View Transitions API works across page navigations, it could be the single best thing to happen to the web in years.

If the View Transitions API only works for single page apps, it could be the single worst thing to happen to the web in years.

Update: Jake says:

We’re currently landing code in Chrome for the MPA version.

Very happy to hear that! It’s already in the spec, but it’s good to hear that the implementation isn’t going to lag too much.

Also, read this follow-up.

Supporting logical properties

I wrote recently about making the switch to logical properties over on The Session.

Initially I tried ripping the band-aid off and swapping out all the directional properties for logical properties. After all, support for logical properties is green across the board.
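The swap itself is straightforward. In a horizontal, left-to-right writing mode the logical properties map directly onto the directional ones (the element selector here is just a placeholder):

/* Before */
element {
  margin-left: 1em;
  padding-right: 0.5em;
}

/* After */
element {
  margin-inline-start: 1em;
  padding-inline-end: 0.5em;
}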

But then I got some reports of people seeing formatting issues. These people were using Safari on devices that could no longer update their operating system. Because versions of Safari are tied to versions of the operating system, there was nothing they could do other than switch to using a different browser.

I’ve said it before and I’ll say it again, but as long as this situation continues, Safari is not an evergreen browser. (I also understand that the problem lies with the OS architecture—it must be incredibly frustrating for the folks working on WebKit and/or Safari.)

So I needed to add fallbacks for older browsers that don’t support logical properties. Or, to put it another way, I needed to add logical properties as a progressive enhancement.

“No problem!” I thought. “The way that CSS works, I can just put the logical version right after the directional version.”

element {
  margin-left: 1em;
  margin-inline-start: 1em;
}

But that’s not true in this case. I’m not over-riding a value, I’m setting two different properties.

In a left-to-right language like English it’s true that margin-inline-start will over-ride margin-left. But in a right-to-left language, I’ve just set margin-left and margin-inline-start (which happens to be on the right).
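Here’s the problem in miniature. For an element in a right-to-left context (say, with dir="rtl" on the html element):

element {
  margin-left: 1em;         /* still applies to the left side */
  margin-inline-start: 1em; /* resolves to the right side */
}
/* Result: 1em of margin on both sides */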

This is a job for @supports!

element {
  margin-left: 1em;
}
@supports (margin-inline-start: 1em) {
  element {
    margin-left: unset;
    margin-inline-start: 1em;
  }
}

I’m doing two things inside the @supports block. I’m applying the logical property I’ve just tested for. I’m also undoing the previously declared directional property.

A value of unset is perfect for this:

The unset CSS keyword resets a property to its inherited value if the property naturally inherits from its parent, and to its initial value if not. In other words, it behaves like the inherit keyword in the first case, when the property is an inherited property, and like the initial keyword in the second case, when the property is a non-inherited property.

Now I’ve got three CSS features working very nicely together:

  1. @supports (also known as feature queries),
  2. logical properties, and
  3. the unset keyword.

For anyone using an up-to-date browser, none of this will make any difference. But anyone who can’t update their Safari browser because they can’t update their operating system (and doesn’t want to throw out a perfectly functional Apple device) will continue to get the older directional properties:

I discovered that my Mom’s iPad was a 1st generation iPad Air. Apple stopped supporting that device in iOS 12, which means it was stuck with whatever version of Safari last shipped with iOS 12.

Alternative stylesheets

My website has different themes you can choose from. I don’t just mean a dark mode. These themes all look very different from one another.

I assume that 99.99% of people just see the default theme, but I keep the others around anyway. Offering different themes was originally intended as a way of showcasing the power of CSS, and specifically the separation of concerns between structure and presentation. I started doing this before the CSS Zen Garden was created. Dave really took it to the next level by showing how the same HTML document could be styled in an infinite number of ways.

Each theme has its own stylesheet. I’ve got a very simple little style switcher on every page of my site. Selecting a different theme triggers a page refresh with the new styles applied and sets a cookie to remember your preference.
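In case you’re wondering how a switcher like that might work, here’s a rough sketch (the cookie name and the reload are illustrative, not a description of my actual code):

function switchTheme(themeName) {
  // Remember the choice for a year
  document.cookie = 'theme=' + encodeURIComponent(themeName) +
    '; path=/; max-age=31536000';
  // Reload so the server can serve the page with the chosen stylesheet
  window.location.reload();
}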

I also list out the available stylesheets in the head of every page using link elements that have rel values of alternate and stylesheet together. Each link element also has a title attribute with the name of the theme. That’s the standard way to specify alternative stylesheets.
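The markup looks something like this (the file names and theme titles are purely illustrative):

<link rel="stylesheet" href="/css/default.css" title="Default">
<link rel="alternate stylesheet" href="/css/dark.css" title="Dark">
<link rel="alternate stylesheet" href="/css/highcontrast.css" title="High contrast">

The titled stylesheet without the alternate keyword is the preferred one; the others are the alternatives a browser can offer to switch to.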

In Firefox you can switch between the specified stylesheets from the View menu by selecting Page Style (notice that there’s also a No style option—very handy for checking your document structure).

Other browsers like Chrome and Safari don’t do anything with the alternative stylesheets. But they don’t ignore them.

Every browser makes a network request for each alternative stylesheet. The request is non-blocking and seems to be low priority, which is good, but I’m somewhat perplexed by the network request being made at all.

I get why Firefox is requesting those stylesheets. It’s similar to requesting a print stylesheet. Even if the network were to drop, you still want those styles available to the user.

But I can’t think of any reason why Chrome or Safari would download the alternative stylesheets.