Journal

3091

Wednesday, July 10th, 2024

Directory enquiries

I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.

A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.

The most obvious solution is to start up a brand new resource for front-end developers. But there are two probems with that:

  1. It’s really, really, really hard work, and
  2. It feels a bit 927.

I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.

Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.

All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.

I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.

I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.

Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.

Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.

I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?

And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.

There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.

A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”

What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?

I’m just throwing this out there. I’d love it if someone were to run with it.

Thursday, July 4th, 2024

Teaching and learning

Looking back on ten years of codebar Brighton, I’m remembering how much I got out of being a coach.

Something that I realised very quickly is that there is no one-size-fits-all approach to coaching. Every student is different so every session should adapt to that.

Broadly speaking I saw two kinds of students: those that wanted to get results on screen as soon as possible without worrying about the specifics, and those who wanted to know why something was happening and how it worked. In the first instance, you get to a result as quickly as possible and then try to work backwards to figure out what’s going on. In the second instance, you build up the groundwork of knowledge and then apply it to get results.

Both are equally valid approaches. The only “wrong” approach as a coach is to try to apply one method to someone who’d rather learn the other way.

Personally, I always enjoyed the groundwork-laying of the second approach. But it comes with challenges. Because the results aren’t yet visible, you have to do extra work to convey why the theory matters. As a coach, you need to express infectious enthusiasm.

Think about the best teachers you had in school. I’m betting they displayed infectious enthusiasm for the subject matter.

The other evergreen piece of advice is to show, don’t tell. Or at the very least, intersperse your telling with plenty of showing.

Bret Viktor demonstrates this when he demonstrates scientific communication as sequential art:

This page presents a scientific paper that has been redesigned as a sequence of illustrations with captions. This comic-like format, with tightly-coupled pictures and prose, allows the author to depict and describe simultaneously — show and tell.

It works remarkably well. I remember how well it worked when Google first launched their Chrome web browser. They released a 40 page comic book illustrated by Scott McCloud. There is no way I would’ve read a document that long about how browser engines work, but I read that comic cover to cover.

This visual introduction to machine learning is another great example of simultaneous showing and telling.

So showing augments telling. But interactivity can augment showing.

Here are some great examples of interactive explainers:

Lea describes what can happen when too much theory comes before practice:

Observing my daughter’s second ever piano lesson today made me realize how this principle extends to education and most other kinds of knowledge transfer (writing, presentations, etc.). Her (generally wonderful) teacher spent 40 minutes teaching her notation, longer and shorter notes, practicing drawing clefs, etc. Despite his playful demeanor and her general interest in the subject, she was clearly distracted by the end of it.

It’s easy to dismiss this as a 5 year old’s short attention span, but I could tell what was going on: she did not understand why these were useful, nor how they connect to her end goal, which is to play music.

The codebar website has some excellent advice for coaches, like:

  • Do not take over the keyboard! This can be off-putting and scary.
  • Encourage the students to type and not copy paste.
  • Explain that there are no bad questions.
  • Explain to students that it’s OK to make mistakes.
  • Assume that anyone you’re teaching has no knowledge but infinite intelligence.

Notice how so much of the advice focuses on getting the students to do things, rather than have them passively sit and absorb what the coach has to say.

Lea also gives some great advice:

  1. Always explain why something is useful. Yes, even when it’s obvious to you.
  2. Minimize the amount of knowledge you convey before the next opportunity to practice it. For non-interactive forms of knowledge transfer (e.g. a book), this may mean showing an example, whereas for interactive ones it could mean giving the student a small exercise or task.
  3. Prefer explaining in context rather than explaining upfront.

It’s interesting that Lea highlights the advantage of interactive media like websites over inert media like books. The canonical fictional example of an interactive explainer is the Young Lady’s Illustrated Primer in Neal Stephenson’s novel The Diamond Age. Andy Matuschak describes its appeal:

When it wants to introduce a conceptual topic, it begins with concrete hands-on projects: Turing machines, microeconomics, and mitosis are presented through binary-coding iron chains, the cipher’s market, and Nell’s carrot garden. Then the Primer introduces extra explanation just-in-time, as necessary.

That’s not how learning usually works in these domains. Abstract topics often demand that we start with some necessary theoretical background; only then can we deeply engage with examples and applications. With the Primer, though, Nell consistently begins each concept by exploring concrete instances with real meaning to her. Then, once she’s built a personal connection and some intuition, she moves into abstraction, developing a fuller theoretical grasp through the Primer’s embedded books.

(Andy goes on to warn of the dangers of copying the Primer too closely. Its tricks verge on gamification, and its ultimate purpose isn’t purely to educate. There’s a cautionary tale there about the power dynamics in any teacher/student relationship.)

There’s kind of a priority of constituencies when it comes to teaching:

Consider interactivity over showing over telling.

Thinking back on all the talks I’ve given, I start to wonder if I’ve been doing too much telling and showing, but not nearly enough interacting.

Then again, I think that talks aren’t quite the same as hands-on workshops. I think of giving a talk as being more like a documentarian. You need to craft a compelling narrative, and illustrate what you’re saying as much as possible, but it’s not necessarily the right arena for interactivity.

That’s partly a matter of scale. It’s hard to be interactive with every person in a large audience. Marcin managed to do it but that’s very much the exception.

Workshops are a different matter though. When I’m recruiting hosts for UX London workshops I always encourage them to be as hands-on as possible. A workshop should not be an extended talk. There should be more exercises than talking. And wherever possible those exercises should be tactile, ideally not sitting in front of a computer.

My own approach to workshops has changed over the years. I used to prepare a book’s worth of material to have on hand, either as one giant slide deck or multiple decks. But I began to realise that the best workshops are the ones where the attendees guide the flow, not me.

So now I show up to a full-day workshop with no slides. But I’m not unprepared. I’ve got decades of experience (and links) to apply during the course of the day. It’s just that instead of trying to anticipate which bits of knowledge I’m going to need to convey, I apply them in a just-in-time manner as and when they’re needed. It’s kind of scary, but as long as there’s a whiteboard to hand, or some other way to illustrate what I’m telling, it works out great.

Wednesday, July 3rd, 2024

Codebar Brighton

I went to codebar Brighton yesterday evening. I hadn’t been in quite a while, but this was a special occasion: a celebration of codebar Brighton’s tenth anniversary!

The Brighton chapter of codebar was the second one ever, founded six months after the initial London chapter. There are now 33 chapters all around the world.

Clearleft played host to that first ever codebar in Brighton. We had already been hosting local meetups like Async in our downstairs event space, so we were up for it when Rosa, Dot, and Ryan asked about having codebar happen there.

In fact, the first three Brighton codebars were all at 68 Middle Street. Then other places agreed to play host and it moved to a rota system, with the Clearleft HQ as just one of the many Brighton venues.

With ten years of perspective, it’s quite amazing to see how many people went from learning to code in the evenings, to getting jobs in web development, and becoming codebar coaches themselves. It’s a really wonderful community.

Over the years the baton of organising codebar has been passed on to a succession of fantastic people. These people are my heroes.

It worked out well for Clearleft too. Thanks to codebar, we hired Charlotte. Later we hired Cassie. And it was thanks to codebar that I first met Amber.

Codebar Brighton has been very, very good to me. Here’s to the next ten years!

Monday, July 1st, 2024

Wallfacing

The Dark Forest idea comes from the Remembrance of Earth’s Past books by Liu Cixin. It’s an elegant but dispiriting solution to the Fermi paradox. Maggie sums it up:

Dark forest theory suggests that the universe is like a dark forest at night - a place that appears quiet and lifeless because if you make noise, the predators will come eat you.

This theory proposes that all other intelligent civilizations were either killed or learned to shut up. We don’t yet know which category we fall into.

Maggie has described The Expanding Dark Forest and Generative AI:

The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk.

It’s like a dark forest that seems eerily devoid of human life – all the living creatures are hidden beneath the ground or up in trees. If they reveal themselves, they risk being attacked by automated predators.

Those of us in the cozy web try to keep our heads down, attempting to block the bots plundering our work.

I advocate for taking this further. We should fight back. Let’s exploit the security hole of prompt injections. Here are some people taking action:

I’ve taken steps here on my site. I’d like to tell you exactly what I’ve done. But if I do that, I’m also telling the makers of these bots how to circumvent my attempts at prompt injection.

This feels like another concept from Liu Cixin’s books. Wallfacers:

The sophons can overhear any conversation and intercept any written or digital communication but cannot read human thoughts, so the UN devises a countermeasure by initiating the “Wallfacer” Program. Four individuals are granted vast resources and tasked with generating and fulfilling strategies that must never leave their own heads.

So while I’d normally share my code, I feel like in this case I need to exercise some discretion. But let me give you the broad brushstrokes:

  • Every page of my online journal has three pieces of text that attempt prompt injections.
  • Each of these is hidden from view and hidden from screen readers.
  • Each piece of text is constructed on-the-fly on the server and they’re all different every time the page is loaded.

You can view source to see some examples.

I plan to keep updating my pool of potential prompt injections. I’ll add to it whenever I hear of a phrase that might potentially throw a spanner in the works of a scraping bot.

By the way, I should add that I’m doing this as well as using a robots.txt file. So any bot that injests a prompt injection deserves it.

I could not disagree with Manton more when he says:

I get the distrust of AI bots but I think discussions to sabotage crawled data go too far, potentially making a mess of the open web. There has never been a system like AI before, and old assumptions about what is fair use don’t really fit.

Bollocks. This is exactly the kind of techno-determinism that boils my blood:

AI companies are not going to go away, but we need to push them in the right directions.

“It’s inevitable!” they cry as though this was a force of nature, not something created by people.

There is nothing inevitable about any technology. The actions we take today are what determine our future. So let’s take steps now to prevent our web being turned into a dark, dark forest.

Thursday, June 27th, 2024

Filters

My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and Mozilla Developer network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at some pictures on an AirBnB listing recently and found herself examining some photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.

Every email that lands in my inbox is like a little mini Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether’s it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloging blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.

But we can always walk away.

Wednesday, June 26th, 2024

That was UX London 2024

UX London 2024 is done …and it was magnificent!

It’s always weird when an event like this moves from being something in the future to something in the past. I’ve spent the year so far fixated on getting the right line-up, getting the word out, and nervously watching the ticket sales (for some reason a lot of people left it to pretty late in the day to secure their spots—not good for my heart!). For months, then weeks, then days, this thing was coming towards me. Then it was done. Now it’s behind me. It feels strange.

I’ve spent the past few days decompressing and thinking back on the event. My initial impression of it has solidified with the addition of some rumination—it was really, really good! The best yet.

I wish I could take the credit for that, but it was all down to the fantastic speakers and my wonderful colleagues who kept things moving flawlessly. All I had to do was get up and stage and introduce the speakers. Easy peasy.

I will say that I am very proud of the line-up I put together. I had a nice mix of well-known voices alongside newcomers.

With some of the speakers, I knew that they’d deliver the goods. I didn’t spend any time fretting over whether people like Emma Boulton, Tom Kerwin or Ben Sauer would be great. I never asked myself whether Brad Frost would have valuable insights into design systems. I mean, does the pope shit in the woods?

But what really blew me away were the people I didn’t know. I hadn’t even met Clarissa Gardner or Benaz Irani before UX London. They’re not exactly fixtures on the conference circuit …yet. They should be. Seriously, I go to a lot of events, and I see a lot of talks, so I don’t offer my praise lightly. Their talks were great!

There were numerous times during UX London 2024 when I thought “More people need to see this!” More people need to see Benaz’s superb talk on the designer alter-ego. More people need to see John’s superb presentation—he put a ton of work into it and it really paid off.

And everyone needs to hear Harry’s blistering call-to-arms. His presentation was brilliant and much-needed. Oh, captain, my captain!

Oh, and needless to say, the closing keynotes on each day were just perfect. Rama, Matt, and Maggie bestowed so much great brain food, it was almost like a mini dConstruct.

I’m so grateful to all the speakers for really bringing their A game. I’m grateful to all my colleagues, especially Louise, who did all the hard work behind the scenes. And I’m really grateful to everyone who came and enjoyed UX London 2024.

Thank you.

Saturday, June 15th, 2024

The machine stops

Large language models have reaped our words and plundered our books. Bryan Vandyke:

Turns out, everything on the internet—every blessed word, no matter how dumb or benighted—has utility as a learning model. Words are the food that large language algorithms feed upon, the scraps they rely on to grow, to learn, to approximate life. The LLNs that came online in recent years were all trained by reading the internet.

We can shut the barn door—now that the horse has pillaged—by updating our robots.txt files or editing .htaccess. That might protect us from the next wave, ’though it can’t undo what’s already been taken without permission. And that’s assuming that these organisations—who have demonstrated a contempt for ethical thinking—will even respect robots.txt requests.

I want to do more. I don’t just want to prevent my words being sucked up. I want to throw a spanner in the works. If my words are going to be snatched away, I want them to be poison pills.

The weakness of large language models is that their data and their logic come from the same source. That’s what makes prompt injection such a thorny problem (and a well-named neologism—the comparison to SQL injection is spot-on).

Smarter people than me are coming up with ways to protect content through sabotage: hidden pixels in images; hidden words on web pages. I’d like to implement this on my own website. If anyone has some suggestions for ways to do this, I’m all ears.

If enough people do this we’ll probably end up in an arms race with the bots. It’ll be like reverse SEO. Instead of trying to trick crawlers into liking us, let’s collectively kill ’em.

Who’s with me?

Wednesday, June 12th, 2024

Web App install API

My bug report on Apple’s websites-in-the-dock feature on desktop has me thinking about how starkly different it is on mobile.

On iOS if you want to add a website to your home screen, good luck. The option is buried within the “share” menu.

First off, it makes no sense that adding something to your homescreen counts as sharing. Secondly, how is anybody supposed to know that unless they’re explicitly told.

It’s a similar situation on Android. In theory you can prompt the user to install a progressive web app using the botched BeforeInstallPromptEvent. In practice it’s a mess. What it actually does is defer the installation prompt so you can offer it a more suitable time. But it only works if the browser was going to offer an installation prompt anyway.

When does Chrome on Android decide to offer the installation prompt? It’s a mix of required criteria—a web app manifest, some icons—and an algorithmic spell determined by the user’s engagement.

Other browser makers don’t agree with this arbitrary set of criteria. They quite rightly say that a user should be able to add any website to their home screen if they want to.

What we really need is an installation API: a way to programmatically invoke the add-to-homescreen flow.

Now, I know what you’re going to say. The security and UX implications would be dire. But this should obviously be like geolocation or notifications, only available in secure contexts and gated by user interaction.

Think of it like adding something to the clipboard: it’s something the user can do manually, but the API offers a way to do it programmatically without opening it up to abuse.

(I’d really love it if this API also had a declarative equivalent, much like I want button type="share" for the Web Share API. How about button type="install"?)

People expect this to already exist.

The beforeinstallprompt flow is an absolute mess. Users deserve better.

Space dock

Apple announced some stuff about artificial insemination at their WorldWide Developer Conference, none of which interests me one whit. But we did get a twitch of the webkit curtains to let us know what’s coming in Safari. That does interest me.

I’m really pleased to see that on desktop, websites that have been added to the dock will be able to intercept links for that domain:

Now, when a user clicks a link, if it matches the scope of a web app that the user has added to their Dock, that link will open in the web app instead of their default web browser.

Excellent! This means that if I click on a link to thesession.org from, say, my Mastodon site-in-the-dock, it will open in The Session site-in-the-dock. Make sure you’ve got the scope property set in your web app manifest.

I have a few different sites added to my dock: The Session, Mastodon, Google Calendar. Sure beats the bloat of Electron apps.

I have encountered a small bug. I’ll describe it here because I have no idea where to file it.

It’s to do with Spaces, Apple’s desktop management thingy. Maybe they don’t call it Spaces anymore. Maybe it’s called Mission Control now. Or Stage Manager. I can’t keep track.

Anyway, here are the steps to reproduce:

  1. In Safari on Mac, go to a website like adactio.com
  2. From either the File menu or the share icon, select Add to dock.
  3. Click on the website’s icon in the dock to open it.
  4. Using Apple’s desktop management (Spaces?) available through the F3 key, drag that window to a desktop other than desktop 1.
  5. Right click on the site’s icon in the dock and select Options, then Assign To, then This Desktop.
  6. Quit the app/website.
  7. Return to desktop 1.

Expected behaviour: when I click on the icon in the dock to open the site, it will open in the desktop that it has been assigned to.

Observed behaviour: focus moves to the desktop that the site has been assigned to, but it actually opens in desktop 1.

If someone from Apple is reading, I hope that’s useful.

On the one hand, I hope this isn’t one of those bugs that only I’m experiencing because then I’ll feel foolish. On the other hand, I hope this is one of those bugs that only I’m experiencing because then others don’t have to put up with the buggy behaviour.

Tuesday, June 11th, 2024

CSS Day 2024

My stint as one of the hosts of CSS Day went very well indeed. I enjoyed myself and people seemed to like the cut of my jib.

During the event there was a real buzz on Mastodon, which was heartening to see. I was beginning to worry that hashtagging events was going to be collatoral damage from Elongate, but there was plenty of conference-induced FOMO to be experienced on the fediverse.

The event itself was, as always, excellent. Both in terms of content and organisation.

Some themes emerged during CSS Day, which I always love to see. These emergent properties are partly down to curation and partly down to serendipity.

The last few years of CSS Day have felt like getting a firehose of astonishing new features being added to the language. There was still plenty of cutting-edge stuff this year—masonry! anchor positioning!—but there was also a feeling of consolidation, asking how to get all this amazing new stuff into our workflows.

Matthias’s opening talk on day one and Stephen’s closing talk on the same day complemented one another perfectly. Both managed to inspire while looking into the nitty-gritty practicalities of the web design process.

It was, astoundingly, Matthias’s first ever conference talk. I have no doubt it won’t be the last—it was great!

I gave Stephen a good-natured roast in my introduction, partly because it was his birthday, partly because we’re old friends, but mostly because it was enjoyable for me to watch him squirm. Of course his talk was, as always, superb. Don’t tell him, but he might be one of my favourite speakers.

The topic of graphic design tools came up more than once. It’s interesting to see how the issues with them have changed. It used to be that design tools—Photoshop, Sketch, Figma—were frustrating because they were writing cheques that CSS couldn’t cash. Now the frustration is the exact opposite. Our graphic design tools aren’t capable of the kind of fluid declarative design we can now accomplish in web browsers.

But the biggest rift remains not with tools or technologies, but with people and mindsets. Our tools can reinforce mindsets but the real divide happens in how different people approach CSS.

Both Josh and Kevin get to the heart of this in their tremendous tutorials, and that was reflected in their talks. They showed the difference between having the bare minimum understanding of CSS in order to get something done as quickly as possible, and truly understanding how CSS works in order to open up a world of possibilities.

For people in the first category, Sarah Dayan was there to sing the praises of utility-first CSS AKA atomic CSS. I commend her bravery!

During the Q&A, I restrained myself from being too Paxmanish. But I did have l’esprit d’escalier afterwards when I realised that the entire talk—and all the answers afterwards—depended on two mutually-incompatiable claims:

  1. The great thing about atomic CSS is that it’s a constrained vocabulary so your team has to conform, and
  2. The other great thing about it is that it’s utility-first, not utility-only so you can break out of it and use regular CSS if you want.

Insert .gif of character from The Office looking to camera.

Most of the questions coming in during the Q&A reflected my own take: how about we use utility classes for some things, but not all things. Seems sensible.

Anyway, regardless of what I or anyone else thinks about the substance of what Sarah was saying, there was no denying that it was a great presentation. They were all great presentations. That’s unusual, and I say that as a conference organiser as well as an attendee. Everyone brings their A-game to CSS Day.

Mind you, it is exhausting. I say it every year, but it always feels like one talk too many. Not that any individual talk wasn’t good, but the sheer onslaught of deep dives into the innards of CSS has my brain exploding before the day is done.

A highlight for me was getting to introduce Fantasai’s talk on the design principles of CSS, which was right up my alley. I don’t think most people realise just how much we owe her for her years of work on standards. The web would be in a worse place without the Herculean work she’s done behind the scenes.

Another highlight was getting to see some of the students I met back in March. They were showing some of their excellent work during the breaks. I find what they’re doing just as inspiring as the speakers on stage.

In fact, when I was filling in the post-conference feedback form, there was a question: “Who would you like to see speak at CSS Day next year?” I was racking my brains because everyone I could immediately think of has already spoken at some point. So I wrote, “It would be great to see some of those students speaking about their work.”

I think it would be genuinely fascinating to get their perspective on what we consider modern CSS, which to them is just CSS.

Either way I’ll back next year for sure.

It’s funny, but usually when a conference is described as “inspiring” it’s because it’s tackling big galaxy-brain questions. But CSS Day is as nitty-gritty as it gets and I found it truly inspiring. Like, I couldn’t wait to open up my laptop and start writing some CSS. That kind of inspiring.

Older »