Journal tags: code


Teaching and learning

Looking back on ten years of codebar Brighton, I’m remembering how much I got out of being a coach.

Something that I realised very quickly is that there is no one-size-fits-all approach to coaching. Every student is different so every session should adapt to that.

Broadly speaking I saw two kinds of students: those that wanted to get results on screen as soon as possible without worrying about the specifics, and those who wanted to know why something was happening and how it worked. In the first instance, you get to a result as quickly as possible and then try to work backwards to figure out what’s going on. In the second instance, you build up the groundwork of knowledge and then apply it to get results.

Both are equally valid approaches. The only “wrong” approach as a coach is to try to apply one method to someone who’d rather learn the other way.

Personally, I always enjoyed the groundwork-laying of the second approach. But it comes with challenges. Because the results aren’t yet visible, you have to do extra work to convey why the theory matters. As a coach, you need to express infectious enthusiasm.

Think about the best teachers you had in school. I’m betting they displayed infectious enthusiasm for the subject matter.

The other evergreen piece of advice is to show, don’t tell. Or at the very least, intersperse your telling with plenty of showing.

Bret Victor demonstrates this when he presents scientific communication as sequential art:

This page presents a scientific paper that has been redesigned as a sequence of illustrations with captions. This comic-like format, with tightly-coupled pictures and prose, allows the author to depict and describe simultaneously — show and tell.

It works remarkably well. I remember how well the same format worked when Google first launched their Chrome web browser. They released a 40-page comic book illustrated by Scott McCloud. There is no way I would’ve read a document that long about how browser engines work, but I read that comic cover to cover.

This visual introduction to machine learning is another great example of simultaneous showing and telling.

So showing augments telling. But interactivity can augment showing.

Here are some great examples of interactive explainers:

Lea describes what can happen when too much theory comes before practice:

Observing my daughter’s second ever piano lesson today made me realize how this principle extends to education and most other kinds of knowledge transfer (writing, presentations, etc.). Her (generally wonderful) teacher spent 40 minutes teaching her notation, longer and shorter notes, practicing drawing clefs, etc. Despite his playful demeanor and her general interest in the subject, she was clearly distracted by the end of it.

It’s easy to dismiss this as a 5 year old’s short attention span, but I could tell what was going on: she did not understand why these were useful, nor how they connect to her end goal, which is to play music.

The codebar website has some excellent advice for coaches, like:

  • Do not take over the keyboard! This can be off-putting and scary.
  • Encourage the students to type and not copy paste.
  • Explain that there are no bad questions.
  • Explain to students that it’s OK to make mistakes.
  • Assume that anyone you’re teaching has no knowledge but infinite intelligence.

Notice how so much of the advice focuses on getting the students to do things, rather than have them passively sit and absorb what the coach has to say.

Lea also gives some great advice:

  1. Always explain why something is useful. Yes, even when it’s obvious to you.
  2. Minimize the amount of knowledge you convey before the next opportunity to practice it. For non-interactive forms of knowledge transfer (e.g. a book), this may mean showing an example, whereas for interactive ones it could mean giving the student a small exercise or task.
  3. Prefer explaining in context rather than explaining upfront.

It’s interesting that Lea highlights the advantage of interactive media like websites over inert media like books. The canonical fictional example of an interactive explainer is the Young Lady’s Illustrated Primer in Neal Stephenson’s novel The Diamond Age. Andy Matuschak describes its appeal:

When it wants to introduce a conceptual topic, it begins with concrete hands-on projects: Turing machines, microeconomics, and mitosis are presented through binary-coding iron chains, the cipher’s market, and Nell’s carrot garden. Then the Primer introduces extra explanation just-in-time, as necessary.

That’s not how learning usually works in these domains. Abstract topics often demand that we start with some necessary theoretical background; only then can we deeply engage with examples and applications. With the Primer, though, Nell consistently begins each concept by exploring concrete instances with real meaning to her. Then, once she’s built a personal connection and some intuition, she moves into abstraction, developing a fuller theoretical grasp through the Primer’s embedded books.

(Andy goes on to warn of the dangers of copying the Primer too closely. Its tricks verge on gamification, and its ultimate purpose isn’t purely to educate. There’s a cautionary tale there about the power dynamics in any teacher/student relationship.)

There’s kind of a priority of constituencies when it comes to teaching:

Consider interactivity over showing over telling.

Thinking back on all the talks I’ve given, I start to wonder if I’ve been doing too much telling and showing, but not nearly enough interacting.

Then again, I think that talks aren’t quite the same as hands-on workshops. I think of giving a talk as being more like a documentarian. You need to craft a compelling narrative, and illustrate what you’re saying as much as possible, but it’s not necessarily the right arena for interactivity.

That’s partly a matter of scale. It’s hard to be interactive with every person in a large audience. Marcin managed to do it but that’s very much the exception.

Workshops are a different matter though. When I’m recruiting hosts for UX London workshops I always encourage them to be as hands-on as possible. A workshop should not be an extended talk. There should be more exercises than talking. And wherever possible those exercises should be tactile, ideally not sitting in front of a computer.

My own approach to workshops has changed over the years. I used to prepare a book’s worth of material to have on hand, either as one giant slide deck or multiple decks. But I began to realise that the best workshops are the ones where the attendees guide the flow, not me.

So now I show up to a full-day workshop with no slides. But I’m not unprepared. I’ve got decades of experience (and links) to apply during the course of the day. It’s just that instead of trying to anticipate which bits of knowledge I’m going to need to convey, I apply them in a just-in-time manner as and when they’re needed. It’s kind of scary, but as long as there’s a whiteboard to hand, or some other way to illustrate what I’m telling, it works out great.

Codebar Brighton

I went to codebar Brighton yesterday evening. I hadn’t been in quite a while, but this was a special occasion: a celebration of codebar Brighton’s tenth anniversary!

The Brighton chapter of codebar was the second one ever, founded six months after the initial London chapter. There are now 33 chapters all around the world.

Clearleft played host to that first ever codebar in Brighton. We had already been hosting local meetups like Async in our downstairs event space, so we were up for it when Rosa, Dot, and Ryan asked about having codebar happen there.

In fact, the first three Brighton codebars were all at 68 Middle Street. Then other places agreed to play host and it moved to a rota system, with the Clearleft HQ as just one of the many Brighton venues.

With ten years of perspective, it’s quite amazing to see how many people went from learning to code in the evenings, to getting jobs in web development, and becoming codebar coaches themselves. It’s a really wonderful community.

Over the years the baton of organising codebar has been passed on to a succession of fantastic people. These people are my heroes.

It worked out well for Clearleft too. Thanks to codebar, we hired Charlotte. Later we hired Cassie. And it was thanks to codebar that I first met Amber.

Codebar Brighton has been very, very good to me. Here’s to the next ten years!

Schooltijd

I was in Amsterdam last week. Usually I’m in that city for an event like the excellent CSS Day. Not this time. I was there as a guest of Vasilis. He invited me over to bother his students at the CMD (Communications and Multimedia Design) school.

There’s a specific module his students are taking that’s right up my alley. They’re given a PDF inheritance-tax form and told to convert it for the web.

Yes, all the excitement of taxes combined with the thrilling world of web forms.

Seriously though, I genuinely get excited by the potential for progressive enhancement here. Sure, there’s the obvious approach of building in layers; HTML first, then CSS, then a sprinkling of JavaScript. But there’s also so much potential for enhancement within each layer.

Got your form fields marked up with the right input types? Great! Now what about autocomplete, inputmode, or pattern attributes?

Got your styles all looking good on the screen? Great! Now what about print styles?

Got form validation working? Great! Now how might you use local storage to save data locally?
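
To take that last example, here’s a minimal sketch of saving form data to local storage as the user types, and restoring it when the page loads (the form id and storage key are made up):

const form = document.querySelector('#tax-form');
const storageKey = 'tax-form-draft';

// Restore any previously saved values when the page loads
const saved = JSON.parse(localStorage.getItem(storageKey) || '{}');
for (const [name, value] of Object.entries(saved)) {
  if (form.elements[name]) form.elements[name].value = value;
}

// Save the current values whenever a field changes
form.addEventListener('input', () => {
  const data = Object.fromEntries(new FormData(form));
  localStorage.setItem(storageKey, JSON.stringify(data));
});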

As well as taking this practical module, most of the students were also taking a different module looking at creative uses of CSS, like making digital fireworks, or creating works of art with a single div. It was fascinating to see how the different students responded to the different tasks. Some people loved the creative coding and dreaded the progressive enhancement. For others it was exactly the opposite.

Having to switch gears between modules reminded me of switching between prototypes and production:

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

Here’s something I noticed: the students love using :has() in CSS. That’s so great to see! Whereas I might think about how to do something for a few minutes before I think of reaching for :has(), they’ve got it front of mind. I’m jealous!

In general, their challenges weren’t with the vocabulary or syntax of HTML, CSS, and JavaScript. The more universal problem was project management. Where to start? What order to do things in? How long to spend on different tasks?

If you can get good at dealing with those questions and not getting overwhelmed, then the specifics of the actual coding will be easier to handle.

This was particularly apparent when it came to JavaScript, the layer of the web stack that was scariest for many of the students.

I encouraged them to break their JavaScript enhancements into two tasks: what you want to do, and how you then execute that.

Start by writing out the logic of your script not in JavaScript, but in whatever language you’re most comfortable with: English, Dutch, whatever. In the course of writing this down, you’ll discover and solve some logical issues. You can also run your plain-language plan past a peer to sense-check it.

It’s only then that you move on to translating your logic into JavaScript. Under each line of English or Dutch, write the corresponding JavaScript. You might as well put // in front of the plain-language sentence while you’re at it to make it a comment—now you’ve got documentation baked in.
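
For example, the plan for one small enhancement might end up looking something like this (the class names are made up for illustration):

// Find the button that toggles the extra help text.
const button = document.querySelector('.help-toggle');
// When the user clicks that button…
button.addEventListener('click', () => {
  // …find the help text right after it…
  const helpText = button.nextElementSibling;
  // …and show it if it’s hidden, or hide it if it’s showing.
  helpText.hidden = !helpText.hidden;
});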

You’ll still run into problems at this point, but they’ll be the manageable problems of syntax and typos.

So in the end, it wasn’t my knowledge of specific HTML, CSS, or JavaScript APIs that proved most useful to pass on to the students. It was advice like that about how to approach HTML, CSS, or JavaScript.

I also learned a lot during my time at the school. I had some very inspiring conversations with the web developers of tomorrow. And I was really impressed by how much the students got done just in the three days I was hanging around.

I’d love to do it again sometime.

Federation syndication

I’m quite sure this is of no interest to anyone but me, but I finally managed to fix a longstanding weird issue with my website.

I realise that me telling you about a bug specific to my website is like me telling you about a dream I had last night—fascinating for me; incredibly dull for you.

For some reason, my site was being brought to its knees anytime I syndicated a note to Mastodon. I rolled up my sleeves to try to figure out what the problem could be. I was fairly certain the problem was with my code—I’m not much of a back-end programmer.

My tech stack is classic LAMP: Linux, Apache, MySQL and PHP. When I post a note, it gets saved to my database. Then I make a curl request to the Mastodon API to syndicate the post over there. That’s when my CPU starts climbing and my server gets all “bad gateway!” on me.

After spending far too long pulling apart my PHP and curl code, I had to come to the conclusion that I was doing nothing wrong there.

I started watching which processes were making the server fall over. It was MySQL. That seemed odd, because I’m not doing anything too crazy with my database reads.

Then I realised that the problem wasn’t any particular query. The problem was volume. But it only happened when I posted a note to Mastodon.

That’s when I had a lightbulb moment about how the fediverse works.

When I post a note to Mastodon, it includes a link back to the original note on my site. At this point Mastodon does its federation magic and starts spreading the post to all the instances subscribed to my account. And every single one of them follows the link back to the note on my site …all at the same time.

This isn’t a problem when I syndicate my blog posts, because I’ve got a caching mechanism in place for those. I didn’t think I’d need any caching for little ol’ notes. I was wrong.

A simple solution would be not to include the link back to the original note. But I like the reminder that what you see on Mastodon is just a copy. So now I’ve got the same caching mechanism for my notes as I do for my journal (and I did my links while I was at it). Everything is hunky-dory. I can syndicate to Mastodon with impunity.

See? I told you it would only be of interest to me. Although I guess there’s a lesson here. Something something caching.

Indie Web Camp Nuremberg

After two days at border:none in Nuremberg, it was time for two days at Indie Web Camp, also in Nuremberg.

I hadn’t been to an Indie Web Camp since before The Situation. It felt very good to be back. I had almost forgotten how inspiring and productive they can be.

This one had a good turnout of around twenty people. We had ourselves an excellent first day of thought-provoking sessions. Then on day two it was time to put some of those ideas into action.

A little trick I like to do on the practical day is to have two tasks to attempt: one of them quite simple, and the other more ambitious. That way, as long as I get the simpler task done, I’ll always have at least something to demo at the end of the day.

This time I attempted three bits of home improvement on my website.

Autolinking Mastodon usernames

The first problem I set myself was ostensibly the simple one. But it involved regular expressions, so then I had two problems.

I wanted to automatically link up Mastodon usernames if I mentioned one in my notes. For example, during border:none I mentioned Brian’s Mastodon username in a note: @briansuda@loðfíll.is.

That turned out to be an excellent test case. Those Icelandic characters made sure I wasn’t making unwarranted assumptions about character sets.

Here’s the regular expression I came up with. It’s not foolproof by any means. Basically it looks for @something@something.something.
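
Something along these lines does the job. It’s a rough sketch rather than the exact expression, and certainly not bulletproof:

// Look for @username@domain.tld and wrap it in a link to the profile.
// The character classes are deliberately loose so that non-ASCII domains still match.
function autolinkMastodon(text) {
  return text.replace(
    /@([^\s@]+)@([^\s@]+\.[^\s@]+)/g,
    '<a href="https://$2/@$1">@$1@$2</a>'
  );
}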

Good enough. Ship it.

Related posts

My next task was a bit more ambitious. It involved SQL queries, something I’m slightly better at than regular expressions but that’s a very low bar.

I wanted to show related posts when you get to the end of one of my blog posts.

I’ve been tagging all my blog posts for years so that’s the mechanism I used for finding similar posts. There’s probably a clever SQL statement that could do this, but I ended up brute-forcing it a bit.

I don’t feel too bad about the hacky clunky nature of my solution, because I cache blog post pages. That means only the first person to view the blog post (usually me) will suffer any performance impacts from my clunky database queries. After that everything’s available straight from a cached file.

Let’s say you’re reading a blog post of mine that I’ve tagged with ten different keywords. I make a separate SQL query for each keyword to get all the other posts that use that tag. Then it’s a matter of sorting through all the results.

I loop through the results of each tag and apply a score to the tagged post. If the post shares one tag with the post you’re looking at, it has a score of one. If it shares two tags, it has a score of two, and so on.

I decided that for a post to be considered related, it had to share at least three tags. I also decided to limit the list of related posts to a maximum of five.
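
Expressed as a sketch (the real thing is PHP and SQL, but the logic is the same), the scoring looks something like this:

// postsByTag maps each of the current post's tags to the other posts using that tag.
function getRelatedPosts(postsByTag) {
  const scores = {};
  for (const tag of Object.keys(postsByTag)) {
    for (const post of postsByTag[tag]) {
      // One point for every tag a post shares with the current post
      scores[post] = (scores[post] || 0) + 1;
    }
  }
  return Object.entries(scores)
    .filter(([post, score]) => score >= 3) // must share at least three tags
    .sort((a, b) => b[1] - a[1]) // most tags in common first
    .slice(0, 5) // a maximum of five related posts
    .map(([post]) => post);
}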

It worked out pretty well. If you scroll down on my recent post about JavaScript, you’ll see links to related posts about JavaScript. If you read through a post on accessibility testing, you’ll find other posts about accessibility testing. If you make it to the end of this post about Mars colonisation you’ll see links to more posts about exploring our solar system.

Right now I’m just doing this for my blog but I’d like to do it for my links too. A job for a future Indie Web Camp.

Link rot

I was very inspired by Remy’s recent post on how he’s tackling link rot on his site. I wanted to do the same for mine.

On the first day at Indie Web Camp I led a session on link rot to gather ideas and alternative approaches. We had a really good discussion, though it’s always worth bearing in mind that there’ll never be a perfect solution. There’ll always be some false positives and some false negatives.

The other Jeremy at Indie Web Camp Nuremberg blogged about the session. Sebastian Greger was attending remotely, and the session inspired him to spend the second day also tackling link rot.

In the end I decided to stick with Remy’s two-pronged approach:

  1. a client-side script that—as a progressive enhancement—intercepts outbound links and re-routes them to
  2. a server-side script that redirects to the Internet Archive if the link is broken.

Here’s the JavaScript I wrote for the first part.

It’s very similar to Remy’s but with one little addition. I check to see if the clicked link is inside an h-entry and if it is, I pass on the date from the post’s dt-published value.
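
The gist of it is something like this simplified sketch (the name of the server-side endpoint is illustrative):

document.addEventListener('click', (event) => {
  const link = event.target.closest('a[href^="http"]');
  // Ignore clicks that aren't on outbound links
  if (!link || link.href.startsWith(location.origin)) return;
  // If the link is inside an h-entry, grab the post's published date
  const entry = link.closest('.h-entry');
  const published = entry ? entry.querySelector('.dt-published')?.getAttribute('datetime') : null;
  let url = '/linkrot.php?url=' + encodeURIComponent(link.href);
  if (published) url += '&date=' + encodeURIComponent(published);
  event.preventDefault();
  window.location = url;
});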

Here’s the PHP I wrote for the server-side redirector. The comments tell the story of what the code is doing:

  • Check that the request is coming from my site.
  • There also has to be a URL provided in the query string.
  • Make a very quick curl request to get the response headers from the URL. The time limit is set to 1 second.
  • If there was any error (like a time out), give up and go to the URL.
  • Pick the response headers apart to get the HTTP status code.
  • If the response is OK, go to the URL.
  • If the response is a redirect, go around again but this time use the redirect URL.
  • Construct the archive.org search endpoint.
  • If we have a date, provide it. Otherwise ask for the latest snapshot.
  • Ping that archive.org URL. This time there’s no time limit; this might take a while.
  • If there’s an archived copy, redirect to that.
  • There’s no archived copy. Give up and go to the URL anyway.

Not perfect by any means, but it works for the most common cases of link rot.

For the demo at the end of the day I went back into my archive of over 10,000 links and plucked out some old posts, like this one from December 2005. It takes a little while to do the rerouting but eventually you get to see the archived version from the same time period as when I linked to it.

Here’s another link from 2005. Here’s another. Those links are broken now, but with a little patience, you’ll still get to read them on the Internet Archive.

The Internet Archive’s Wayback Machine really is a gift. I can’t imagine how it would be even remotely possible to address link rot on my site without archive.org.

I will continue to donate money to the Internet Archive and I encourage you to do the same.

event.target.closest

Eric mentioned the JavaScript closest method. I use it all the time.

When I wrote the book DOM Scripting back in 2005, I’d estimate that 90% of the JavaScript I was writing boiled down to:

  1. Find these particular elements in the DOM and
  2. When the user clicks on one of them, do something.

It wasn’t just me either. I reckon that was 90% of most JavaScript on the web: progressive disclosure widgets, accordions, carousels, and so on.

That’s one of the reasons why jQuery became so popular. That first step (“find these particular elements in the DOM”) used to be a finicky affair involving getElementsByTagName, getElementById, and other long-winded DOM methods. jQuery came along and allowed us to use CSS selectors.

These days, we don’t need jQuery for that because we’ve got querySelector and querySelectorAll (and we can thank jQuery for their existence).

Let’s say you want to add some behaviour to every button element with a class of special. Or maybe you use a data- attribute instead of the class attribute; the same principle applies. You want something special to happen when the user clicks on one of those buttons.

  1. Use querySelectorAll('button.special') to get a list of all the right elements,
  2. Loop through the list, and
  3. Attach addEventListener('click') to each element.
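
In code, that approach looks something like this:

// Attach a separate listener to every special button on the page
const buttons = document.querySelectorAll('button.special');
buttons.forEach((button) => {
  button.addEventListener('click', doSomethingSpecial, false);
});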

That’s fine for a while. But if you’ve got a lot of special buttons, you’ve now got a lot of event listeners. You might be asking the browser to do a lot of work.

There’s another complication. The code you’ve written runs once, when the page loads. Suppose the contents of the page have changed in the meantime. Maybe elements are swapped in and out using Ajax. If a new special button shows up on the page, it won’t have an event handler attached to it.

You can switch things around. Instead of adding lots of event handlers to lots of elements, you can add one event handler to the root element. Then figure out whether the element that just got clicked is special or not.

That’s where closest comes in. It makes this kind of event handling pretty straightforward.

To start with, attach the event listener to the document:

document.addEventListener('click', doSomethingSpecial, false);

That function doSomethingSpecial will be executed whenever the user clicks on anything. Meanwhile, if the contents of the document are updated via Ajax, no problem!

Use the closest method in combination with the target property of the event to figure out whether that click was something you’re interested in:

function doSomethingSpecial(event) {
  if (event.target.closest('button.special')) {
    // do something
  }
}

There you go. Like querySelectorAll, the closest method takes a CSS selector—thanks again, jQuery!

Oh, and if you want to reduce the nesting inside that function, you can reverse the logic and return early like this:

function doSomethingSpecial(event) {
  if (!event.target.closest('button.special')) return;
  // do something
}

There’s a similar method to closest called matches. But matches will only work if the user clicks directly on the element you’re interested in. If the click lands on an element nested inside the one you’re interested in, matches won’t match it, but closest will.
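
To make the difference concrete, suppose the special button has a span inside it and the user clicks right on that span:

// Markup: <button class="special"><span>Save</span></button>
function doSomethingSpecial(event) {
  console.log(event.target.matches('button.special')); // false: the target is the span
  console.log(event.target.closest('button.special')); // the button element itself
}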

Like Eric said:

Very nice.

Automation

I just described prototype code as code to be thrown away. On that topic…

I’ve been observing how people are programming with large language models and I’ve seen a few trends.

The first thing that just about everyone agrees on is that the code produced by a generative tool is not fit for public consumption. At least not straight away. It definitely needs to be checked and tested. If you enjoy debugging and doing code reviews, this might be right up your street.

The other option is to not use these tools for production code at all. Instead use them for throwaway code. That could be prototyping. But it could also be the code for those annoying admin tasks that you don’t do very often.

Take content migration. Say you need to grab a data dump, do some operations on the data to transform it in some way, and then pipe the results into a new content management system.

That’s almost certainly something you’d want to automate with bespoke code. Once the content migration is done, the code can be thrown away.

Read Matt’s account of coding up his Braggoscope. The code needed to spider a thousand web pages, extract data from those pages, find similarities, and output the newly-structured data in a different format.

I’ve noticed that these are just the kind of tasks that large language models are pretty good at. In effect you’re training the tool on your own very specific data and getting it to do your drudge work for you.

To me, it feels right that the usefulness happens on your own machine. You don’t put the machine-generated code in front of other humans.

Coding prototypes

We do quite a bit of prototyping at Clearleft. There’s no better way to reduce risk than to get something in front of users as quickly as possible to test whether you’re on the right track or not.

As Benjamin said in the podcast episode on prototyping:

It’s something to look at, something to prod. And ideally you’re trying to work out what works and what doesn’t.

Sometimes the prototype is mocked up in Figma. Preferably it’s built in code—HTML, CSS, and JavaScript. Having a prototype built in the materials of the medium helps establish a plausible suspension of disbelief during testing.

Also, as Trys said in that same podcast episode:

Prototypical code isn’t production code. It’s quick and it’s often a little bit dirty and it’s not really fit for purpose in that final deliverable. But it’s also there to be inspiring and to gather a team and show that something is possible.

I can’t reiterate that enough: prototype code isn’t production code.

I’ve written about the two different mindsets before:

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Addy recently wrote an excellent blog post on the topic of prototyping. The value of a prototype is in the insight it imparts, not the code.

It’s crucial to remember that in a prototype, the code serves merely as a medium—a way to facilitate understanding. It’s a means to an end, not the end itself. The code of a prototype is disposable and mutable. In contrast, the lessons learned from a prototype, the insights gained from user interaction and feedback, are far more durable and impactful.

This!

It can be tempting to re-use code from a prototype. I get it. It seems like a waste to throw away code and build something from scratch. But trust me—and I speak from experience here—it will take more time to wrangle prototype code into something that’s production-ready.

The problem is that quality is often invisible. Think about semantics, performance, security, privacy, and accessibility. Those matter—for production code—but they’re under the surface. For someone who doesn’t understand the importance of those hidden qualities, a prototype that looks like it works seems ready to ship. It’s understandable that they’d balk at the idea of just throwing that code away and writing new code. Sometimes the suspension of disbelief that a prototype is aiming for works too well.

As is so often the case, this isn’t a technical problem. It’s a communication issue.

Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channeled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines did that for the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”

Playing chatbots off against each other like this is kinda how machine learning works under the hood: generative adversarial networks.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.

Web Audio API update on iOS

I documented a weird bug with web audio on iOS a while back:

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

I found a workaround but it was really hacky. By playing a one-second long silent mp3 file using audio, you could “kick” the sound into behaving. Then you can use the Web Audio API and it would play consistently.
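
The workaround looked something like this (the name of the silent file is illustrative):

// Play a second of silence with an audio element first…
const silence = new Audio('/sounds/silence.mp3');
silence.play().then(() => {
  // …and then the Web Audio API would behave itself
  const context = new AudioContext();
  // …set up and play the real audio here
});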

Well, that’s all changed with the latest release of Mobile Safari. Now what happens is that the Web Audio stuff plays …for one second. And then stops.

I removed the hacky workaround and the Web Audio API started behaving itself again …but your device can’t be set to silent.

The good news is that the Web Audio behaviour seems to be consistent now. It only plays if the device isn’t muted. This restriction doesn’t apply to video and audio elements; they will still play even if your device is set to silent.

This discrepancy between the two different ways of playing audio is kind of odd, but at least now the Web Audio behaviour is predictable.

You can hear the Web Audio API in action by going to any tune on The Session and pressing the “play audio” button.

In between

I was chatting with my new colleague Alex yesterday about a link she had shared in Slack. It was the Nielsen Norman Group’s annual State of Mobile User Experience report.

There’s nothing too surprising in there, other than the mention of Apple’s app clips and Google’s instant apps.

Remember those?

Me neither.

Perhaps I lead a sheltered existence, but as an iPhone user, I don’t think I’ve come across a single app clip in the wild.

I remember when they were announced. I was quite worried about them.

See, the one thing that the web can (theoretically) offer that native can’t is instant access to a resource. Go to this URL—that’s it. Whereas for a native app, the flow is: go to this app store, find the app, download the app.

(I say that the benefit is theoretical because the website found at the URL should download quickly—the reality is that the bloat of “modern” web development imperils that advantage.)

App clips—and instant apps—looked like a way to route around the convoluted install process of native apps. That’s why I was nervous when they were announced. They sounded like a threat to the web.

In reality, the potential was never fulfilled (if my own experience is anything to go by). I wonder why people didn’t jump on app clips and instant apps?

Perhaps it’s because what they promise isn’t desirable from a business perspective: “here’s a way for users to accomplish their tasks without downloading your app.” Even though app clips can in theory be a stepping stone to installing the full app, from a user’s perspective, their appeal is the exact opposite.

Or maybe they’re just too confusing to understand. I think there’s another technology that suffers from the same problem: progressive web apps.

Hear me out. Progressive web apps are—if done well—absolutely amazing. You get all of the benefits of native apps in terms of UX—they even work offline!—but you retain the web’s frictionless access model: go to a URL; that’s it.

So what are they? Are they websites? Yes, sorta. Are they apps? Yes, sorta.

That’s confusing, right? I can see how app clips and instant apps sound equally confusing: “you can use them straight away, like going to a web page, but they’re not web pages; they’re little bits of apps.”

I’m mostly glad that app clips never took off. But I’m sad that progressive web apps haven’t taken off more. I suspect that their fates are intertwined. Neither suffers from technical limitations. The problem they both face is inertia:

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

True of progressive web apps. Equally true of app clips.

But when I was chatting to Alex, she made me look at app clips in a different way. She described a situation where somebody might need to interact with some kind of NFC beacon from their phone. Web NFC isn’t supported in many browsers yet, so you can’t rely on that. But you don’t want to make people download a native app just to have a quick interaction. In theory, an app clip—or instant app—could do the job.

In that situation, app clips aren’t a danger to the web—they’re polyfills for hardware APIs that the web doesn’t yet support!

I love having my perspective shifted like that.

The specific situations that Alex and I were discussing were in the context of museums. Museums offer such interesting opportunities for the physical and the digital to intersect.

Remember the pen from Cooper Hewitt? Aaron spoke about it at dConstruct 2014—a terrific presentation that’s well worth revisiting and absorbing.

The other dConstruct talk that’s very relevant to this liminal space between the web and native apps is the 2012 talk from Scott Jenson. I always thought the physical web initiative had a lot of promise, but it may have been ahead of its time.

I loved the thinking behind the physical web beacons. They were deliberately dumb, much like the internet itself. All they did was broadcast a URL. That’s it. All the smarts were to be found at the URL itself. That meant a service could get smarter over time. It’s a lot easier to update a website than swap out a piece of hardware.

But any kind of technology that uses Bluetooth, NFC, or other wireless technology has to get over the discovery problem. They’re invisible technologies, so by default, people don’t know they’re even there. But if you make them too discoverable—intrusively announcing themselves like one of the commercials in Minority Report—then they’re indistinguishable from spam. There’s a sweet spot of discoverability right in the middle that’s hard to get right.

Over the past couple of years—accelerated by the physical distancing necessitated by The Situation—QR codes stepped up to the plate.

They still suffer from some discoverability issues. They’re not human-readable, so you can’t be entirely sure that the URL you’re going to go to isn’t going to be a Rick Astley video. But they are visible, which gives them an advantage over hidden wireless technologies.

They’re cheaper too. Printing a QR code sticker costs less than getting a plastic beacon shipped from China.

QR codes turned out to be just good enough to bridge the gap between the physical and digital for those one-off interactions like dining outdoors during a pandemic:

I can see why they chose the web over a native app. Online ordering is the only way to place your order at this place. Telling people “You have to go to this website” …that seems reasonable. But telling people “You have to download this app” …that’s too much friction.

Ironically, the nail in the coffin for app clips and instant apps might’ve been hammered in by Apple and Google when they built QR-code recognition into their camera software.

Syndicating to Mastodon

I’ve been contemplating a checkbox. The label for this checkbox reads:

This is a bot account

Let me back up…

In what seems like decades ago, but was in fact just a few weeks, Elon Musk bought Twitter and began burning it to the ground. His admirers insist he’s playing some form of four-dimensional chess, but to the rest of us, his actions are indistinguishable from a spoilt rich kid not understanding what a social network is.

It wasn’t giving me much cause for anguish personally. For the past eight years, I’ve only used Twitter as a syndication endpoint for my own notes. But I understand that’s a very privileged position to be in. Most people on Twitter don’t have the same luxury of independence. It’s genuinely maddening and saddening to see their years of sharing destroyed by one cruel idiot.

Lots of people started moving to Mastodon. I figured I should do the same for my syndicated notes.

At first, I signed up for an account on mastodon.cloud. No particular reason. But that’s where I saw this very insightful post from Anil Dash:

When it came time to reckon with social media’s failings, nobody ran to the “web3” platforms. Nobody asked “can I get paid per message”? Nobody asked about the blockchain. The community of people who’ve been quietly doing this work for years (decades!) ended up being the ones who welcomed everyone over, as always.

I was getting my account all set up and beginning to follow some other folks, when I realised that I actually already had an existing account over on mastodon.social. Doh! Turns out that I signed up back in 2017 to kick the tyres, but never did much else because there weren’t many other people around back then. Oh, how times have changed!

Anyway, I thought I had really screwed up by having two accounts but this turned out to be an opportunity to experience some of the thoughtfulness in Mastodon’s design. The process of migrating from one Mastodon account to another—on a completely different instance—was very smooth! It was clear that this wasn’t an afterthought. This is an essential part of the fediverse and the design of the migration flow reflects that.

This gives me enormous peace of mind. If I ever want to switch to a different instance and still keep my network intact, I know it won’t be a problem. Mastodon is like the opposite of the roach-motel mentality that permeates most VC-backed so-called social networks.

As I played around some more—reading, following, exploring—my feelings of fondness only grew stronger. I like this place a lot!

I definitely wanted to syndicate my notes to Mastodon. At first, I implemented a straightforward RSS-to-Mastodon syndication using IFTTT (IF This, Then That), thanks to Matthias’s excellent tutorial.

But that didn’t feel quite right. When I syndicate to Twitter, I make a conscious choice each time. There’s a “Twitter” toggle that I can enable or disable in my posting interface. Mastodon deserved the same level of thoughtfulness.

So I switched off the IFTTT recipe and started exploring the Mastodon API. It’s going to sound like a humblebrag when I tell you that I got cross-posting working in almost no time at all, but that’s not a testament to my coding prowess (I’m really not very good), but rather a testament to the Mastodon API, which was a joy to work with.

  1. On your Mastodon instance, go to /settings/applications.
  2. Click on New Application.
  3. Fill in the details about your website and select write:statuses (and probably write:media) from the Scopes list.
  4. Copy Your access token to use in API calls.
  5. Write some sloppy code (in my case, PHP that uses CURL).
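
The core of it boils down to a single authenticated POST request to the statuses endpoint. Sketched with fetch rather than my actual PHP and curl, it looks something like this:

// Replace the instance and token with your own.
const instance = 'https://mastodon.social';
const accessToken = 'YOUR_ACCESS_TOKEN';

async function postToMastodon(text) {
  const response = await fetch(instance + '/api/v1/statuses', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + accessToken,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ status: text })
  });
  return response.json();
}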

I did hit a wall when it came to posting images. That took me a while to get working, and I couldn’t figure out why. Was it something at Mastodon’s end while it was struggling under the influx of new users? As it turns out, no. It was entirely down to me being an idiot. (You know that situation where you’re working on a problem for ages and you’ve become convinced it’s an extremely gnarly rocket-science problem, but then it turns out to be something stupid like a typo? Yeah. That.)

Then there’s the whole question of how to receive replies, likes, and reboosts from Mastodon here on my own site. Luckily, that was super easy, thanks to Brid.gy. One click and I was done. I love Brid.gy!

Take this note, for example. There’s a version on Twitter and a version on Mastodon. The original version on my own site gets responses from both places.

If I’m replying to a response on Twitter, I do not syndicate that to Mastodon.

Likewise, if I’m replying to a response on Mastodon, I do not syndicate that to Twitter.

Oh, one thing worth mentioning: if you’re sending a reply to something on Mastodon using the API, there’s an in_reply_to_id field for you to provide. But you should also include the full @username@instance of the person you’re replying to at the beginning of the message to ensure that it’s displayed as a reply rather than showing up as a regular post. Note the difference between this note on my site and its syndicated version on Mastodon.

Anyway, now I’m posting to Mastodon, but I’m doing it through the interface of my own website. Which brings me to that checkbox in Mastodon’s profile settings:

This is a bot account

The help text reads:

Signal to others that the account mainly performs automated actions and might not be monitored

If I were doing the automatic cross-posting from RSS, I’d definitely tick that box. But as I’m making a conscious decision whenever I syndicate to Mastodon, I think I’m going to leave that checkbox unticked.

My cross-posting is not automated and I’m very much monitoring my Mastodon account …because I’m enjoying my Mastodon experience more than I’ve enjoyed anything online for quite some time. Highly recommended!

No code

When I wrote about democratising dev, I made brief mention of the growing “no code” movement:

Personally, I would love it if the process of making websites could be democratised more. I’ve often said that my nightmare scenario for the World Wide Web would be for its fate to lie in the hands of an elite priesthood of programmers with computer science degrees. So I’m all in favour of no-code tools …in theory.

But I didn’t describe what no-code is, as I understand it.

I’m taking the term at face value to mean a mechanism for creating a website—preferably on a domain you control—without having to write anything in HTML, CSS, JavaScript, or any back-end programming language.

By that definition, something like WordPress.com (as opposed to WordPress itself) is a no-code tool:

Create any kind of website. No code, no manuals, no limits.

I’d also put Squarespace in the same category:

Start with a flexible template, then customize to fit your style and professional needs with our website builder.

And its competitor, Wix:

Discover the platform that gives you the freedom to create, design, manage and develop your web presence exactly the way you want.

Webflow provides the same kind of service, but with a heavy emphasis on marketing websites:

Your website should be a marketing asset, not an engineering challenge.

Bubble is trying to cover a broader base:

Bubble lets you create interactive, multi-user apps for desktop and mobile web browsers, including all the features you need to build a site like Facebook or Airbnb.

Whereas Carrd opts for a minimalist one-page approach:

Simple, free, fully responsive one-page sites for pretty much anything.

All of those tools emphasise that you don’t need to know how to code in order to have a professional-looking website. But there’s a parallel universe of more niche no-code tools where the emphasis is on creativity and self-expression instead of slickness and professionalism.

neocities.org:

Create your own free website. Unlimited creativity, zero ads.

mmm.page:

Make a website in 5 minutes. Messy encouraged.

hotglue.me:

unique tool for web publishing & internet samizdat

I’m kind of fascinated by these two different approaches: professional vs. expressionist.

I’ve seen people grapple with this question when they decide to have their own website. Should it be a showcase of your achievements, almost like a portfolio? Or should it be a glorious mess of imagery and poetry to reflect your creativity? Could it be both? (Is that even doable? Or desirable?)

Robin Sloan recently published his ideas—and specs—for a new internet protocol called Spring ’83:

Spring ‘83 is a protocol for the transmission and display of something I am calling a “board”, which is an HTML fragment, limited to 2217 bytes, unable to execute JavaScript or load external resources, but otherwise unrestricted. Boards invite publishers to use all the richness of modern HTML and CSS. Plain text and blue links are also enthusiastically supported.

It’s not a no-code tool (you need to publish in HTML), although someone could easily provide a no-code tool to sit on top of the protocol. Conceptually though, it feels like it’s in a similar space to the chaotic good of neocities.org, mmm.page, and hotglue.me with maybe a bit of tilde.town thrown in.

It feels like something might be in the air. With Spring ’83, the Block protocol, and other experiments, people are creating some interesting small pieces that could potentially be loosely joined. No code required.

Democratising dev

I met up with a supersmart programmer friend of mine a little while back. He was describing some work he was doing with React. He was joining up React components. There wasn’t really any problem-solving or debugging—the individual components had already been thoroughly tested. He said it felt more like construction than programming.

My immediate thought was “that should be automated.”

Or at the very least, there should be some way for just about anyone to join those pieces together rather than it requiring a supersmart programmer’s time. After all, isn’t that the promise of design systems and components—freeing us up to tackle the meaty problems instead of spending time on the plumbing?

I thought about that conversation when I was listening to Laurie’s excellent talk in Berlin last month.

Chatting to Laurie before the talk, he was very nervous about the conclusion that he had reached and was going to share: that the time is right for web development to be automated. He figured it would be an unpopular message. Heck, even he didn’t like it.

But I reminded him that it’s as old as the web itself. I’ve seen videos from very early World Wide Web conferences where Tim Berners-Lee was railing against the idea that anyone would write HTML by hand. The whole point of his WorldWideWeb app was that anyone could create and edit web pages as easily as word processing documents. It’s almost an accident of history that HTML happened to be just easy enough—but also just powerful enough—for many people to learn and use.

Anyway, I thoroughly enjoyed Laurie’s talk. (Except for a weird bit where he dunks on people moaning about “the fundamentals”. I think it’s supposed to be punching up, but I’m not sure that’s how it came across. As Chris points out, fundamentals matter …at least when it comes to concepts like accessibility and performance. I think Laurie was trying to dunk on people moaning about fundamental technologies like languages and frameworks. Perhaps the message got muddled in the delivery.)

I guess Laurie was kind of talking about this whole “no code” thing that’s quite hot right now. Personally, I would love it if the process of making websites could be democratised more. I’ve often said that my nightmare scenario for the World Wide Web would be for its fate to lie in the hands of an elite priesthood of programmers with computer science degrees. So I’m all in favour of no-code tools …in theory.

The problem is that unless they work 100%, and always produce good accessible performant code, then they’re going to be another example of the law of leaky abstractions. If a no-code tool can get someone 90% of the way to what they want, that seems pretty good. But if that person then has to spend an inordinate amount of time on the remaining 10% then all the good work of the no-code tool is somewhat wasted.

Funnily enough, the person who coined that law, Joel Spolsky, spoke right after Laurie in Berlin. The two talks made for a good double bill.

(I would link to Joel’s talk but for some reason the conference is marking the YouTube videos as unlisted. If you manage to track down a URL for the video of Joel’s talk, let me know and I’ll update this post.)

In a way, Joel was making the same point as Laurie: why is it still so hard to do something on the web that feels like it should be easily repeatable?

He used the example of putting an event online. Right now, the most convenient way to do it is to use a third-party centralised silo like Facebook. It works, but now the business model of Facebook comes along for the ride. Your event is now something to be tracked and monetised by advertisers.

You could try doing it yourself, but this is where you’ll run into the frustrations shared by Joel and Laurie. It’s still too damn hard and complicated (even though we’ve had years and years of putting events online). Despite what web developers tell themselves, making stuff for the web shouldn’t be that complicated. As Trys put it:

We kid ourselves into thinking we’re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain’t much more to it than that.

And yet here we are. You can either have the convenience of putting something on a silo like Facebook, or you can have the freedom of doing it yourself, indie web style. But you can’t have both it seems.

This is a criticism often levelled at the indie web. The barrier to entry to having your own website is too high. It’s a valid criticism. To have your own website, you need to have some working knowledge of web hosting and at least some web technologies (like HTML).

Don’t get me wrong. I love having my own website. Like, I really love it. But I’m also well aware that it doesn’t scale. It’s unreasonable to expect someone to learn new skills just to make a web page about, say, an event they want to publicise.

That’s kind of the backstory to the project that Joel wanted to talk about: the block protocol. (Note: it has absolutely nothing to do with blockchain—it’s just an unfortunate naming collision.)

The idea behind the project is to create a kind of crowdsourced pattern library—user interfaces for creating common structures like events, photos, tables, and lists. These patterns already exist in today’s silos and content management systems, but everyone is reinventing the wheel independently. The goal of this project is to make these patterns interoperable, and therefore portable.

At first I thought that would be a classic /927 situation, but I’m pleased to see that the focus of the project is not on formats (we’ve been there and done that with microformats, RDF, schema.org, yada yada). The patterns might end up being web components or they might not. But the focus is on the interface. I think that’s a good approach.

That approach chimes nicely with one of the principles of the indie web:

UX and design is more important than protocols, formats, data models, schema etc. We focus on UX first, and then as we figure that out we build/develop/subset the absolutely simplest, easiest, and most minimal protocols and formats sufficient to support that UX, and nothing more. AKA UX before plumbing.

That said, I don’t think this project is a cure-all. Interoperable (portable) chunks of structured content would be great, but that’s just one part of the challenge of scaling the indie web. You also need to have somewhere to put those blocks.

Convenience isn’t the only thing you get from using a silo like Facebook, Twitter, Instagram, or Medium. You also get “free” hosting …until you don’t (see GeoCities, MySpace, and many, many more).

Wouldn’t it be great if everyone had a place on the web that they could truly call their own? Today you need to have an unnecessary degree of technical understanding to publish something at a URL you control.

I’d love to see that challenge getting tackled.

Suspicion

I’ve already had some thoughtful responses to yesterday’s post about trust. I wrapped up my thoughts with a request:

I would love it if someone could explain why they avoid native browser features but use third-party code.

Chris obliged:

I can’t speak for the industry, but I have a guess. Third-party code (like the referenced Bootstrap and React) have a history of smoothing over significant cross-browser issues and providing better-than-browser ergonomic APIs. jQuery was created to smooth over cross-browser JavaScript problems. That’s trust.

Very true! jQuery is the canonical example of a library smoothing over the bumpy landscape of browser compatibilities. But jQuery is also the canonical example of a library we no longer need because the browsers have caught up …and those browsers support standards directly influenced by jQuery. That’s a library success story!

Charles Harries takes on my question in his post Libraries over browser features:

I think this perspective of trust has been hammered into developers over the past maybe like 5 years of JavaScript development based almost exclusively on inequality of browser feature support. Things are looking good in 2022; but as recently as 2019, 4 of the 5 top web developer needs had to do with browser compatibility.

Browser compatibility is one of the underlying promises that libraries—especially the big ones that Jeremy references, like React and Bootstrap—make to developers.

So again, it’s browser incompatibilities that made libraries attractive.

Jim Nielsen responds with the same message in his post Trusting Browsers:

We distrust the browser because we’ve been trained to. Years of fighting browser deficiencies where libraries filled the gaps. Browser enemy; library friend.

For example: jQuery did wonders to normalize working across browsers. Write code once, run it in any browser — confidently.

Three for three. My question has been answered: people gravitated towards libraries because browsers had inconsistent implementations.

I’m deliberately using the past tense there. I think Jim is onto something when he says that we’ve been trained not to trust browsers to have parity when it comes to supporting standards. But that has changed.

Charles again:

This approach isn’t a sustainable practice, and I’m trying to do as little of it as I can. Jeremy is right to be suspicious of third-party code. Cross-browser compatibility has gotten a lot better, and campaigns like Interop 2022 are doing a lot to reduce the burden. It’s getting better, but the exasperated I-just-want-it-to-work mindset is tough to uninstall.

I agree. Inertia is a powerful force. No matter how good cross-browser compatibility gets, it’s going to take a long time for developers to shed their suspicion.

Jim is a glass-half-full kind of guy:

I’m optimistic that trust in browser-native features and APIs is being restored.

He also points to a very sensible mindset when it comes to third-party libraries and frameworks:

In this sense, third-party code and abstractions can be wonderful polyfills for the web platform. The idea being that the default posture should be: leverage as much of the web platform as possible, then where there are gaps to creating great user experiences, fill them in with exploratory library or framework features (features which, conceivably, could one day become native in browsers).

Yes! A kind of progressive enhancement approach to using third-party code makes a lot of sense. I’ve always maintained that you should treat libraries and frameworks like cattle, not pets. Don’t get too attached. If the library is solving a genuine need, it will be replaced by stable web standards in browsers (again, see jQuery).

I think that third-party libraries and frameworks work best as polyfills. But the whole point of polyfills is that you only use them when the browsers don’t supply features natively (and you also go back and remove the polyfill later when browsers do support the feature). But that’s not how people are using libraries and frameworks today. Developers are reaching for them by default instead of treating them as a last resort.
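To make that posture concrete, here’s a rough sketch: detect the native feature first, and only load the third-party code as a fallback. The function name and the polyfill path are placeholders for illustration, not references to any real file:

if ('IntersectionObserver' in window) {
  // the browser supports the feature natively—use it directly
  enhancePage();
} else {
  // otherwise load a polyfill first (placeholder path), then enhance
  var script = document.createElement('script');
  script.src = '/js/intersection-observer-polyfill.js';
  script.onload = enhancePage;
  document.head.appendChild(script);
}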

I like Jim’s proposed design principle:

Where available, default to browser-native features over third party code, abstractions, or idioms.

(P.S. It’s kind of lovely to see this kind of thoughtful blog-to-blog conversation happening. Right at a time when Twitter is about to go down the tubes, this is a demonstration of an actual public square with more nuanced discussion. Make your own website and join the conversation!)

Trust

I’ve noticed a strange mindset amongst front-end/full-stack developers. At least it seems strange to me. But maybe I’m the one with the strange mindset and everyone else knows something I don’t.

It’s to do with trust and suspicion.

I’ve made no secret of the fact that I’m suspicious of third-party code and dependencies in general. Every dependency you add to a project is one more potential single point of failure. You have to trust that the strangers who wrote that code knew what they were doing. I’m still somewhat flabbergasted that developers regularly add dependencies—via npm or yarn or whatever—that then pull in even more dependencies, all while assuming good faith and competence on the part of every person involved.

It’s a touching expression of faith in your fellow humans, but I’m not keen on the idea of faith-based development.

I’m much more trusting of native browser features—HTML elements, CSS features, and JavaScript APIs. They’re not always perfect, but a lot of thought goes into their development. By the time they land in browsers, a whole lot of smart people have kicked the tyres and considered many different angles. As a bonus, I don’t need to install them. Even better, end users don’t need to install them.

And yet, the mindset I’ve noticed is that many developers are suspicious of browser features but trusting of third-party libraries.

When I write and talk about using service workers, I often come across scepticism from developers about writing the service worker code. “Is there a library I can use?” they ask. “Well, yes” I reply, “but then you’ve got to understand the library, and the time it takes you to do that could be spent understanding the native code.” So even though a library might not offer any new functionality—just a different idiom—many developers are more likely to trust the third-party library than they are to trust the underlying code that the third-party library is abstracting!
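For what it’s worth, the underlying code often isn’t that daunting. Here’s a minimal sketch of a cache-first service worker—the file path is a placeholder rather than anything from a real project:

// in the page: register the service worker (placeholder path)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// in sw.js: answer requests from the cache, falling back to the network
addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cachedResponse) {
      return cachedResponse || fetch(event.request);
    })
  );
});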

Developers are more likely to trust, say, Bootstrap than they are to trust CSS grid or custom properties. Developers are more likely to trust React than they are to trust web components.

On the one hand, I get it. Bootstrap and React are very popular. That popularity speaks volumes. If lots of people use a technology, it must be a safe bet, right?

But if we’re talking about popularity, every single browser today ships with support for features like grid, custom properties, service workers and web components. No third-party framework can even come close to that install base.

And the fact that these technologies have shipped in stable browsers means they’re vetted. They’ve been through a rigorous testing phase. They’ve effectively got a seal of approval from each individual browser maker. To me, that seems like a much bigger signal of trustworthiness than the popularity of a third-party library or framework.

So I’m kind of confused by this prevalent mindset of trusting third-party code more than built-in browser features.

Is it because of the job market? When recruiters are looking for developers, their laundry list is usually third-party technologies: React, Vue, Bootstrap, etc. It’s rare to find a job ad that lists native browser technologies: flexbox, grid, service workers, web components.

I would love it if someone could explain why they avoid native browser features but use third-party code.

Until then, I shall remain perplexed.

Inertia

When I’ve spoken in the past about evaluating technology, I’ve mentioned two categories of tools for web development. I still don’t know quite what to call these categories. Internal and external? Developer-facing and user-facing?

The first category covers things like build tools, version control, transpilers, pre-processers, and linters. These are tools that live on your machine—or on the server—taking what you’ve written and transforming it into the raw materials of the web: HTML, CSS, and JavaScript.

The second category of tools are those that are made of the raw materials of the web: CSS frameworks and JavaScript libraries.

I think the criteria for evaluating these different kinds of tools should be very different.

For the first category, developer-facing tools, use whatever you want. Use whatever makes sense to you and your team. Use whatever’s effective for you.

But for the second category, user-facing tools, that attitude is harmful. If you make users download a CSS or JavaScript framework in order to benefit your workflow, then you’re making users pay a tax for your developer convenience. Instead, I firmly believe that user-facing tools should provide some direct benefit to end users.

When I’ve asked developers in the past why they’ve chosen to use a particular JavaScript framework, they’ve been able to give me plenty of good answers. But all of those answers involved the benefit to their developer workflow—efficiency, consistency, and so on. That would be absolutely fine if we were talking about the first category of tools, developer-facing tools. But those answers don’t hold up for the second category of tools, user-facing tools.

If a user-facing tool is only providing a developer benefit, is there any way to turn it into a developer-facing tool?

That’s very much the philosophy of Svelte. You can compare Svelte to other JavaScript frameworks like React and Vue but you’d be missing the most important aspect of Svelte: it is, by design, in that first category of tools—developer-facing tools:

Svelte takes a different approach from other frontend frameworks by doing as much as it can at the build step—when the code is initially compiled—rather than running client-side. In fact, if you want to get technical, Svelte isn’t really a JavaScript framework at all, as much as it is a compiler.

You install it on your machine, you write your code in Svelte, but what it spits out at the other end is HTML, CSS, and JavaScript. Unlike Vue or React, you don’t ship the library to end users.

In my opinion, this is an excellent design decision.

I know there are ways of getting React to behave more like a category one tool, but it is most definitely not the default behaviour. And default behaviour really, really matters. For React, the default behaviour is to assume all the code you write—and the tool you use to write it—will be sent over the wire to end users. For Svelte, the default behaviour is the exact opposite.

I’m sure you can find a way to get Svelte to send too much JavaScript to end users, but you’d be fighting against the grain of the tool. With React, you have to fight against the grain of the tool in order to not send too much JavaScript to end users.

But much as I love Svelte’s approach, I think it’s got its work cut out for it. It faces a formidable foe: inertia.

If you’re starting a greenfield project and you’re choosing a JavaScript framework, then Svelte is very appealing indeed. But how often do you get to start a greenfield project?

React has become so ubiquitous in the front-end development community that it’s often an unquestioned default choice for every project. It feels like enterprise software at this point. No one ever got fired for choosing React. Whether it’s appropriate or not becomes almost irrelevant. In much the same way that everyone is on Facebook because everyone is on Facebook, everyone uses React because everyone uses React.

That’s one of its biggest selling points to managers. If you’ve settled on React as your framework of choice, then hiring gets a lot easier: “If you want to work here, you need to know React.”

The same logic applies from the other side. If you’re starting out in web development, and you see that so many companies have settled on using React as their framework of choice, then it’s an absolute no-brainer: “if I want to work anywhere, I need to know React.”

This then creates a positive feedback loop. Everyone knows React because everyone is hiring React developers because everyone knows React because everyone is hiring React developers because…

At no point is there time to stop and consider if there’s a tool—like Svelte, for example—that would be less harmful for end users.

This is where I think Astro might have the edge over Svelte.

Astro has the same philosophy as Svelte. It’s a developer-facing tool by default. Have a listen to Drew’s interview with Matthew Phillips:

Astro does not add any JavaScript by default. You can add your own script tags obviously and you can do anything you can do in HTML, but by default, unlike a lot of the other component-based frameworks, we don’t actually add any JavaScript for you unless you specifically tell us to. And I think that’s one thing that we really got right early.

But crucially, unlike Svelte, Astro allows you to use the same syntax as the incumbent, React. So if you’ve learned React—because that’s what you needed to learn to get a job—you don’t have to learn a new syntax in order to use Astro.

I know you probably can’t take an existing React site and convert it to Astro with the flip of a switch, but at least there’s a clear upgrade path.

Astro reminds me of Sass. Specifically, it reminds me of the .scss syntax. You could take any CSS file, rename its file extension from .css to .scss and it was automatically a valid Sass file. You could start using Sass features incrementally. You didn’t have to rewrite all your style sheets.

Sass also has a .sass syntax. If you take a CSS file and rename it with a .sass file extension, it is not going to work. You need to rewrite all your CSS to use the .sass syntax. Some people used the .sass syntax, but the overwhelming majority of people used .scss.

I remember talking with Hampton about this and he confirmed the proportions. It was also the reason why one of his creations, Sass, was so popular, while another of his creations, Haml, was not, comparatively speaking—Sass is a superset of CSS but Haml is not a superset of HTML; it’s a completely different syntax.

I’m not saying that Svelte is like Haml and Astro is like Sass. But I do think that Astro has inertia on its side.

Upgrade paths

After I jotted down some quick thoughts last week on the disastrous way that Google Chrome rolled out a breaking change, others have posted more measured and incisive takes.

In fairness to Google, the Chrome team is receiving the brunt of the criticism because they were the first movers. Mozilla and Apple are on board with making the same breaking change, but Google is taking the lead on this.

As I said in my piece, my issue was less to do with whether confirm(), prompt(), and alert() should be deprecated but more to do with how it was done, and the woeful lack of communication.

Thinking about it some more, I realised that what bothered me was the lack of an upgrade path. Considering that dialog is nowhere near ready for use, it seems awfully cart-before-horse-putting to first remove a feature and then figure out a replacement.

I was chatting to Amber recently and realised that there was a very different example of a feature being deprecated in web browsers…

We were talking about the KeyboardEvent.keyCode property. Did you get the memo that it’s deprecated?

But fear not! You can use the KeyboardEvent.code property instead. It’s much nicer to use too. You don’t need to look up a table of numbers to figure out how to refer to a specific key on the keyboard—you use its actual value instead.
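Here’s a rough before-and-after for a keydown handler listening for the escape key (closeDialog is a hypothetical function, just for illustration):

document.addEventListener('keydown', function (event) {
  // the old way: a magic number from a lookup table
  if (event.keyCode === 27) {
    closeDialog();
  }
  // the newer way: a readable value
  if (event.code === 'Escape') {
    closeDialog();
  }
});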

So the way that change was communicated was:

Hey, you really shouldn’t use the keyCode property. Here’s a better alternative.

But with the more recent change, the communication was more like:

Hey, you really shouldn’t use confirm(), prompt(), or alert(). So go fuck yourself.

Authentication

Two-factor authentication is generally considered A Good Thing™️ when you’re logging in to some online service.

The word “factor” here basically means “kind” so you’re doing two kinds of authentication. Typical factors are:

  • Something you know (like a password),
  • Something you have (like a phone or a USB key),
  • Something you are (biometric Black Mirror shit).

Asking for a password and an email address isn’t two-factor authentication. They’re two pieces of identification, but they’re the same kind (something you know). Same goes for supplying your fingerprint and your face: two pieces of information, but of the same kind (something you are).

None of these kinds of authentication are foolproof. All of them can change. All of them can be spoofed. But when you combine factors, it gets a lot harder for an attacker to breach both kinds of authentication.

The most common kind of authentication on the web is password-based (something you know). When a second factor is added, it’s often connected to your phone (something you have).

Every security bod I’ve talked to recommends using an authenticator app for this if that option is available. Otherwise there’s SMS—short message service, or text message to most folks—but SMS has a weakness. Because it’s tied to a phone number, technically you’re only proving that you have access to a SIM (subscriber identity module), not a specific phone. In the US in particular, it’s all too easy for an attacker to use social engineering to get a number transferred to a different SIM card.

Still, authenticating with SMS is an option as a second factor of authentication. When you first sign up to a service, as well as providing the first-factor details (a password and a username or email address), you also verify your phone number. Then when you subsequently attempt to log in, you input your password and on the next screen you’re told to input a string that’s been sent by text message to your phone number (I say “string” but it’s usually a string of numbers).

There’s an inevitable friction for the user here. But then, there’s a fundamental tension between security and user experience.

In the world of security, vigilance is the watchword. Users need to be aware of their surroundings. Is this web page being served from the right domain? Is this email coming from the right address? Friction is an ally.

But in the world of user experience, the opposite is true. “Don’t make me think” is the rallying cry. Friction is an enemy.

With SMS authentication, the user has to manually copy the numbers from the text message (received in a messaging app) into a form on a website (in a different app—a web browser). But if the messaging app and the browser are on the same device, it’s possible to improve the user experience without sacrificing security.

If you’re building a form that accepts a passcode sent via SMS, you can use the autocomplete attribute with a value of “one-time-code”. For a six-digit passcode, your input element might look something like this:

<input type="text" maxlength="6" inputmode="numeric" autocomplete="one-time-code">

With one small addition to one HTML element, you’ve saved users some tedious drudgery.

There’s one more thing you can do to improve security, but it’s not something you add to the HTML. It’s something you add to the text message itself.

Let’s say your website is example.com and the text message you send reads:

Your one-time passcode is 123456.

Add this to the end of the text message:

@example.com #123456

So the full message reads:

Your one-time passcode is 123456.

@example.com #123456

The first line is for humans. The second line is for machines. Using the @ symbol, you’re telling the device to only pre-fill the passcode for URLs on the domain example.com. Using the # symbol, you’re telling the device the value of the passcode. Combine this with autocomplete="one-time-code" in your form and the user shouldn’t have to lift a finger.

I’m fascinated by these kinds of emergent conventions in text messages. Remember that the @ symbol and # symbol in Twitter messages weren’t ideas from Twitter—they were conventions that users started and the service then adopted.

It’s a bit different with the one-time code convention as there is a specification brewing from representatives of both Google and Apple.

Tess is leading from the Apple side and she’s got another iron in the fire to make security and user experience play nicely together using the convention of the /.well-known directory on web servers.

You can add a URL for /.well-known/change-password which redirects to the form a user would use to update their password. Browsers and password managers can then use this information if they need to prompt a user to update their password after a breach. I’ve added this to The Session.
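On the server, that can be as simple as a redirect. Here’s a sketch using Express purely as an illustration—the destination path is a placeholder, and this isn’t necessarily how The Session does it:

var express = require('express');
var app = express();

// send the well-known URL to wherever your password form actually lives
app.get('/.well-known/change-password', function (request, response) {
  response.redirect('/account/password');
});

app.listen(3000);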

Oh, and on that page where users can update their password, the autocomplete attribute is your friend again:

<input type="password" autocomplete="new-password">

If you want them to enter their current password first, use this:

<input type="password" autocomplete="current-password">

All of the things I’ve mentioned—the autocomplete attribute, origin-bound one-time codes in text messages, and a well-known URL for changing passwords—have good browser support. But even if they were only supported in one browser, they’d still be worth adding. These additions do absolutely no harm to browsers that don’t yet support them. That’s progressive enhancement.

Web Audio API weirdness on iOS

I told you about how I’m using the Web Audio API on The Session to generate synthesised audio of each tune setting. I also said:

Except for some weirdness on iOS that I had to fix.

Here’s that weirdness…

Let me start by saying that this isn’t anything to do with requiring a user interaction (the Web Audio API insists on some kind of user interaction to prevent developers from having auto-playing sound on websites). All of my code related to the Web Audio API is inside a click event handler. This is a different kind of weirdness.

First of all, I noticed that if you press play on the audio player when your iOS device is on mute, you don’t hear any audio. Seems logical, right? Except that if, on the same device, still set to mute, you press play on a video or audio element, the sound plays just fine. You can confirm this by going to Huffduffer and pressing play on any of the audio elements there, even when your iOS device is set to mute.

So it seems that iOS has different criteria for the Web Audio API than it does for audio or video. Except it isn’t quite that straightforward.

On some pages of The Session, as well as the audio player for tunes (using the Web Audio API) there are also embedded YouTube videos (using the video element). Press play on the audio player; no sound. Press play on the YouTube video; you get sound. Now go back to the audio player and suddenly you do get sound!

It’s almost like playing a video or audio element “kicks” the browser into realising it should be playing the sound from the Web Audio API too.

This was happening on iOS devices set to mute, but I was also getting reports of it happening on devices with the sound on. But it’s that annoyingly intermittent kind of bug that’s really hard to reproduce consistently. Sometimes the sound doesn’t play. Sometimes it does.

Following my theory that the browser needs a “kick” to get into the right frame of mind for the Web Audio API, I resorted to a messy little hack.

In the event handler for the audio player, I generate the “kick” by playing a second of silence using the JavaScript equivalent of the audio element:

var audio = new Audio('1-second-of-silence.mp3');
audio.play();
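
In context, the kick sits at the top of the click handler, before any of the Web Audio code runs. Something along these lines—the element and function names are placeholders:

playButton.addEventListener('click', function () {
  // the "kick": a second of silence played through an audio element
  var audio = new Audio('1-second-of-silence.mp3');
  audio.play();
  // then carry on with the Web Audio API as usual
  playTuneWithWebAudio();
});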

I’m not proud of that. It’s so hacky that I’ve even wrapped the code in some user-agent sniffing on the server, and I never do user-agent sniffing!

Still, if you ever find yourself getting weird but inconsistent behaviour on iOS using the Web Audio API, this nasty little hack could help.

Update: Time to remove this workaround. Mobile Safari has been updated.