Journal tags: cat


Directory enquiries

I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.

A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.

The most obvious solution is to start up a brand new resource for front-end developers. But there are two problems with that:

  1. It’s really, really, really hard work, and
  2. It feels a bit 927.

I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.

Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.

All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.

I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.

I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.

Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.

Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.

I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?

And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.

There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.

A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”

What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?

I’m just throwing this out there. I’d love it if someone were to run with it.

Filters

My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and the Mozilla Developer Network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at some pictures on an AirBnB listing recently and found herself examining some photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.

Every email that lands in my inbox is like a little mini Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloging blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.

But we can always walk away.

InstAI

If you use Instagram, there may be a message buried in your notifications. It begins:

We’re getting ready to expand our AI at Meta experiences to your region.

Fuck that. Here’s the important bit:

To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied going forwards.

Follow that link and fill in the form. For the field labelled “Please tell us how this processing impacts you” I wrote:

It’s fucking rude.

That did the trick. I got an email saying:

We’ve reviewed your request and will honor your objection.

Mind you, there’s still this:

We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our products and services.

Labels

I love libraries. I think they’re one of humanity’s greatest inventions.

My local library here in Brighton is terrific. It’s well-stocked, it’s got a welcoming atmosphere, and it’s in a great location.

But it has an information architecture problem.

Like most libraries, it’s using the Dewey Decimal system. It’s not a great system, but every classification system is going to have flaws—wherever you draw boundaries, there will be disagreement.

The Dewey Decimal class of 900 is for history and geography. Within that class, those 100 numbers (900 to 999) are further subdivided into groups of 10. For example, everything from 940 to 949 is for the history of Europe.

Dewey Decimal number 941 is for the history of the British Isles. The term “British Isles” is a geographical designation. It’s not a good geographical designation, but technically it’s not a political term. So it’s actually pretty smart to use a geographical rather than a political term for categorisation: geology moves a lot slower than politics.

But the Brighton Library is using the wrong label for their shelves. Everything under 941 is labelled “British History.”

The island of Ireland is part of the British Isles.

The Republic of Ireland is most definitely not part of Britain.

Seeing books about the history of Ireland, including post-colonial history, on a shelf labelled “British History” is …not good. Frankly, it’s offensive.

(I mentioned this situation to an English friend of mine, who said “Well, Ireland was once part of the British Empire”, to which I responded that all the books in the library about India should also be filed under “British History” by that logic.)

Just to be clear, I’m not saying there’s a problem with the library using the Dewey Decimal system. I’m saying they’re technically not using the system. They’ve deviated from the system’s labels by treating “History of the British Isles” and “British History” as synonymous.

I spoke to the library manager. They told me to write an email. I’ve written an email. We’ll see what happens.

You might think I’m being overly pedantic. That’s fair. But the fact this is happening in a library in England adds to the problem. It’s not just technically incorrect, it’s culturally clueless.

Mind you, I have noticed that quite a few English people have a somewhat fuzzy idea about the Republic of Ireland. Like, they understand it’s a different country, but they think it’s a different country in the way that Scotland is a different country, or Wales is a different country. They don’t seem to grasp that Ireland is a different country like France is a different country or Germany is a different country.

It would be charming if not for, y’know, those centuries of subjugation, exploitation, and forced starvation.

British history.

Update: They fixed it!

Federation syndication

I’m quite sure this is of no interest to anyone but me, but I finally managed to fix a longstanding weird issue with my website.

I realise that me telling you about a bug specific to my website is like me telling you about a dream I had last night—fascinating for me; incredibly dull for you.

For some reason, my site was being brought to its knees anytime I syndicated a note to Mastodon. I rolled up my sleeves to try to figure out what the problem could be. I was fairly certain the problem was with my code—I’m not much of a back-end programmer.

My tech stack is classic LAMP: Linux, Apache, MySQL and PHP. When I post a note, it gets saved to my database. Then I make a curl request to the Mastodon API to syndicate the post over there. That’s when my CPU starts climbing and my server gets all “bad gateway!” on me.

After spending far too long pulling apart my PHP and curl code, I had to come to the conclusion that I was doing nothing wrong there.

I started watching which processes were making the server fall over. It was MySQL. That seemed odd, because I’m not doing anything too crazy with my database reads.

Then I realised that the problem wasn’t any particular query. The problem was volume. But it only happened when I posted a note to Mastodon.

That’s when I had a lightbulb moment about how the fediverse works.

When I post a note to Mastodon, it includes a link back to the original note on my site. At this point Mastodon does its federation magic and starts spreading the post to all the instances subscribed to my account. And every single one of them follows the link back to the note on my site …all at the same time.

This isn’t a problem when I syndicate my blog posts, because I’ve got a caching mechanism in place for those. I didn’t think I’d need any caching for little ol’ notes. I was wrong.

A simple solution would be not to include the link back to the original note. But I like the reminder that what you see on Mastodon is just a copy. So now I’ve got the same caching mechanism for my notes as I do for my journal (and I did my links while I was at it). Everything is hunky-dory. I can syndicate to Mastodon with impunity.

See? I told you it would only be of interest to me. Although I guess there’s a lesson here. Something something caching.

Switching costs

Cory has published the transcript of his talk at the Transmediale festival in Berlin. It’s all about enshittification, and what we can collectively do to reverse it.

He succinctly describes the process of enshittification like this:

First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

More importantly, he describes the checks and balances that keep enshittification from happening, all of which have been dismantled over time: competition, regulation, self-help, and workers.

One of the factors that allows enshittification to proceed is a high switching cost:

Switching costs are everything you have to give up when you leave a product or service. In Facebook’s case, it was all the friends there that you followed and who followed you. In theory, you could have all just left for somewhere else; in practice, you were hamstrung by the collective action problem.

It’s hard to get lots of people to do the same thing at the same time.

We’ve seen this play out over at Twitter, where people I used to respect are still posting there as if it hasn’t become a cesspool of far-right racist misogyny reflecting its new owner’s values. But for a significant number of people—including myself and anyone with a modicum of decency—the switching cost wasn’t enough to stop us getting the hell out of there. Echoing Robin’s observation, Cory says:

…the difference between “I hate this service but I can’t bring myself to quit it,” and “Jesus Christ, why did I wait so long to quit? Get me the hell out of here!” is razor thin.

If users can’t leave because everyone else is staying, when everyone starts to leave, there’s no reason not to go, too.

That’s terminal enshittification, the phase when a platform becomes a pile of shit. This phase is usually accompanied by panic, which tech bros euphemistically call ‘pivoting.’

Anyway, I bring this up because I recently read something else about switching costs, but in a very different context. Jake Lazaroff was talking about JavaScript frameworks:

I want to talk about one specific weakness of JavaScript frameworks: interoperability, or the lack thereof. Almost without exception, each framework can only render components written for that framework specifically.

As a result, the JavaScript community tends to fragment itself along framework lines. Switching frameworks has a high cost, especially when moving to a less popular one; it means leaving most of the third-party ecosystem behind.

That switching cost stunts framework innovation by heavily favoring incumbents with large ecosystems.

Sounds a lot like what Cory was describing with incumbents like Google, Facebook, Twitter, and Amazon.

And let’s not kid ourselves, when we’re talking about incumbent client-side JavaScript frameworks, we might mention Vue or some other contender, but really we’re talking about React.

React has massive switching costs. For over a decade now, companies have been hiring developers based on one criterion: do they know React?

“An expert in CSS you say? No thanks.”

“Proficient in vanilla JavaScript? Don’t call us, we’ll call you.”

Heck, if I were advising someone who was looking for a job in front-end development (as opposed to actually being good at front-end development; two different things), I’d tell them to learn React.

Just as everyone ended up on Facebook because everyone was on Facebook, everyone ended up using React because everyone was using React.

You can probably see where I’m going with this: the inevitable enshittification of React.

Just to be clear, I’m not talking about React getting shittier in terms of what it does. It’s always been a shitty technology for end users:

React is legacy tech from 2013 when browsers didn’t have template strings or a BFCache.

No, I’m talking about the enshittification of the developer experience …the developer experience being the thing that React supposedly has going for it, though as Simon points out, the developer experience has always been pretty crap:

Whether on purpose or not, React took advantage of this situation by continuously delivering or promising to deliver changes to the library, with a brand new API being released every 12 to 18 months. Those new APIs and the breaking changes they introduce are the new shiny objects you can’t help but chase. You spend multiple cycles learning the new API and upgrading your application. It sure feels like you are doing something, but in reality, you are only treading water.

Well, it seems like the enshittification of the React ecosystem is well underway. Cassidy is kind of annoyed at React. Tom is increasingly miffed about the state of React releases, and Matteo asks React, where are you going?

Personally, I would love it if more people were complaining about the dreadful user experience inflicted by client-side React. Instead the complaints are universally about the developer experience.

I guess doing the right thing for the wrong reasons is fine. It’s just a little dispiriting.

I sometimes feel like I’m living that old joke, where I’m the one in the restaurant saying “the food here is terrible!” and most of my peers are saying “I know! And such small portions!”

adactio.com on Mastodon

I’ve been on Mastodon since 2017, but now my website is on there too. I’m @adactio@mastodon.social. My website is @adactio.com@adactio.com—search for adactio.com in your Mastodon client of choice.

What’s the difference? Well, with my mastodon.social account, I’m syndicating stuff—most of my notes, and all of my journal—and including a link back to the original source here on my site. With the adactio.com account, it is the original source.

I thought about migrating over to my adactio.com account from my mastodon.social account, but I actually like having a separate profile. I browse Mastodon a lot more than I post, and browsing is a lot easier to do with a regular account.

If you’d like your website to be available on Mastodon, Bridgy Fed is the magic tool that makes it available. You’ll need to be able to send webmentions, and you’ll need to configure some .well-known directives. If you’ve already got webmention-sending set up, it’s all quite straightforward (though you will also need to update your HTML to include a link back to fed.brid.gy in each entry you want syndicated).

For some reason the syndication didn’t seem to be working at first, but then when I followed @adactio.com@adactio.com from my @adactio@mastodon.social account, it started working. Maybe there needs to be at least one follower.

Also, my links don’t seem to be showing up on Mastodon even though it looks like everything is posting okay. Not sure what that is about.

Anyway, if you want to follow me on Mastodon, you now have a choice. There’s me, @adactio@mastodon.social, or there’s my website, @adactio.com@adactio.com.

The syndicate

Social networks come and social networks go.

Right now, there’s a whole bunch of social networks coming (Blewski, Freds, Mastication) and one big one going, thanks to Elongate.

Me? I watch all of this unfold like Doctor Manhattan on Mars. I have no great connection to any of these places. They’re all just syndication endpoints to me.

I used to have a checkbox in my posting interface that said “Twitter”. If I wanted to add a copy of one of my notes to Twitter, I’d enable that toggle.

I have, of course, now removed that checkbox. Twitter is dead to me (and it should be dead to you too).

I used to have another checkbox next to that one that said “Flickr”. If I was adding a photo to one of my notes, I could toggle that to send a copy to my Flickr account.

Alas, that no longer works. Flickr only allows you to post 1000 photos before requiring a pro account. Fair enough. I’ve actually posted 20 times that amount since 2005, but I let my pro membership lapse a while back.

So now I’ve removed the “Flickr” checkbox too.

Instead I’ve now got a checkbox labelled “Mastodon” that sends a copy of a note to my Mastodon account.

When I publish a blog post like the one you’re reading now here on my journal, there’s yet another checkbox that says “Medium”. Toggling that checkbox sends a copy of my post to my page on Ev’s blog.

At least it used to. At some point that stopped working too. I was going to start debugging my code, but when I went to the documentation for the Medium API, I saw this:

This repository has been archived by the owner on Mar 2, 2023. It is now read-only.

I guess I missed the memo. I guess Medium also missed the memo, because developers.medium.com is still live. It proudly proclaims:

Medium’s Publishing API makes it easy for you to plug into the Medium network, create your content on Medium from anywhere you write, and expand your audience and your influence.

Not a word of that is accurate.

That page also has a link to the Medium engineering blog. Surely the announcement of the API deprecation would be published there?

Crickets.

Moving on…

I have an account on Bluesky. I don’t know why.

I was idly wondering about sending copies of my notes there when I came across a straightforward solution: micro.blog.

That’s yet another place where I have an account. They make syndication very straightforward. You can go to your account and point to a feed from your own website.

That’s it. Syndication enabled.

It gets better. Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.

It’s like dominoes falling: I post something on my website which updates my RSS feed which gets picked up by micro.blog which passes it on to Bluesky.

I noticed that one of the other services that micro.blog can post to is Medium. Hmmm …would that still work given the abandonment of the API?

I gave permission to micro.blog to cross-post to Medium when my feed of blog posts is updated. It seems to have worked!

We’ll see how long it lasts. We’ll see how long any of them last. Today’s social media darlings are tomorrow’s Friendster and MySpace.

When the current crop of services wither and die, my own website will still remain in full bloom.

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Sunday

Today was a good day. The weather was beautiful.

Jessica and I did a little bit of work in the garden—nothing too sweaty. Then Jessica cut my hair. It looks good. And it feels good to have my neck freed up.

We went for a Sunday roast at the nearest pub, which does a most excellent carvery. It was tasty and plentiful so after strolling home, I wanted to do nothing more than sit around.

I sat outside in the back garden under the dappled shade offered by the overhanging trees. I had a good book. I had my mandolin to hand. I’d reach for it occasionally to play a tune or two.

Coco the cat—not our cat—sat nearby, stretching her paws out lazily in the warm muggy air.

It was a good day.

Push

Push notifications are finally arriving on iOS—hallelujah! Like I said last year, this is my number one wish for the iPhone, though not because I personally ever plan to use the feature:

When I’m evangelising the benefits of building on the open web instead of making separate iOS and Android apps, I inevitably get asked about notifications. As long as mobile Safari doesn’t support them—even though desktop Safari does—I’m somewhat stumped. There’s no polyfill for this feature other than building an entire native app, which is a bit extreme as polyfills go.

With push notifications in mobile Safari, the arguments for making proprietary apps get weaker. That’s good.

The announcement post is a bit weird though. It never uses the phrase “progressive web apps”, even though clearly the entire article is all about progressive web apps. I don’t know if this is down to Not-Invented-Here syndrome by the Apple/WebKit team, or because of genuine legal concerns around using the phrase.

Instead, there are repeated references to “Home Screen apps”. This distinction makes some sense though. In order to use web push on iOS, your website needs to be added to the home screen.
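For what it’s worth, the subscription code itself is the same standard web push dance as on any other platform. Here’s a rough sketch—the button, the service worker path, the endpoint, and the publicKey value (your VAPID application server key) are all placeholders, nothing iOS-specific:

const publicKey = '…'; // your VAPID application server key
const button = document.querySelector('#enable-notifications');
button.addEventListener('click', async () => {
  // Permission requests should come from a user gesture like this click
  await navigator.serviceWorker.register('/serviceworker.js');
  const registration = await navigator.serviceWorker.ready;
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') return;
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: publicKey
  });
  // Send the subscription details to your server so it can push messages later
  await fetch('/push-subscriptions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(subscription)
  });
});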

I think that would be fair enough, if it weren’t for the fact that adding a website to the home screen remains such a hidden feature that even power users would be forgiven for not knowing about it. I described the steps here:

  1. Tap the “share” icon. It’s not labelled “share.” It’s a square with an arrow coming out of the top of it.
  2. A drawer pops up. The option to “add to home screen” is nowhere to be seen. You have to pull the drawer up further to see the hidden options.
  3. Now you must find “add to home screen” in the list
  • Copy
  • Add to Reading List
  • Add Bookmark
  • Add to Favourites
  • Find on Page
  • Add to Home Screen
  • Markup
  • Print

As long as this remains the case, we can expect usage of web push on iOS to be vanishingly low. Hardly anyone is going to add a website to their home screen when their web browser makes it so hard.

If you’d like people to install your progressive web app, you’ll almost certainly need to prompt people to do so. Here’s the page I made on thesession.org with instructions on how to add to home screen. I link to it from the home page of the site.

I wish that pages like that weren’t necessary. It’s not the best user experience. But as long as mobile Safari continues to bury the home screen option, we don’t have much choice but to tackle this ourselves.

One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.

That fediverse feeling

Right now, Twitter feels like Dunkirk beach in May 1940. And look, here comes a plucky armada of web servers running Mastodon instances!

Others have written some guides to getting started on Mastodon.

There are also tools like Twitodon to help you migrate from Twitter to Mastodon.

Getting on board isn’t completely frictionless. Understanding how Mastodon works can be confusing. But then again, so was Twitter fifteen years ago.

Right now, many Mastodon instances are struggling with the influx of new sign-ups. But this is temporary. And actually, it’s also very reminiscent of the early unreliable days of Twitter.

I don’t want to go into the technical details of Mastodon and the fediverse—even though those details are fascinating and impressive. What I’m really struck by is the vibe.

In a nutshell, I’m loving it! It feels …nice.

I was fully expecting Mastodon to be full of meta-discussions about Mastodon, but in the past few weeks I’ve enjoyed people posting about stone circles, astronomy, and—obviously—cats and dogs.

The process of finding people to follow has been slow, but in a good way. I’ve enjoyed seeking people out. It’s been easier to find the techy folks, but I’ve also been finding scientists, journalists, and artists.

On the one hand, the niceness of the experience isn’t down to technical architecture; it’s all about the social norms. On the other hand, those social norms are very much directed by technical decisions. The folks working on the fediverse for the past few years have made very thoughtful design decisions to amplify niceness and discourage nastiness. It’s all very gratifying to experience!

Personally, I’m posting to Mastodon via my own website. As much as I’m really enjoying Mastodon, I still firmly believe that nothing beats having control of your own content on your domain.

But I also totally get that not everyone has the same set of priorities as me. And frankly, it’s unrealistic to expect everyone to have their own domain name.

It’s like there’s a spectrum of ownership. On one end, there’s publishing on your own website. On the other end, there’s publishing on silos like Twitter, Facebook, Medium, Instagram, and MySpace.

Publishing on Mastodon feels much closer to the website end of the spectrum than it does to the silo end of the spectrum. If something bad happens to the Mastodon instance you’re on, you can up and move to a different instance, taking your social graph with you.

In a way, it’s like delegating domain ownership to someone you trust. If you don’t have the time, energy, resources, or interest in having your own domain, but you trust someone who’s running a Mastodon instance, it’s the next best thing to publishing on your own website.

Simon described it well when he said Mastodon is just blogs:

A Mastodon server (often called an instance) is just a shared blog host. Kind of like putting your personal blog in a folder on a domain on shared hosting with some of your friends.

Want to go it alone? You can do that: run your own dedicated Mastodon instance on your own domain.

And rather than compare Mastodon to Twitter, Simon makes a comparison with RSS:

Do you still miss Google Reader, almost a decade after it was shut down? It’s back!

A Mastodon server is a feed reader, shared by everyone who uses that server.

Lots of other folks are feeling the same excitement in the air that I’m getting:

Bastian wrote:

Real conversations. Real people. Interesting content. A feeling of a warm welcoming group. No algorithm to mess around with our timelines. No troll army to destroy every tiny bit of peace. Yes, Mastodon is rough around the edges. Many parts are not intuitive. But this roughness somehow added to the positive experience for me.

This could really work!

Brent Simmons wrote:

The web is wide open again, for the first time in what feels like forever.

I concur! Though, like Paul, I love not being beholden to either Twitter or Mastodon:

I love not feeling bound to any particular social network. This website, my website, is the one true home for all the stuff I’ve felt compelled to write down or point a camera at over the years. When a social network disappears, goes out of fashion or becomes inhospitable, I can happily move on with little anguish.

But like I said, I don’t expect everyone to have the time, means, or inclination to do that. Mastodon definitely feels like it shares the same indie web spirit though.

Personally, I recommend experiencing Mastodon through the website rather than a native app. Mastodon instances are progressive web apps so you can add them to your phone’s home screen.

You can find me on Mastodon as @adactio@mastodon.social

I’m not too bothered about what instance I’m on. It really only makes a difference to my local timeline. And if I do end up finding an instance I prefer, then I know that migrating will be quite straightforward, by design. Perhaps I should be on an instance with a focus on front-end development or the indie web. I still haven’t found much of an Irish traditional music community on the fediverse. I’m wondering if maybe I should start a Mastodon instance for that.

While I’m a citizen of mastodon.social, I’m doing my bit by chipping in some money to support it: sponsorship levels on Patreon start at just $1 a month. And while I can’t offer much technical assistance, I opened my first Mastodon pull request with a suggested improvement for the documentation.

I’m really impressed with the quality of the software. It isn’t perfect but considering that it’s an open source project, it’s better than most VC-backed services with more and better-paid staff. As Giles said, comparing it to Twitter:

I’m using Mastodon now and it’s not the same, but it’s not shit either. It’s different. It takes a bit of adjustment. And I’m enjoying it.

Most of all, I love, love, love that Mastodon demonstrates that things can be different. For too long we’ve been told that behavioural advertising was an intrinsic part of being online, that social networks must inevitably be monolithic centralised beasts, that we have to relinquish control to corporations in order to be online. The fediverse is showing us a better way. And this isn’t just a proof of concept either. It’s here now. It’s here to stay, if you want it.

Syndicating to Mastodon

I’ve been contemplating a checkbox. The label for this checkbox reads:

This is a bot account

Let me back up…

In what seems like decades ago, but was in fact just a few weeks, Elon Musk bought Twitter and began burning it to the ground. His admirers insist he’s playing some form of four-dimensional chess, but to the rest of us, his actions are indistinguishable from a spoilt rich kid not understanding what a social network is.

It wasn’t giving me much cause for anguish personally. For the past eight years, I’ve only used Twitter as a syndication endpoint for my own notes. But I understand that’s a very privileged position to be in. Most people on Twitter don’t have the same luxury of independence. It’s genuinely maddening and saddening to see their years of sharing destroyed by one cruel idiot.

Lots of people started moving to Mastodon. I figured I should do the same for my syndicated notes.

At first, I signed up for an account on mastodon.cloud. No particular reason. But that’s where I saw this very insightful post from Anil Dash:

When it came time to reckon with social media’s failings, nobody ran to the “web3” platforms. Nobody asked “can I get paid per message”? Nobody asked about the blockchain. The community of people who’ve been quietly doing this work for years (decades!) ended up being the ones who welcomed everyone over, as always.

I was getting my account all set up and beginning to follow some other folks, when I realised that I actually already had an existing account over on mastodon.social. Doh! Turns out that I signed up back in 2017 to kick the tyres, but never did much else because there weren’t many other people around back then. Oh, how times have changed!

Anyway, I thought I had really screwed up by having two accounts but this turned out to be an opportunity to experience some of the thoughtfulness in Mastodon’s design. The process of migrating from one Mastodon account to another—on a completely different instance—was very smooth! It was clear that this wasn’t an afterthought. This is an essential part of the fediverse and the design of the migration flow reflects that.

This gives me enormous peace of mind. If I ever want to switch to a different instance and still keep my network intact, I know it won’t be a problem. Mastodon is like the opposite of the roach-motel mentality that permeates most VC-backed so-called social networks.

As I played around some more—reading, following, exploring—my feelings of fondness only grew stronger. I like this place a lot!

I definitely wanted to syndicate my notes to Mastodon. At first, I implemented a straightforward RSS-to-Mastodon syndication using IFTTT (If This Then That), thanks to Matthias’s excellent tutorial.

But that didn’t feel quite right. When I syndicate to Twitter, I make a conscious choice each time. There’s a “Twitter” toggle that I can enable or disable in my posting interface. Mastodon deserved the same level of thoughtfulness.

So I switched off the IFTTT recipe and started exploring the Mastodon API. It’s going to sound like a humblebrag when I tell you that I got cross-posting working in almost no time at all, but that’s not a testament to my coding prowess (I’m really not very good), but rather a testament to the Mastodon API, which was a joy to work with.

  1. On your Mastodon instance, go to /settings/applications.
  2. Click on New Application.
  3. Fill in the details about your website and select write:statuses (and probably write:media) from the Scopes list.
  4. Copy Your access token to use in API calls.
  5. Write some sloppy code (in my case, PHP that uses CURL)—there’s a rough sketch of the API call just after this list.
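For illustration, here’s roughly the shape of that API call. This is a minimal JavaScript sketch using fetch—my actual code is PHP and curl—with placeholder values for the instance and the token:

const instance = 'https://mastodon.social'; // your Mastodon instance
const accessToken = 'YOUR_ACCESS_TOKEN';    // copied from /settings/applications

async function postStatus(text, inReplyToId = null) {
  // POST /api/v1/statuses publishes a new status
  const response = await fetch(`${instance}/api/v1/statuses`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${accessToken}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      status: text,
      in_reply_to_id: inReplyToId // null for a regular post
    })
  });
  return response.json();
}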

I did hit a wall when it came to posting images. That took me a while to get working, and I couldn’t figure out why. Was it something at Mastodon’s end while it was struggling under the influx of new users? As it turns out, no. It was entirely down to me being an idiot. (You know that situation where you’re working on a problem for ages and you’ve become convinced it’s an extremely gnarly rocket-science problem, but then it turns out to be something stupid like a typo? Yeah. That.)

Then there’s the whole question of how to receive replies, likes, and reboosts from Mastodon here on my own site. Luckily, that was super easy, thanks to Brid.gy. One click and I was done. I love Brid.gy!

Take this note, for example. There’s a version on Twitter and a version on Mastodon. The original version on my own site gets responses from both places.

If I’m replying to a response on Twitter, I do not syndicate that to Mastodon.

Likewise, if I’m replying to a response on Mastodon, I do not syndicate that to Twitter.

Oh, one thing worth mentioning: if you’re sending a reply to something on Mastodon using the API, there’s an in_reply_to_id field for you to provide. But you should also include the full @username@instance of the person you’re replying to at the beginning of the message to ensure that it’s displayed as a reply rather than showing up as a regular post. Note the difference between this note on my site and its syndicated version on Mastodon.
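Using the sketch above, a reply would look something like this (with a hypothetical handle and a placeholder for the status ID):

const theirHandle = '@example@mastodon.social'; // whoever you're replying to
const theirStatusId = '…'; // the ID of the status you're replying to
postStatus(`${theirHandle} Good point!`, theirStatusId);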

Anyway, now I’m posting to Mastodon, but I’m doing it through the interface of my own website. Which brings me to that checkbox in Mastodon’s profile settings:

This is a bot account

The help text reads:

Signal to others that the account mainly performs automated actions and might not be monitored

If I were doing the automatic cross-posting from RSS, I’d definitely tick that box. But as I’m making a conscious decision whenever I syndicate to Mastodon, I think I’m going to leave that checkbox unticked.

My cross-posting is not automated and I’m very much monitoring my Mastodon account …because I’m enjoying my Mastodon experience more than I’ve enjoyed anything online for quite some time. Highly recommended!

Image previews with the FileReader API

I added a “notes” section to this website eight years ago. I set it up so that notes could be syndicated to Twitter. Ever since then, that’s the only way I post to Twitter.

A few months later I added photos to my notes. Again, this would get syndicated to Twitter.

Something’s bothered me for a long time though. I initially thought that if I posted a photo, then the accompanying text would serve as a description of the image. It could effectively act as the alt text for the image, I thought. But in practice it didn’t work out that way. The text was often a commentary on the image, which isn’t the same as a description of the contents.

I needed a way to store alt text for images. To make it more complicated, it was possible for one note to have multiple images. So even though a note was one line in my database, I somehow needed a separate string of text with the description of each image in a single note.

I eventually settled on using the file system instead of the database. The images themselves are stored in separate folders, so I figured I could have an accompanying alt.txt file in each folder.

Take this note from yesterday as an example. Different sizes of the image are stored in the folder /images/uploaded/19077. Here’s a small version of the image and here’s the original. In that same folder is the alt text.

This means I’m reading a file every time I need the alt text instead of reading from a database, which probably isn’t the most performant way of doing it, but it seems to be working okay.


In order to add the alt text to the image, I needed to update my posting interface. By default it’s a little textarea, followed by a file upload input, followed by a toggle (a checkbox under the hood) to choose whether or not to syndicate the note to Twitter.

The interface now updates automatically as soon as I use that input type="file" to choose any images for the note. Using the FileReader API, I show a preview of the selected images right after the file input.

Here’s the code if you ever need to do something similar. I’ve abstracted it somewhat in that gist—you should be able to drop it into any page that includes input type="file" accept="image/*" and it will automatically generate the previews.
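If you don’t fancy following the link, here’s a stripped-down sketch of the same idea—not the gist itself, just the gist of it:

// For every file input that accepts images…
document.querySelectorAll('input[type="file"][accept^="image"]').forEach((input) => {
  input.addEventListener('change', () => {
    Array.from(input.files).forEach((file) => {
      const reader = new FileReader();
      reader.addEventListener('load', () => {
        // reader.result is a data: URL that can be used as an image source
        const preview = document.createElement('img');
        preview.src = reader.result;
        preview.alt = '';
        input.insertAdjacentElement('afterend', preview);
      });
      reader.readAsDataURL(file); // read the chosen file as a data URL
    });
  });
});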

I was pleasantly surprised at how easy this was. The FileReader API worked just as expected without any gotchas. I think I always assumed that this would be quite complex to do because once upon a time, it was quite complex (or impossible) to do. But now it’s wonderfully straightforward. Story of the web.

My own version of the script does a little bit more; it also generates another little textarea right after each image preview, which is where I write the accompanying alt text.

I’ve also updated my server-side script that handles the syndication to Twitter. I’m using the /media/metadata/create method to provide the alt text. But for some reason it’s not working. I can’t figure out why. I’ll keep working on it.

In the meantime, if you’re looking at an image I’ve posted on Twitter and you’re judging me for its lack of alt text, my apologies. But each tweet of mine includes a link back to the original note on this site and you will most definitely find the alt text for the image there.

Web notifications on iOS

I’ve mentioned before that I don’t enable notifications on my phone. Text messages are the only exception. I don’t want to get notified if a new email arrives (I avoid email on my phone completely) and I certainly don’t want some social media app telling me somebody liked or faved something.

But the number one feature I’d like to see in Safari on iOS is web notifications.

It’s not for me personally, see. It’s because it’s the number one reason why people are choosing not to go all in on progressive web apps.

Safari on iOS is the last holdout. But that equates to enough market share that many companies feel they can’t treat notifications as a progressive enhancement. While I may not agree with that decision myself, I get it.

When I’m evangelising the benefits of building on the open web instead of making separate iOS and Android apps, I inevitably get asked about notifications. As long as mobile Safari doesn’t support them—even though desktop Safari does—I’m somewhat stumped. There’s no polyfill for this feature other than building an entire native app, which is a bit extreme as polyfills go.

And of course, unlike on your Mac, you don’t have the option of using a different browser on your iPhone. As long as mobile Safari doesn’t support web notifications, nothing on iOS can support web notifications.

I’ve got progressive web apps on the home screen of my phone that match their native equivalents feature-for-feature. Twitter. Instagram. They’re really good. In some ways they’re superior to the native apps; the Twitter website is much calmer, and the Instagram website has no advertising. But if I wanted to get notifications from any of those sites, I’d have to keep the native apps installed just for that one feature.

So in the spirit of complaining about web browsers in a productive way, I just want to throw this plea out there: Apple, please support web notifications in mobile Safari!

The good news is that web notifications on iOS might be on their way. Huzzah!

Alas, we’re reliant on Maximiliano’s detective work to even get a glimpse of a future feature like this. Apple has no public roadmap for Safari. There’s this status page on the WebKit blog but it’s incomplete—web notifications don’t appear at all. In any case, WebKit and Safari aren’t the same thing. The only way of knowing if a feature might be implemented in Safari is if it shows up in Safari Technology Preview, at which point it’s already pretty far along.

So while my number one feature request for mobile Safari is web notifications, a close second would be a public roadmap.

It only seems fair. If Apple devrels are asking us developers what features we’d like to see implemented—as they should!—then shouldn’t those same developers also be treated with enough respect to share a roadmap with them? There’s not much point in us asking for features if, unbeknownst to us, that feature is already being worked on.

But, like I said, my number one request remains: web notifications on iOS …please!

A bug with progressive web apps on iOS

Dave recently wrote some good advice about what to do—and what not to do—when it comes to complaining about web browsers. I wrote something on this topic a little while back:

If there’s something about a web browser that you’re not happy with (or, indeed, if there’s something you’re really happy with), take the time to write it down and publish it

To summarise Dave’s advice, avoid conspiracy theories and snark; stick to specifics instead.

It’s very good advice that I should heed (especially the bit about avoiding snark). In that spirit, I’d like to document what I think is a bug on iOS.

I don’t need to name the specific browser, because there is basically only one browser allowed on iOS. That’s not snark; that’s a statement of fact.

This bug involves navigating from a progressive web app that has been installed on your home screen to an external web view.

To illustrate the bug, I’ll use the example of The Session. If you want to recreate the bug, you’ll need to have an account on The Session. Let me know if you want to set up a temporary account—I can take care of deleting it afterwards.

Here are the steps:

  1. Navigate to thesession.org in Safari on an iOS device.
  2. Add the site to your home screen.
  3. Open the installed site from your home screen—it will launch in standalone mode.
  4. Log in with your username and password.
  5. Using the site menu, navigate to the links section of the site.
  6. Click on any external link.
  7. After the external link opens in a web view, tap on “Done” to close the web view.

Expected behaviour: you are returned to the page you were on with no change of state.

Actual behaviour: you are returned to the page you were on but you are logged out.

So the act of visiting an external link in a web view while in a progressive web app in standalone mode seems to cause a loss of cookie-based authentication.

This isn’t permanent. Clicking on any internal link restores the logged-in state.

It is surprising though. My mental model for opening an external link in a web view is that it sits “above” the progressive web app, which remains in stasis “behind” it. But the page must actually be reloading, either when the web view is opened or when the web view is closed. And that reload is behaving like a fetch event without credentials.

Anyway, that’s my bug report. It may already be listed somewhere on the WebKit Bugzilla but I lack the deductive skills to find it. I’m not even sure if that’s the right place for this kind of bug. It might be specific to the operating system rather than the rendering engine.

This isn’t a high priority bug, but it is one of those cumulatively annoying software paper cuts.

Hope this helps!

Accessibility testing

I was doing some accessibility work with a client a little while back. It was mostly giving their site the once-over, highlighting any issues that we could then discuss. It was an audit of sorts.

While I was doing this I started to realise that not all accessibility issues are created equal. I don’t just mean in their severity. I mean that some issues can—and should—be caught early on, while other issues can only be found later.

Take colour contrast. This is something that should be checked before a line of code is written. When designs are being sketched out and then refined in a graphical editor like Figma, that’s the time to check the ratio between background and foreground colours to make sure there’s enough contrast between them. You can catch this kind of thing later on, but by then it’s likely to come with a higher cost—you might have to literally go back to the drawing board. It’s better to find the issue when you’re at the drawing board the first time.

Then there’s the HTML. Most accessibility issues here can be caught before the site goes live. Usually they’re issues of omission: form fields that don’t have an explicitly associated label element (using the for and id attributes); images that don’t have alt text; pages that don’t have sensible heading levels or landmark regions like main and nav. None of these are particularly onerous to fix and they come with the biggest bang for your buck. If you’ve got sensible forms, sensible headings, alt text on images, and a solid document structure, you’ve already covered the vast majority of accessibility issues with very little overhead. Some of these checks can also be automated: alt text for images; labels for inputs.
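As a rough illustration of the kind of check that can be automated—a throwaway console sketch, not a substitute for proper testing tools—something like this will flag the two most common omissions:

// Flag images with no alt attribute at all (an empty alt is fine for decorative images)
document.querySelectorAll('img:not([alt])').forEach((img) => {
  console.warn('Image missing alt attribute:', img);
});

// Flag form fields with no associated label or ARIA labelling
const fields = document.querySelectorAll(
  'input:not([type="hidden"]):not([type="submit"]):not([type="button"]), select, textarea'
);
fields.forEach((field) => {
  const labelled = (field.labels && field.labels.length > 0) ||
    field.hasAttribute('aria-label') ||
    field.hasAttribute('aria-labelledby');
  if (!labelled) {
    console.warn('Form field without an associated label:', field);
  }
});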

Then there’s interactive stuff. If you only use native HTML elements you’re probably in the clear, but chances are you’ve got some bespoke interactivity on your site: a carousel; a mega dropdown for navigation; a tabbed interface. HTML doesn’t give you any of those out of the box so you’d need to make your own using a combination of HTML, CSS, JavaScript and ARIA. There’s plenty of testing you can do before launching—I always ask myself “What would Heydon do?”—but these components really benefit from being tested by real screen reader users.

So if you commission an accessibility audit, you should hope to get feedback that’s mostly in that third category—interactive widgets.

If you get feedback on document structure and other semantic issues with the HTML, you should fix those issues, sure, but you should also see what you can do to stop those issues going live again in the future. Perhaps you can add some steps in the build process. Or maybe it’s more about making sure the devs are aware of these low-hanging fruit. Or perhaps there’s a framework or content management system that’s stopping you from improving your HTML. Then you need to execute a plan for ditching that software.

If you get feedback about colour contrast issues, just fixing the immediate problem isn’t going to address the underlying issue. There’s a process problem, or perhaps a communication issue. In that case, don’t look for a technical solution. A design system, for example, will not magically fix a workflow issue or route around the problem of designers and developers not talking to each other.

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

Upgrade paths

After I jotted down some quick thoughts last week on the disastrous way that Google Chrome rolled out a breaking change, others have posted more measured and incisive takes.

In fairness to Google, the Chrome team is receiving the brunt of the criticism because they were the first movers. Mozilla and Apple are on board with making the same breaking change, but Google is taking the lead on this.

As I said in my piece, my issue was less to do with whether confirm(), prompt(), and alert() should be deprecated but more to do with how it was done, and the woeful lack of communication.

Thinking about it some more, I realised that what bothered me was the lack of an upgrade path. Considering that dialog is nowhere near ready for use, it seems awfully cart-before-horse-putting to first remove a feature and then figure out a replacement.

I was chatting to Amber recently and realised that there was a very different example of a feature being deprecated in web browsers…

We were talking about the KeyboardEvent.keyCode property. Did you get the memo that it’s deprecated?

But fear not! You can use the KeyboardEvent.code property instead. It’s much nicer to use too. You don’t need to look up a table of numbers to figure out how to refer to a specific key on the keyboard—you use its actual value instead.
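
A quick sketch of the difference, using the Enter key as an example:

```javascript
document.addEventListener('keydown', (event) => {
  // The old way: a magic number you’d have to look up in a table
  if (event.keyCode === 13) {
    // do the thing
  }
  // The newer way: the key referred to by name
  if (event.code === 'Enter') {
    // do the thing
  }
});
```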

So the way that change was communicated was:

Hey, you really shouldn’t use the keyCode property. Here’s a better alternative.

But with the more recent change, the communication was more like:

Hey, you really shouldn’t use confirm(), prompt(), or alert(). So go fuck yourself.

Foundations

There was quite a kerfuffle recently about a feature being removed from Google Chrome. To be honest, the details don’t really matter for the point I want to make, but for the record, this was about removing alert and confirm dialogs from cross-origin iframes (and eventually everywhere else too).

It’s always tricky to remove a long-established feature from web browsers, but in this case there were significant security and performance reasons. The problem was how the change was communicated. It kind of wasn’t. So the first that people found out about it was when things suddenly stopped working (like CodePen embeds).

The Chrome team responded quickly and the change has now been pushed back to next year. Hopefully there will be significant communication before that to let site owners know about the upcoming breakage.

So all’s well that ends well and we’ve all learned a valuable lesson about the importance of communication.

Or have we?

While this was going on, Emily Stark tweeted a more general point about breakage on the web:

Breaking changes happen often on the web, and as a developer it’s good practice to test against early release channels of major browsers to learn about any compatibility issues upfront.

Yikes! To me, this appears wrong on almost every level.

First of all, breaking changes don’t happen often on the web. They are—and should be—rare. If that were to change, the web would suffer massively in terms of predictability.

Secondly, the onus is not on web developers to keep track of older features in danger of being deprecated. That’s on the browser makers. I sincerely hope we’re not expected to consult a site called canistilluse.com.

I wasn’t the only one surprised by this message.

Simon says:

No, no, no, no! One of the best things about developing for the web is that, as a rule, browsers don’t break old code. Expecting every website and application to have an active team of developers maintaining it at all times is not how the web should work!

Edward Faulkner:

Most organizations and individuals do not have the resources to properly test and debug their website against Chrome canary every six weeks. Anybody who published a spec-compliant website should be able to trust that it will keep working.

Evan You:

This statement seriously undermines my trust in Google as steward for the web platform. When did we go from “never break the web” to “yes we will break the web often and you should be prepared for it”?!

It’s worth pointing out that the original tweet was not an official Google announcement. As Emily says right there on her Twitter account:

Opinions are my own.

Still, I was shaken to see such a cavalier attitude towards breaking changes on the World Wide Web. I know that removing dangerous old features is inevitable, but it should also be exceptional. It should not be taken lightly, and it should certainly not be expected to be an everyday part of web development.

It’s almost miraculous that I can visit the first web page ever published in a modern web browser and it still works. Let’s not become desensitised to how magical that is. I know it’s hard work to push the web forward, constantly adding new features, while also maintaining backward compatibility, but it sure is worth it! We have collectively banked three decades’ worth of trust in the web as a stable place to build a home. Let’s not blow it.

If you published a website ten or twenty years ago, and you didn’t use any proprietary technology but only stuck to web standards, you should rightly expect that site to still work today …and still work ten and twenty years from now.

There was something else that bothered me about that tweet and it’s not something that I saw mentioned in the responses. There was an unspoken assumption that the web is built by professional web developers. That gave me a cold chill.

The web has made great strides in providing more and more powerful features that can be wielded in learnable, declarative, forgiving languages like HTML and CSS. With a bit of learning, anyone can make web pages complete with form validation, lazily-loaded responsive images, and beautiful grids that kick in on larger screens. The barrier to entry for all of those features has lowered over time—they used to require JavaScript or complex hacks. And with free(!) services like Netlify, you could literally drag a folder of web pages from your computer into a browser window and boom!, you’ve published to the entire world.
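
To be concrete, each of those features is now a handful of declarative lines. Here’s a rough sketch (the file names and class name are made up):

```html
<!-- Built-in form validation: no JavaScript required -->
<input type="email" required>

<!-- A lazily-loaded responsive image -->
<img src="photo-small.jpg"
     srcset="photo-small.jpg 600w, photo-large.jpg 1200w"
     sizes="(min-width: 40em) 50vw, 100vw"
     alt="A description of the photo"
     loading="lazy">

<!-- A grid that kicks in on larger screens -->
<style>
  @media (min-width: 40em) {
    .gallery {
      display: grid;
      grid-template-columns: repeat(3, 1fr);
      gap: 1em;
    }
  }
</style>
```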

But the common narrative in the web development community—and amongst browser makers too apparently—is that web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today.

Absolute bollocks.

You can choose to make it really complicated. Convince yourself that “the modern web” is inherently complex and convoluted. But then look at what makes it complex and convoluted: toolchains, build tools, pipelines, frameworks, libraries, and abstractions. Please try to remember that none of those things are required to make a website.

This is for everyone. Not just for everyone to consume, but for everyone to make.