Journal tags: cli

Progress

The opening of my talk Of Time And The Web deals with our collective negativity bias. The general consensus is that the world has become worse. Crime. Inequality. Poverty. Pollution. Most people think these things are heading in the wrong direction.

But they’re not. Every year the world gets better and better. But it’s happening gradually. Like I said:

If something changes gradually, we don’t notice it. We literally exhibit something called change blindness.

But we are hard-wired to notice sudden changes. We pay attention to moments of change.

“Where were you when JFK was assassinated?”

“Where were you on September 11th?”

Nobody is ever going to ask “where were you when smallpox was eradicated?”

I know it might seem obscene to suggest that the world is getting better given the horrific situation in Gaza and the ongoing quagmire in Ukraine. But the very fact that the world is united in outrage is testament to how far we’ve come.

I try to balance my news intake with more positive stories of progress. Reasons to Be Cheerful is one good source:

We tell stories that reveal that there are, in fact, a surprising number of reasons to feel cheerful. Many of these reasons come in the form of smart, proven, replicable solutions to the world’s most pressing problems. Through sharp reporting, our stories balance a sense of healthy optimism with journalistic rigor, and find cause for hope. We are part magazine, part therapy session, part blueprint for a better world.

Most news outlets don’t operate that way. If it bleeds, it leads.

Even if you’re not actively tracking positive news on a daily or weekly basis, the end of the year feels like a suitable time to step back and take note of our collective progress.

Future Crunch has 66 Good News Stories You Didn’t Hear About in 2023:

The American journalist Krista Tippett says that we’re all fluent enough by now in the language of catastrophe and dysfunction, and what’s needed are more of what she calls ‘generative narratives.’ This year, we found over 2,000 of those kinds of stories, and shared them with tens of thousands of readers in a weekly email. Not dog-on-a-surfboard, baby-survives-a-tornado stories, but genuine, world changing stuff about how millions of lives are improving, about human rights victories, diseases being eliminated, falling emissions, how vast swathes of our planet are being protected and how entire species have been saved.

The Progress Network reports that something good happened every week of 2023:

Despite the wars, emergencies, and crises of 2023, the year was full of substantive good news.

Positive.news has its own round-up. What went right in 2023: the top 25 good news stories of the year:

The ‘golden age of medicine’ arrived, animals came back from the brink, the renewables juggernaut gathered pace, climate reparations became reality and scientists showed how to slow ageing, plus more good news.

On the topic of climate change, the BBC has nine breakthroughs for climate and nature in 2023 you may have missed:

Record-setting spending on clean energy in the US. A clean energy milestone in the world’s power sector. A surge in lawsuits against polluters. A treaty for the oceans 40 years in the making.

This year has seen some remarkable steps forward in tackling the nature and climate crises.

That’s the kind of reporting we need more of. As Kate Marvel wrote in the New York Times, “I’m a Climate Scientist. I’m Not Screaming Into the Void Anymore.”:

In the last decade, the cost of wind energy has declined by 70 percent and solar has declined 90 percent. Renewables now make up 80 percent of new electricity generation capacity. Our country’s greenhouse gas emissions are falling, even as our G.D.P. and population grow.

There’s a pernicious myth that a crisis mindset is necessary to drive change. I think that might be true for short-term emergencies, but it’s counter-productive for long-term problems.

Speaking for myself, I am far more likely to take action if I can see that progress has already been made, and that my actions won’t be pointless. Constant doomerism isn’t just lazy, it’s demotivational. See my excoriating words when reviewing Paolo Bacigalupi’s The Water Knife:

Instead of asking what the future might actually be like, it instead asks “what’s the absolute worst that could happen?” Frankly, it’s a cop-out.

As we head into 2024, it’s worth taking stock of the big-picture improvements we’ve collectively made so that we can continue the work.

If the news headlines continue to get you down, take some time to browse around Our World In Data.

And if you find yourself instinctively rejecting all these reports of progress, ask yourself why that might be. As I said in my talk:

We have this phrase: “sounds too good to be true.”

But we don’t have this phrase: “sounds too bad to be true.”

How green is my server?

The Session does very well in terms of performance. You can see the data from the Chrome UX Report (CrUX).

What’s good for performance is good for the environment. Sure enough, The Session gets a very high score from the website carbon calculator:

Hurrah! This web page achieves a carbon rating of A+

This is cleaner than 99% of all web pages globally

But under the details about hosting it says:

Oh no, it looks like this web page uses bog standard energy

The Session is hosted on DigitalOcean, who tend to be quite tight-lipped about their energy suppliers. Fortunately others have done some sleuthing to figure out which facilities are running on green energy.

One of the locations to get the green thumbs up is the Amsterdam facility housed by Equinix. That’s where The Session is hosted.

I’m glad that I was able to find out that the site is running on 100% renewable energy, but I wish I didn’t need to go searching to find this out. DigitalOcean need to be a lot more transparent about the energy sources for their hosting facilities.

In between

I was chatting with my new colleague Alex yesterday about a link she had shared in Slack. It was the Nielsen Norman Group’s annual State of Mobile User Experience report.

There’s nothing too surprising in there, other than the mention of Apple’s app clips and Google’s instant apps.

Remember those?

Me neither.

Perhaps I lead a sheltered existence, but as an iPhone user, I don’t think I’ve come across a single app clip in the wild.

I remember when they were announced. I was quite worried about them.

See, the one thing that the web can (theoretically) offer that native can’t is instant access to a resource. Go to this URL—that’s it. Whereas for a native app, the flow is: go to this app store, find the app, download the app.

(I say that the benefit is theoretical because the website found at the URL should download quickly—the reality is that the bloat of “modern” web development imperils that advantage.)

App clips—and instant apps—looked like a way to route around the convoluted install process of native apps. That’s why I was nervous when they were announced. They sounded like a threat to the web.

In reality, the potential was never fulfilled (if my own experience is anything to go by). I wonder why people didn’t jump on app clips and instant apps?

Perhaps it’s because what they promise isn’t desirable from a business perspective: “here’s a way for users to accomplish their tasks without downloading your app.” Even though app clips can in theory be a stepping stone to installing the full app, from a user’s perspective, their appeal is the exact opposite.

Or maybe they’re just too confusing to understand. I think there’s another technology that suffers from the same problem: progressive web apps.

Hear me out. Progressive web apps are—if done well—absolutely amazing. You get all of the benefits of native apps in terms of UX—they even work offline!—but you retain the web’s frictionless access model: go to a URL; that’s it.

So what are they? Are they websites? Yes, sorta. Are they apps? Yes, sorta.

That’s confusing, right? I can see how app clips and instant apps sound equally confusing: “you can use them straight away, like going to a web page, but they’re not web pages; they’re little bits of apps.”

I’m mostly glad that app clips never took off. But I’m sad that progressive web apps haven’t taken off more. I suspect that their fates are intertwined. Neither suffers from technical limitations. The problem they both face is inertia:

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

True of progressive web apps. Equally true of app clips.

But when I was chatting to Alex, she made me look at app clips in a different way. She described a situation where somebody might need to interact with some kind of NFC beacon from their phone. Web NFC isn’t supported in many browsers yet, so you can’t rely on that. But you don’t want to make people download a native app just to have a quick interaction. In theory, an app clip—or instant app—could do the job.

In that situation, app clips aren’t a danger to the web—they’re polyfills for hardware APIs that the web doesn’t yet support!

I love having my perspective shifted like that.
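
To make that idea concrete, here’s a rough sketch of the kind of feature detection involved. NDEFReader is the entry point for Web NFC (currently limited to Chromium-based browsers on Android), and the fallback element is my own invention:

if ('NDEFReader' in window) {
  // The browser supports Web NFC: scan for a tag directly from the page.
  const reader = new NDEFReader();
  reader.scan().then(() => {
    reader.addEventListener('reading', ({ message }) => {
      for (const record of message.records) {
        console.log('Read an NFC record of type', record.recordType);
      }
    });
  });
} else {
  // No Web NFC support: this is the gap that an app clip or instant app
  // (or something simpler, like a QR code) could fill.
  document.querySelector('.nfc-fallback').hidden = false; // hypothetical fallback element
}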

The specific situations that Alex and I were discussing were in the context of museums. Museums offer such interesting opportunities for the physical and the digital to intersect.

Remember the pen from Cooper Hewitt? Aaron spoke about it at dConstruct 2014—a terrific presentation that’s well worth revisiting and absorbing.

The other dConstruct talk that’s very relevant to this liminal space between the web and native apps is the 2012 talk from Scott Jenson. I always thought the physical web initiative had a lot of promise, but it may have been ahead of its time.

I loved the thinking behind the physical web beacons. They were deliberately dumb, much like the internet itself. All they did was broadcast a URL. That’s it. All the smarts were to be found at the URL itself. That meant a service could get smarter over time. It’s a lot easier to update a website than swap out a piece of hardware.

But any kind of technology that uses Bluetooth, NFC, or other wireless technology has to get over the discovery problem. They’re invisible technologies, so by default, people don’t know they’re even there. But if you make them too discoverable—intrusively announcing themselves like one of the commercials in Minority Report—then they’re indistinguishable from spam. There’s a sweet spot of discoverability right in the middle that’s hard to get right.

Over the past couple of years—accelerated by the physical distancing necessitated by The Situation—QR codes stepped up to the plate.

They still suffer from some discoverability issues. They’re not human-readable, so you can’t be entirely sure that the URL you’re going to go to isn’t going to be a Rick Astley video. But they are visible, which gives them an advantage over hidden wireless technologies.

They’re cheaper too. Printing a QR code sticker costs less than getting a plastic beacon shipped from China.

QR codes turned out to be just good enough to bridge the gap between the physical and digital for those one-off interactions like dining outdoors during a pandemic:

I can see why they chose the web over a native app. Online ordering is the only way to place your order at this place. Telling people “You have to go to this website” …that seems reasonable. But telling people “You have to download this app” …that’s too much friction.

Ironically, the nail in the coffin for app clips and instant apps might’ve been hammered in by Apple and Google when they built QR-code recognition into their camera software.

JavaScript

A recurring theme in my writing and talks is “lay off the JavaScript, people!” But I have to make a conscious effort to specify that I mean client-side JavaScript.

I thought it would be obvious from the context that I was talking about the copious amounts of JavaScript being shipped to end users to download, parse, and execute. But nothing’s ever really obvious. If I don’t explicitly say JavaScript in the browser, then someone inevitably thinks I’m having a go at JavaScript, the language.

I have absolutely nothing against JavaScript the language. Just like I have nothing against Python or Ruby or any other language that you might write with on your machine or your web server. But as soon as you deliver bytes over the wire, I start having opinions. It just so happens that JavaScript is the universal language for client-side coding so that’s why I call for restraint with JavaScript specifically.

There was a time when JavaScript only existed in web browsers. That changed with Node. Now it’s possible to write code for your web server and code for web browsers using the same language. Very handy!

But just because it’s the same language doesn’t mean you should treat it the same in both circumstances. As Remy puts it:

There are two JavaScripts.

One for the server - where you can go wild.

One for the client - that should be thoughtful and careful.

I was reading something recently that referred to Eleventy as a JavaScript library. It really brought me up short. I mean, on the one hand, yes, it’s a library of code and it’s written in JavaScript. It is absolutely technically correct to call it a JavaScript library.

But in my mind, a JavaScript library is something you ship to web browsers—jQuery, React, Vue, and so on. Whereas Eleventy executes its code in order to generate HTML and that’s what gets sent to end users. Conceptually, it’s like the opposite of a JavaScript library. Eleventy does its work before any user requests a URL—JavaScript libraries do their work after a user requests a URL.
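
To make the distinction concrete, here’s a deliberately simplified sketch of build-time code. It’s not Eleventy’s actual API, just a stand-in script, but it shows work happening before anyone requests a URL:

// build.js, which runs on my machine (or server) at build time.
// Only the resulting HTML ever gets sent to a browser.
const fs = require('fs');

const tunes = ['The Banshee', 'The Maid Behind The Bar']; // stand-in data
const list = '<ul>' + tunes.map(tune => `<li>${tune}</li>`).join('') + '</ul>';

fs.writeFileSync('index.html', `<!DOCTYPE html>\n<title>Tunes</title>\n${list}`);

// A JavaScript library, by contrast, would do this same work in the browser,
// after the user has downloaded, parsed, and executed the code.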

To me it seems obvious that there should be an entirely different mindset for writing code intended for a web browser. But nothing’s ever really obvious.

I remember when Node was getting really popular and npm came along as a way to manage all the bundles of code that people were assembling in their Node programmes. Makes total sense. But then I thought I heard about people using npm to do the same thing for client-side code. “That can’t be right!” I thought. I must’ve misunderstood. So I talked to someone from npm and explained how I must be misunderstanding something.

But it turned out that people really were treating client-side JavaScript no differently from server-side JavaScript. People really were pulling in megabytes of other people’s code to ship to end users so that they could, I dunno, left pad numbers or something.

Listen, I don’t care what you get up to in the privacy of your own codebase. But don’t poison the well of the web with profligate client-side JavaScript.

Solarpunk

My talk on sci-fi and me for Beyond Tellerrand’s Stay Curious event was deliberately designed to be broad and expansive. This was in contrast to Steph’s talk which was deliberately narrow and focused on one topic. Specifically, it was all about solarpunk.

I first heard of solarpunk from Justin Pickard back in 2014 at an event I was hosting. He described it as:

individuals and communities harnessing the power of the photovoltaic solar panel to achieve energy-independence.

The sci-fi subgenre of solarpunk, then, is about these communities. The subgenre sets out to be deliberately positive, even utopian, in contrast to most sci-fi.

Most genres ending with the -punk suffix are about aesthetics. You know the way that cyberpunk is laptops, leather and sunglasses, and steampunk is zeppelins and top hats with goggles. Solarpunk is supposedly free of any such “look.” That said, all the examples I’ve seen seem to converge on the motto of “put a tree on it.” If a depiction of the future looks lush, verdant, fecund and green, chances are it’s solarpunk.

At least, it might be solarpunk. It would have to pass the criteria laid down by the gatekeepers. Solarpunk is manifesto-driven sci-fi. I’m not sure how I feel about that. It’s one thing to apply a category to a piece of writing after it’s been written, but it’s another to start with an agenda-driven category and proceed from there. And as with any kind of classification system, the edges are bound to be fuzzy, leading to endless debates about what’s in and what’s out (see also: UX, UI, service design, content design, product design, front-end development, and most ironically of all, information architecture).

When I met up with Steph to discuss our talk topics and she described the various schools of thought that reside under the umbrella of solarpunk, it reminded me of my college days. You wouldn’t have just one Marxist student group, there’d be multiple Marxist student groups each with their own pillars of identity (Leninist, Trotskyist, anarcho-syndicalist, and so on). From the outside they all looked the same, but woe betide you if you mixed them up. It was exactly the kind of situation that was lampooned in Monty Python’s Life of Brian with its People’s Front of Judea and Judean People’s Front. Steph confirmed that those kind of rifts also exist in solarpunk. It’s just like that bit in Gulliver’s Travels where nations go to war over the correct way to crack an egg.

But there’s general agreement about what broadly constitutes solarpunk. It’s a form of cli-fi (climate fiction) but with an upbeat spin: positive but plausible stories of the future that might feature communities, rewilding, gardening, farming, energy independence, or decentralisation. Centralised authority—in the form of governments and corporations—is not to be trusted.

That’s all well and good but it reminds me of another community. Libertarian preppers. Heck, even some of the solarpunk examples feature seasteading (but with more trees).

Politically, preppers and solarpunks couldn’t be further apart. Practically, they seem more similar than either of them would be comfortable with.

Both communities distrust centralisation. For the libertarians, this manifests in a hatred of taxation. For solarpunks, it’s all about getting off the electricity grid. But both want to start their own separate self-sustaining communities.

Independence. Decentralisation. Self-sufficiency.

There’s a fine line between Atlas Shrugged and The Whole Earth Catalog.

The moment after eclipse

I’m almost finished reading a collection of short stories by Brian Aldiss. He was such a prolific writer that he produced loads of these collections, readily available from second-hand bookshops, published on cheap pulpy paper.

This collection is called The Moment Of Eclipse. It has some truly weird stories in there, as well as an undisputed classic with Super-Toys Last All Summer Long. I always find it almost unbearably sad.

Only recently, towards the end of the book, did the coincidence of the book’s title strike me: The Moment Of Eclipse.

See, the last time I had the privilege of experiencing a total solar eclipse was on August 21st, 2017. Jessica and I were in Sun Valley, Idaho, right in the path of totality. We found a hill to climb up so we could see the surrounding landscape as the shadow of the moon raced across the Earth.

Checked in at Valley View Trail. Hiked up a hill for the eclipse — with Jessica

When it was over, we climbed down the hill and went online. That’s when I found out. Brian Aldiss had passed away.

Web Share API test

Remember a while back I wrote about some odd behaviour with the Web Share API in Safari on iOS?

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, Airdrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.

That’s a shame. Personally, I think the url field should take precedence.

Tess filed a bug soon after, which was very gratifying to see.

Now Phil has put together a test case:

  1. Share URL, title, and text
  2. Share URL and title
  3. Share URL and text

Very handy! The results (using the “copy” to clipboard action) are somewhat like rock, paper, scissors:

  • URL beats title,
  • text beats URL,
  • nothing beats text.

So it’s more like rock, paper, high explosives.
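
If you want to poke at this yourself, the three cases boil down to calls like these. This is my own rough recreation, not necessarily Phil’s exact code, and the buttons are hypothetical:

// buttonOne, buttonTwo, and buttonThree are hypothetical button elements.
const url = 'https://example.com/';
const title = 'An example page';
const text = 'A description of an example page';

// 1. URL, title, and text: "copy" puts the text on the clipboard.
buttonOne.addEventListener('click', () => navigator.share({ url, title, text }));

// 2. URL and title: "copy" puts the URL on the clipboard.
buttonTwo.addEventListener('click', () => navigator.share({ url, title }));

// 3. URL and text: "copy" puts the text on the clipboard.
buttonThree.addEventListener('click', () => navigator.share({ url, text }));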

Mental models

I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

The Web Share API in Safari on iOS

I implemented the Web Share API over on The Session back when it was first available in Chrome on Android. It’s a nifty and quite straightforward API that allows websites to make use of the “sharing drawer” that mobile operating systems provide from within a web browser.

I already had sharing buttons that popped open links to Twitter, Facebook, and email. You can see these sharing buttons on individual pages for tunes, recordings, sessions, and so on.

I was already intercepting clicks on those buttons. I didn’t have to add too much to also check for support for the Web Share API and trigger that instead:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      text: document.querySelector('meta[name="description"]').getAttribute('content'),
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

That worked a treat. As you can see, there are three fields you can pass to the share() method: title, text, and url. You don’t have to provide all three.
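
One thing that snippet glosses over: share() returns a promise, and it rejects with an AbortError if the user dismisses the share sheet. A slightly more defensive version of the same code might look like this:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      text: document.querySelector('meta[name="description"]').getAttribute('content'),
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  ).catch((error) => {
    // The user backing out of the share sheet isn't an error worth reporting.
    if (error.name !== 'AbortError') {
      console.error(error);
    }
  });
}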

Earlier this year, Safari on iOS shipped support for the Web Share API. I didn’t need to do anything. ‘Cause that’s how standards work. You can make use of APIs before every browser supports them, and then your website gets better and better as more and more browsers add support.

But I recently discovered something interesting about the iOS implementation.

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, Airdrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.

That’s a shame. Personally, I think the url field should take precedence. But I don’t think this is a bug, per se. There’s nothing in the spec to say how operating systems should handle the data sent via the Web Share API. Still, I think it’s a bit counterintuitive. If I’m looking at a web page, and I opt to share it, then surely the URL is the most important piece of data?

I’m not even sure where to direct this feedback. I guess it’s under the purview of the Safari team, but it also touches on OS-level interactions. Either way, I hope that somebody at Apple will consider changing the current behaviour for copying Web Share data to the clipboard.

In the meantime, I’ve decided to update my code to remove the text parameter:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

If the behaviour of Safari on iOS changes, I’ll reinstate the missing field.

By the way, if you’re making progressive web apps that have display: standalone in the web app manifest, please consider using the Web Share API. When you remove the browser chrome, you’re removing the ability for users to easily share URLs. The Web Share API gives you a way to reinstate that functionality.
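
If you don’t already have sharing buttons to intercept, the same progressive enhancement pattern works the other way around: keep a share button hidden in the markup and only reveal it once you know the API is there. Here’s a sketch, with a hypothetical button element:

const shareButton = document.querySelector('.share-button'); // hypothetical button, hidden by default in the HTML
if (navigator.share && shareButton) {
  // Only reveal the button in browsers that can actually do something with it.
  shareButton.hidden = false;
  shareButton.addEventListener('click', () => {
    navigator.share(
      {
        title: document.querySelector('title').textContent,
        url: document.querySelector('link[rel="canonical"]').getAttribute('href')
      }
    ).catch(() => {});
  });
}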

Travel talk

It’s been a busy two weeks of travelling and speaking. Last week I spoke at Finch Conf in Edinburgh, Code Motion in Madrid, and Generate CSS in London. This week I was at Indie Web Camp, View Source, and Fronteers, all in Amsterdam.

The Edinburgh-Madrid-London whirlwind wasn’t ideal. I gave the opening talk at Finch Conf, then immediately jumped in a taxi to get to the airport to fly to Madrid, so I missed all the excellent talks. I had FOMO for a conference I actually spoke at.

I did get to spend some time at Code Motion in Madrid, but that was a waste of time. It was one of those multi-track events where the trade show floor is prioritised over the talks (and the speakers don’t get paid). I gave my talk to a mostly empty room—the classic multi-track experience. On the plus side, I had a wonderful time with Jessica exploring Madrid’s many tapas delights. The food and drink made up for the sub-par conference.

I flew back from Madrid to the UK, and immediately went straight to London to deliver the closing talk of Generate CSS. So once again, I didn’t get to see any of the other talks. That’s a real shame—it sounds like they were all excellent.

The day after Generate though, I took the Eurostar to Amsterdam. That’s where I’ve been ever since. There were just as many events as in the previous week, but because they were all in Amsterdam, I could savour them properly, instead of spending half my time travelling.

Indie Web Camp Amsterdam was excellent, although I missed out on the afternoon discussions on the first day because I popped over to the Mozilla Tech Speakers event happening at the same time. I was there to offer feedback on lightning talks. I really, really enjoyed it.

I’d really like to do more of this kind of thing. There aren’t many activities I feel qualified to give advice on, but public speaking is an exception. I’ve got plenty of experience that I’m eager to share with up-and-coming speakers. Also, I got to see some really great lightning talks!

Then it was time for View Source. There was a mix of talks, panels, and breakout conversation corners. I saw some fantastic talks by people I hadn’t seen speak before: Melanie Richards, Ali Spittal, Sharell Bryant, and Tejas Kumar. I gave the closing keynote, which was warmly received—that’s always very gratifying.

After one day of rest, it was time for Fronteers. This was where Remy and I gave the joint talk we’ve been working on:

Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.

I’m happy to say that it went off without a hitch. Remy definitely had the tougher task—he did a live demo. Needless to say, he did it flawlessly. It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

I’ve got some more speaking engagements ahead of me. Most of them are in Europe so I’m going to do my utmost to travel to them by train. Flying is usually more convenient but it’s terrible for my carbon footprint. I’m feeling pretty guilty about that Madrid trip; I need to make amends.

I’ll be travelling to France next week for Paris Web. Taking the Eurostar is a no-brainer for that one. Straight after that Jessica and I will be going to Frankfurt for the book fair. Taking the train from Paris to Frankfurt will be nice and straightforward.

I’ll be back in Brighton for Indie Web Camp on the weekend of October 19th and 20th—you should come!—and then I’ll be heading off to Antwerp for Full Stack Fest. Anywhere in Belgium is easily reachable by train so that’ll be another Eurostar journey.

After that, it gets a little trickier. I’ll be going to Berlin for Beyond Tellerrand but I’m not sure I can make it work by train. Same goes for Web Clerks in Vienna. Cities that far east are tough to get to by train in a reasonable amount of time (although I realise that, compared to many others, I have the luxury of spending time travelling by train).

Then there are the places that I can only get to by plane. There’s the United States. I’ll be speaking at An Event Apart in San Francisco in December. A flight is unavoidable. Last time we went to the States, Jessica and I travelled by ocean liner. But that isn’t any better for the environment, given the low-grade fuel burned by ships.

And then there’s Ireland. I make trips back there to see my mother, but there’s no alternative to flying or taking a ferry—neither are ideal for the environment. At least I can offset the carbon from my flights; the travel equivalent to putting coins in the swear jar.

Don’t get me wrong—I’m not moaning about the amount of travel involved in going to conferences and workshops. It’s fantastic that I get to go to new and interesting places. That’s something I hope I never take for granted. But I can’t ignore the environmental damage I’m doing. I’ll be making more of an effort to travel by train to Europe’s many excellent web events. While I’m at it, I can ask Paul for his trainspotter expertise.

Split

When I talk about evaluating technology for front-end development, I like to draw a distinction between two categories of technology.

On the one hand, you’ve got the raw materials of the web: HTML, CSS, and JavaScript. This is what users will ultimately interact with.

On the other hand, you’ve got all the tools and technologies that help you produce the HTML, CSS, and JavaScript: pre-processors, post-processors, transpilers, bundlers, and other build tools.

Personally, I’m much more interested and excited by the materials than I am by the tools. But I think it’s right and proper that other developers are excited by the tools. A good balance of both is probably the healthiest mix.

I’m never sure what to call these two categories. Maybe the materials are the “external” technologies, because they’re what users will interact with. Whereas all the other technologies—that mostly live on a developer’s machine—are the “internal” technologies.

Another nice phrase is something I heard during Chris’s talk at An Event Apart in Seattle, when he quoted Brad, who talked about the front of the front end and the back of the front end.

I’m definitely more of a front-of-the-front-end kind of developer. I have opinions on the quality of the materials that get served up to users; the output should be accessible and performant. But I don’t particularly care about the tools that produced those materials on the back of the front end. Use whatever works for you (or whatever works for your team).

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time. Like I said, from a user’s point of view, it’s irrelevant what text editor or version control system you use.

Now, you could make the argument that anything that is good for developer convenience is automatically good for user experience because faster, more efficient development should result in better output. While that’s true in theory, I highly recommend Alex’s post, The “Developer Experience” Bait-and-Switch.

Where it gets interesting is when a technology that’s designed for developer convenience is made out of the very materials being delivered to users. For example, a CSS framework like Bootstrap is made of CSS. That’s different to a tool like Sass which outputs CSS. Whether or not a developer chooses to use Sass is irrelevant to the user—the final output will be CSS either way. But if a developer chooses to use a CSS framework, that decision has a direct impact on the user experience. The user must download the framework in order for the developer to get the benefit.

So whereas Sass sits at the back of the front end—where I don’t care what you use—Bootstrap sits at the front of the front end. For tools like that, I don’t think saying “use whatever works for you” is good enough. It’s got to be weighed against the cost to the user.

Historically, it’s been a similar story with JavaScript libraries. They’re written in JavaScript, and so they’re going to be executed in the browser. If a developer wanted to use jQuery to make their life easier, the user paid the price in downloading the jQuery library.

But I’ve noticed a welcome change with some of the bigger JavaScript frameworks. Whereas the initial messaging around frameworks like React touted the benefits of state management and the virtual DOM, I feel like that’s not as prevalent now. You’re much more likely to hear people—quite rightly—talk about the benefits of modularity and componentisation. If you combine that with the rise of Node—which means that JavaScript is no longer confined to the browser—then these frameworks can move from the front of the front end to the back of the front end.

We’ve certainly seen that at Clearleft. We’ve worked on multiple React projects, but in every case, the output was server-rendered. Developers get the benefit of working with a tool that helps them. Users don’t pay the price.
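
For anyone who hasn’t seen it in practice, here’s a bare-bones sketch of what server-rendering a React component can look like, assuming Express and React are installed (the component itself is made up):

const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');

// A made-up component, just for illustration.
function Page(props) {
  return React.createElement('h1', null, `Hello, ${props.name}`);
}

const app = express();

app.get('/', (request, response) => {
  // The component is rendered to plain HTML on the server...
  const html = renderToString(React.createElement(Page, { name: 'world' }));
  // ...and that HTML is what travels over the wire. No client-side JavaScript required.
  response.send(`<!DOCTYPE html><title>Hello</title><main>${html}</main>`);
});

app.listen(3000);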

For me, this question of whether a framework will be used on the client side or the server side is crucial.

Let me tell you about a Clearleft project that sticks in my mind. We were working with a big international client on a product that was going to be rolled out to students and teachers in developing countries. This was right up my alley! We did plenty of research into network conditions and typical device usage. That then informed a tight performance budget. Every design decision—from web fonts to images—was informed by that performance budget. We were producing lean, mean markup, CSS, and JavaScript. But we weren’t the ones implementing the final site. That was being done by the client’s offshore software team, and they insisted on using React. “That’s okay”, I thought. “React can be used server-side so we can still output just what’s needed, right?” Alas, no. These developers did everything client side. When the final site launched, the log-in screen alone required megabytes of JavaScript just to render a form. It was, in my opinion, entirely unfit for purpose. It still pains me when I think about it.

That was a few years ago. I think that these days it has become a lot easier to make the decision to use a framework on the back of the front end. Like I said, that’s certainly been the case on recent Clearleft projects that involved React or Vue.

It surprises me, then, when I see the question of server rendering or client rendering treated almost like an implementation detail. It might be an implementation detail from a developer’s perspective, but it’s a key decision for the user experience. The performance cost of putting your entire tech stack into the browser can be enormous.

Alex Sanders from the development team at The Guardian published a post recently called Revisiting the rendering tier. In it, he describes how they’re moving to React. Now, if this were a move to client-rendered React, that would make a big impact on the user experience. The thing is, I couldn’t tell from the article whether React was going to be used in the browser or on the server. The article talks about “rendering”—which is something that browsers do—and “the DOM”—which is something that only exists in browsers.

So I asked. It turns out that this plan is very much about generating HTML and CSS on the server before sending it to the browser. Excellent!

With that question answered, I’m cool with whatever they choose to use. In this case, they’re choosing to use CSS-in-JS (although, to be pedantic, there’s no C anymore so technically it’s SS-in-JS). As long as the “JS” part is JavaScript on a server, then it makes no difference to the end user, and therefore no difference to me. Not my circus, not my monkeys. For users, the end result is the same whether styling is applied via a selector in an external stylesheet or, for example, via an inline style declaration (and in some situations, a server-rendered CSS-in-JS solution might be better for performance). And so, as a user-centred developer, this is something that I don’t need to care about.

Except…

I have misgivings. But just to be clear, these misgivings have nothing to do with users. My misgivings are entirely to do with another group of people: the people who make websites.

There’s a second-order effect. By making React—or even JavaScript in general—a requirement for styling something on a web page, the barrier to entry is raised.

At least, I think that the barrier to entry is raised. I completely acknowledge that this is a subjective judgement. In fact, the reason why a team might decide to make JavaScript a requirement for participation might well be because they believe it makes it easier for people to participate. Let me explain…

It wasn’t that long ago that devs coming from a Computer Science background were deriding CSS for its simplicity, complaining that “it’s broken” and turning their noses up at it. That rhetoric, thankfully, is waning. Nowadays they’re far more likely to acknowledge that CSS might be simple, but it isn’t easy. Concepts like the cascade and specificity are real head-scratchers, and any prior knowledge from imperative programming languages won’t help you in this declarative world—all your hard-won experience and know-how isn’t fungible. Instead, it seems as though all this cascading and specificity is butchering the modularity of your nicely isolated components.

It’s no surprise that programmers with this kind of background would treat CSS as damage and find ways to route around it. The many flavours of CSS-in-JS are testament to this. From a programmer’s point of view, this solution has made things easier. Best of all, as long as it’s being done on the server, there’s no penalty for end users. But now the price is paid in the diversity of your team. In order to participate, a Computer Science programming mindset is now pretty much a requirement. For someone coming from a more declarative background—with really good HTML and CSS skills—everything suddenly seems needlessly complex. And as Tantek observed:

Complexity reinforces privilege.

The result is a form of gatekeeping. I don’t think it’s intentional. I don’t think it’s malicious. It’s being done with the best of intentions, in pursuit of efficiency and productivity. But these code decisions are reflected in hiring practices that exclude people with different but equally valuable skills and perspectives.

Rachel describes HTML, CSS and our vanishing industry entry points:

If we make it so that you have to understand programming to even start, then we take something open and enabling, and place it back in the hands of those who are already privileged.

I think there’s a comparison here with toxic masculinity. Toxic masculinity is obviously terrible for women, but it’s also really shitty for men in the way it stigmatises any male behaviour that doesn’t fit its worldview. Likewise, if the only people your team is interested in hiring are traditional programmers, then those programmers are going to resent having to spend their time dealing with semantic markup, accessibility, styling, and other disciplines that they never trained in. Heydon correctly identifies this as reluctant gatekeeping:

By assuming the role of the Full Stack Developer (which is, in practice, a computer scientist who also writes HTML and CSS), one takes responsibility for all the code, in spite of its radical variance in syntax and purpose, and becomes the gatekeeper of at least some kinds of code one simply doesn’t care about writing well.

This hurts everyone. It’s bad for your team. It’s even worse for the wider development community.

Last year, I was asked “Is there a fear or professional challenge that keeps you up at night?” I responded:

My greatest fear for the web is that it becomes the domain of an elite priesthood of developers. I firmly believe that, as Tim Berners-Lee put it, “this is for everyone.” And I don’t just mean it’s for everyone to use—I believe it’s for everyone to make as well. That’s why I get very worried by anything that raises the barrier to entry to web design and web development.

I’ve described a number of dichotomies here:

  • Materials vs. tools,
  • Front of the front end vs. back of the front end,
  • User experience vs. developer experience,
  • Client-side rendering vs. server-side rendering,
  • Declarative languages vs. imperative languages.

But the split that worries me the most is this:

  • The people who make the web vs. the people who are excluded from making the web.

60 seconds over Idaho

I lived in Germany for the latter half of the nineties. On August 11th, 1999, parts of Germany were in the path of a total eclipse of the sun. Freiburg—the town where I was living—wasn’t in the path, so Jessica and I travelled north with some friends to Karlsruhe.

The weather wasn’t great. There was quite a bit of cloud coverage, but at the moment of totality, the clouds had thinned out enough for us to experience the incredible sight of a black sun.

(The experience was only slightly marred by the nearby idiot who took a picture with the flash on right before totality. Had my eyesight not adjusted in time, he would still be carrying that camera around with him in an anatomically uncomfortable place.)

Eighteen years and eleven days later, Jessica and I climbed up a hill to see our second total eclipse of the sun. The hill is in Sun Valley, Idaho.

Here comes the sun.

Travelling thousands of miles just to witness something that lasts for a minute might seem disproportionate, but if you’ve ever been in the path of totality, you’ll know what an awe-inspiring sight it is (if you’ve only seen a partial eclipse, trust me—there’s no comparison). There’s a primitive part of your brain screaming at you that something is horribly, horribly wrong with the world, while another part of your brain is simply stunned and amazed. Then there’s the logical part of your brain which is trying to grasp the incredible good fortune of this cosmic coincidence—that the sun is 400 times bigger than the moon and also happens to be 400 times the distance away.

This time viewing conditions were ideal. Not a cloud in the sky. It was beautiful. We even got a diamond ring.

I like to think I can be fairly articulate, but at the moment of totality all I could say was “Oh! Wow! Oh! Holy shit! Woah!”

Totality

Our two eclipses were separated by eighteen years, but they’re connected. The Saros 145 cycle has been repeating since 1639 and will continue until 3009, although the number of total eclipses only runs from 1927 to 2648.

Eighteen years and twelve days ago, we saw the eclipse in Germany. Yesterday we saw the eclipse in Idaho. In eighteen years and ten days time, we plan to be in Japan or China.

Certbot renewals with Apache

I wrote a while back about switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean. In that post, I pointed to an example .conf file.

I’ve been having a few issues with my certificate renewals with Certbot (the artist formerly known as Let’s Encrypt). If I did a dry-run for renewing my certificates…

/etc/certbot-auto renew --dry-run

… I kept getting this message:

Encountered vhost ambiguity but unable to ask for user guidance in non-interactive mode. Currently Certbot needs each vhost to be in its own conf file, and may need vhosts to be explicitly labelled with ServerName or ServerAlias directives. Falling back to default vhost *:443…

It turns out that Certbot doesn’t like HTTP and HTTPS configurations being lumped into one .conf file. Instead it expects to see all the port 80 stuff in a domain.com.conf file, and the port 443 stuff in a domain.com-ssl.conf file.

So I’ve taken that original .conf file and split it up into two.

First I SSH’d into my server and went to the Apache directory where all these .conf files live:

cd /etc/apache2/sites-available

Then I copied the current (single) file to make the SSL version:

cp yourdomain.com.conf yourdomain.com-ssl.conf

Time to fire up one of those weird text editors to edit that newly-created file:

nano yourdomain.com-ssl.conf

I deleted everything related to port 80—all the stuff between (and including) the VirtualHost *:80 tags:

<VirtualHost *:80>
...
</VirtualHost>

Hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

Now I do the opposite for the original file:

nano yourdomain.com.conf

Delete everything related to VirtualHost *:443:

<VirtualHost *:443>
...
</VirtualHost>

Once again, I hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

Now I need to tell Apache about the new .conf file:

a2ensite yourdomain.com-ssl.conf

I’m told that’s cool and all, but that I need to restart Apache for the changes to take effect:

service apache2 restart

Now when I test the certificate renewing process…

/etc/certbot-auto renew --dry-run

…everything goes according to plan.

Where to start?

A lot of the talks at this year’s Chrome Dev Summit were about progressive web apps. This makes me happy. But I think the focus is perhaps a bit too much on the “app” part and not enough on the “progressive” part.

What I mean is that there’s an inevitable tendency to focus on technologies—Service Workers, HTTPS, manifest files—and not so much on the approach. That’s understandable. The technologies are concrete, demonstrable things, whereas approaches, mindsets, and processes are far more nebulous in comparison.

Still, I think that the most important facet of building a robust, resilient website is how you approach building it rather than what you build it with.

Many of the progressive app demos use server-side and client-side rendering, which is great …but that aspect tends to get glossed over:

Browsers without service worker support should always be served a fall-back experience. In our demo, we fall back to basic static server-side rendering, but this is only one of many options.

I think it’s vital to not think in terms of older browsers “falling back” but to think in terms of newer browsers getting a turbo-boost. That may sound like a nit-picky semantic subtlety, but it’s actually a radical difference in mindset.

Many of the arguments I’ve heard against progressive enhancement—like Tom’s presentation at Responsive Field Day—talk about the burdensome overhead of having to bolt on functionality for older or less-capable browsers (even Jake has done this). But the whole point of progressive enhancement is that you start with the simplest possible functionality for the greatest number of users. If anything gets bolted on, it’s the more advanced functionality for the newer or more capable browsers.
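
Service worker registration is a textbook example of that. The registration only happens in browsers that support the API; everyone else simply carries on getting the same working site. Something like this, where the file name is just a placeholder:

if ('serviceWorker' in navigator) {
  // The enhancement is bolted on for newer browsers;
  // nothing is bolted on for older ones.
  navigator.serviceWorker.register('/serviceworker.js')
    .then((registration) => {
      console.log('Service worker registered with scope', registration.scope);
    })
    .catch((error) => {
      // If registration fails, the site keeps working exactly as it did before.
      console.error('Service worker registration failed:', error);
    });
}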

So if your conception of progressive enhancement is that it’s an added extra, I think you really need to turn that thinking around. And that’s hard. It’s hard because you need to rewire some well-engrained pathways.

There is some precedent for this though. It was really, really hard to convince people to stop using tables for layout and start using CSS instead. That was a tall order—completely change the way you approach building on the web. But eventually we got there.

When Ethan came out with Responsive Web Design, it was an equally difficult pill to swallow, not because of the technologies involved—media queries, percentages, etc.—but because of the change in thinking that was required. But eventually we got there.

These kinds of fundamental changes are inevitably painful …at first. After years of building websites using tables for layout, creating your first CSS-based layout was demoralisingly difficult. But the second time was a bit easier. And the third time, easier still. Until eventually it just became normal.

Likewise with responsive design. After years of building fixed-width websites, trying to build in a fluid, flexible way was frustratingly hard. But the second time wasn’t quite as hard. And the third time …well, eventually it just became normal.

So if you’re used to thinking of the all-singing, all-dancing version of your site as the starting point, it’s going to be really, really hard to instead start by building the most basic, accessible version first and then work up to the all-singing, all-dancing version …at first. But eventually it will just become normal.

For now, though, it’s going to take work.

The recent redesign of Google+ is a true case study in building a performant, responsive, progressive site:

With server-side rendering we make sure that the user can begin reading as soon as the HTML is loaded, and no JavaScript needs to run in order to update the contents of the page. Once the page is loaded and the user clicks on a link, we do not want to perform a full round-trip to render everything again. This is where client-side rendering becomes important — we just need to fetch the data and the templates, and render the new page on the client. This involves lots of tradeoffs; so we used a framework that makes server-side and client-side rendering easy without the downside of having to implement everything twice — on the server and on the client.

This took work. Had they chosen to rely on client-side rendering alone, they could have built something quicker. But I think it was worth laying that solid foundation. And the next time they need to build something this way, it’s going to be less work. Eventually it just becomes normal.

But it all starts with thinking of the server-side rendering as the default. Server-side rendering is not a fallback; client-side rendering is an enhancement.
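
In code, that mindset might look something like the following sketch: the links in the page work on their own, and the fetch-driven rendering is layered on top only when the browser can handle it. The data-enhance attribute and the main element are my own assumptions here.

if ('fetch' in window && 'DOMParser' in window) {
  document.addEventListener('click', async (event) => {
    const link = event.target.closest ? event.target.closest('a[data-enhance]') : null;
    if (!link) return;
    event.preventDefault();
    try {
      const response = await fetch(link.href);
      const markup = await response.text();
      const doc = new DOMParser().parseFromString(markup, 'text/html');
      // Swap in just the main content instead of doing a full page load.
      document.querySelector('main').innerHTML = doc.querySelector('main').innerHTML;
      history.pushState({}, '', link.href);
    } catch (error) {
      // If anything goes wrong, fall back to a normal page navigation.
      window.location.href = link.href;
    }
  });
}

The server-rendered version is the baseline; this is just a bonus for browsers that can take advantage of it.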

That’s exactly the kind of mindset that enables Jack Franklin to build robust, resilient websites:

Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end.

I had a chance to chat briefly with Jack at the Edge conference in London and I congratulated him on the launch of a Go Cardless site that used exactly this technique. He told me that the decision to flip the switch and make it act as a single page app came right at the end of the project. Server-side rendering was the default; client-side rendering was added later.

The key to building modern, resilient, progressive sites doesn’t lie in browser technologies or frameworks; it lies in how we think about the task at hand; how we approach building from the ground up rather than the top down. Changing the way we fundamentally think about building for the web is inevitably going to be challenging …at first. But it will also be immensely rewarding.

Communication for America

Mandy has written a great article about making remote teams work. It’s an oft-neglected aspect of working on a product when you’ve got people distributed geographically.

But remote communication isn’t just something that’s important for startups and product companies—it’s equally important for agencies when it comes to client communication.

At Clearleft, we occasionally work with clients right here in Brighton, but that’s the exception. More often than not, the clients are based in London, or somewhere else in the UK. In the case of Code for America, they’re based in San Francisco—that’s eight or nine timezones away (depending on the time of year).

As it turned out, it wasn’t a problem at all. In fact, it worked out nicely. At the end of every day, we had a quick conference call, with two or three people at our end, and two or three people at their end. For us, it was the end of the day: 5:30pm. For them, the day was just starting: 9:30am.

We’d go through what we had been doing during that day, ask any questions that had cropped up over the course of the day, and let them know if there was anything we needed from them. If there was anything we needed from them, they had the whole day to put it together while we went home. The next morning (from our perspective), it would be waiting in our in/drop-boxes.

Meanwhile, from the perspective of Code for America, they were coming into the office every morning and starting the day with a look over our work, as though we had been beavering away throughout the night.

Now, it would be easy for me to extrapolate from this that this way of working is great and everyone should do it. But actually, the whole timezone difference was a red herring. The real reason why the communication worked so well throughout the project was because of the people involved.

Right from the start, it was clear that, because of time and budget constraints, we’d have to move fast. We wouldn’t have the luxury of debating everything in detail and getting every decision signed off. Instead we had a sort of “rough consensus and running code” approach that worked really well. It worked because everyone understood that was what was happening—if just one person was expecting a more formalised structure, I’m sure it wouldn’t have gone quite so smoothly.

So we provided materials in whatever level of fidelity made sense for the idea under discussion. Sometimes that was a quick sketch. Sometimes it was a fairly high-fidelity mockup. Sometimes it was a module of markup and CSS. Whatever it took.

Most of all, there was a great feeling of trust on both sides of the equation. It was clear right from the start that the people at Code for America were super-smart and weren’t going to make any outlandish or unreasonable requests of Clearleft. Instead they gave us just the right amount of guidance and constraints, while trusting us to make good decisions.

At one point, Jon was almost complaining about not getting pushback on his designs. A nice complaint to have.

Because of the daily transatlantic “stand up” via teleconference, there was a great feeling of inevitability to the project as it came together from idea to execution. Inevitability doesn’t sound like a very sexy attribute of a web project, but it’s far preferable to the kind of project that involves milestones of “big reveals”—the Mad Men approach to project management.

Oh, and we made sure that we kept those transatlantic calls nice and short. They never lasted longer than 10 or 15 minutes. We wanted to avoid the many pitfalls of conference calls.

Shape shifting

I’ve been with the same ISP for years: Eclipse Internet. I never had any cause to complain until recently. I’d been finding that certain types of requests simply weren’t completing: file uploads, some forms, Ajax requests…

I started googling for any similar reports. I found quite a few forum posts, all of them expressing the same sentiment: that Eclipse used to be good but that since getting bought out by Kingston, service had really gone downhill. Most alarmingly of all, I read reports of traffic shaping.

I called up technical support. They didn’t deny it. Instead, they tried to upsell me on a package that would give me an admin panel to allow more control over exactly how my packets were throttled.

Screw. That.

I started shopping around for a new ISP. When I twittered what I was doing, I received some good recommendations. Eventually, I was able to narrow my search down to two providers who both sounded good: Zen and IDNet. With little else to distinguish between the two companies, their respective websites became the deciding factor. That settled it. IDNet was the clear winner. Not only is their site prettier, it also validates and even uses hCard.

To cancel my Eclipse account, I needed to call them to request a MAC (Migration Authorisation Code). Of course the reason why they make you call is so that they can try to persuade you not to leave. Sure enough, the guy who took my call asked, “And can you tell me why you’re moving away from Eclipse?”

“Traffic shaping,” I responded.

“Ah right,” he said. “Technical reasons.”

I corrected him. “Moral reasons, actually.”

I got the info I needed. I ordered my new broadband service. Today I switched over.

If you want to find out if your broadband provider is a filthy traffic-shaper, check to see if it’s on one of the lists of known traffic-shaping ISPs.

If you find yourself changing to a more ethical ISP, you too will probably be asked to explain why you’re jumping ship. Just tell them what you want:

Fat Pipe, Always On, Get Out of the Way.