Journal tags: http


Increment by increment

The bedrock of the World Wide Web is solid. Built atop the protocols of the internet (TCP/IP), its fundamental building blocks remain: URLs of HTML files transmitted over HTTP. Baldur Bjarnason writes:

Even today, the web is like a living fossil, a preserved relic from a different era. Anybody can put up a website. Anybody can run a business over it. I can build an app or service, send the URL to anybody I like, and most people in the world will be able to run it without asking anybody’s permission.

Still, the web has evolved. In fact, that evolution is something that’s also built into its fundamental design. Rather than try to optimise the World Wide Web for one particular use-case, Tim Berners-Lee realised the power of being flexible. Like the internet, the World Wide Web is deliberately dumb.

(I get very annoyed when people talk about the web as being designed for scientific work at CERN. That was merely the first use-case. The web was designed for everything …and nothing in particular.)

Robin Berjon compares the web’s evolution to the ship of Theseus:

That’s why it’s been so hard to agree about what the Web is: the Web is architected for resilience which means that it adapts and transforms. That flexibility is the reason why I’m talking about some mythological dude’s boat. Altogether too often, we consider some aspects of the Web as being invariants when they’re potentially just as replaceable as any other part. This isn’t to say that there are no invariants on the Web.

The web can be changed. That’s both a comfort and a warning. There’s plenty that we should change about today’s web. But there’s also plenty—at the root level—that we should fight to preserve.

And if you want change, the worst way to go about it is to promulgate the notion of burning everything down and starting from scratch. As Erin says in the fourth and final part of her devastating series on Meta in Myanmar:

We don’t get a do-over planet. We won’t get a do-over network.

Instead, we have to work with the internet we made and find a way to rebuild and fortify it to support the much larger projects of repair—political, cultural, environmental—that are required for our survival.

Though, as Robin points out, that doesn’t preclude us from sharing a vision:

Proceeding via small, incremental changes can be a laudable approach, but even then it helps to have a sense for what it is that those small steps are supposed to be incrementing towards.

I’m looking forward to reading what Robin puts forward, particularly because he says “I’m no technosolutionist.”

From a technical perspective, the web has never been better. We have incredible features in HTML, CSS, and JavaScript, all standardised and with amazing interoperability between browsers. The challenges that face the web today are not technical.

That’s one of the reasons why I have no patience for the web3 crowd. Apart from the ridiculous name, they’re focusing on exactly the wrong part of the stack.

Listen to their pitch and they’ll point out that while, yes, the fundamental bedrock of the web is indeed decentralised—TCP/IP, HTTP(S)—what’s been constructed on that foundation is increasingly centralised, in the hands of power brokers like Google, Meta, and Amazon.

And what’s the solution they propose? Replace the underlying infrastructure with something-something-blockchain.

Would that it were so simple.

The problems of today’s web are not technical in nature. The problems of today’s web won’t be solved by technology. If we’re going to solve the problems of today’s web, we’ll need to do it through law, culture, societal norms, and co-operation.

(Feel free to substitute “today’s web” with “tomorrow’s climate”.)

02022-02-22

Eleven years ago, I made a prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

One year later, Matt called me on it and the prediction officially became a bet:

We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.

I’m very happy to lose this bet.

When I made the original prediction eleven years ago that a URL on the longbets.org site would no longer be available, I did so in a spirit of mischief—it was a deliberately meta move. But it was also informed by a genuine feeling of pessimism around the longevity of links on the web. While that pessimism was misplaced in this case, it was informed by data.

The lifetime of a URL on the web remains shockingly short. What I think has changed in the intervening years is that people may have become more accustomed to the situation. People used to say “once something is online it’s there forever!”, which infuriated me because the real problem is the exact opposite: if you put something online, you have to put in real effort to keep it online. After all, we don’t really buy domain names; we just rent them. And if you publish on somebody else’s domain, you’re at their mercy: Geocities, MySpace, Facebook, Medium, Twitter.

These days my view towards the longevity of online content has landed somewhere in the middle of the two dangers. There’s a kind of Murphy’s Law around data online: anything that you hope will stick around will probably disappear and anything that you hope will disappear will probably stick around.

One huge change in the last eleven years that I didn’t anticipate is the migration of websites to HTTPS. The original URL of the prediction used HTTP. I’m glad to see that original URL now redirects to a more secure protocol. Just like most of the World Wide Web. I think we can thank Let’s Encrypt for that. But I think we can also thank Edward Snowden. We are no longer as innocent as we were eleven years ago.

I think if I could tell my past self that most of the web would be using HTTPS by 2022, my past self would be very surprised …’though not as surprised as at discovering that time travel had also apparently been invented.

The Internet Archive has also been a game-changer for digital preservation. While it’s less than ideal that something isn’t reachable at its original URL, knowing that there’s probably a copy of the content at archive.org lessens the sting considerably. I couldn’t be happier that this fine institution is the recipient of the stakes of this bet.

SafarIE

I was moaning about Safari recently. Specifically I was moaning about the ridiculous way that browser updates are tied to operating system updates.

But I felt bad bashing Safari. It felt like a pile-on. That’s because a lot of people have been venting their frustrations with Safari recently.

I think it’s good that people share their frustrations with browsers openly, although I agree with Baldur Bjarnason that it’s good to avoid Kremlinology and the motivational fallacy when blogging about Apple.

It’s also not helpful to make claims like “Safari is the new Internet Explorer!” Unless, that is, you can back up the claim.

On a recent episode of the HTTP 203 podcast, Jake and Surma set out to test the claim that Safari is the new IE. They did it by examining Safari according to a number of different measurements and comparing it to the olden days of Internet Explorer. The result is a really fascinating trip down memory lane along with a very nuanced and even-handed critique of Safari.

And the verdict? Well, you’ll just have to listen to the podcast episode.

If you’d rather read the transcript, tough luck. That’s a real shame because, like I said, it’s an excellent and measured assessment. I’d love to add it to the links section of my site, but I can’t do that in good conscience while it’s inaccessible to the Deaf community.

When I started the Clearleft podcast, it was a no-brainer to have transcripts of every episode. Not only does it make the content more widely available, but it also makes it easier for people to copy and paste choice quotes.

Still, I get it. A small plucky little operation like Google isn’t going to have the deep pockets of a massive corporation like Clearleft. But if Jake and Surma were to open up a tip jar, I’d throw some money in to get HTTP 203 transcribed (I recommend getting Tina Pham to do it—she’s great!).

I apologise for my note of sarcasm there. But I share because I care. It really is an excellent discussion; one that everyone should be able to access.

Update: the bug with that episode of the HTTP 203 podcast has been fixed. Here’s the transcript! And all future episodes will have transcripts too.

Insecure …again

Back in March, I wrote about a dilemma I was facing. I could make the certificates on The Session more secure. But if I did that, people using older Android and iOS devices could no longer access the site:

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone.

In the end, I decided in favour of access. But now this issue has risen from the dead. And this time, it doesn’t matter what I think.

Let’s Encrypt are changing the way their certificates work and once again, it’s people with older devices who are going to suffer:

Most notably, this includes versions of Android prior to 7.1.1. That means those older versions of Android will no longer trust certificates issued by Let’s Encrypt.

This makes me sad. It’s another instance of people being forced to buy new devices. Last time ‘round, my dilemma was choosing between security and access. This time, access isn’t an option. It’s a choice between security and the environment (assuming that people are even in a position to get new devices—not an assumption I’m willing to make).

But this time it’s out of my hands. Let’s Encrypt certificates will stop working on older devices and a whole lotta websites are suddenly going to be inaccessible.

I could look at using a different certificate authority, one I’d have to pay for. It feels a bit galling to have to go back to the scammy world of paying for security—something that Let’s Encrypt has taught us should quite rightly be free. But accessing a website should also be free. It shouldn’t come with the price tag of getting a new device.

Insecure

Universal access is at the heart of the World Wide Web. It’s also something I value when I’m building anything on the web. Whatever I’m building, I want you to be able to visit using whatever browser or device that you choose.

Just to be clear, that doesn’t mean that you’re going to have the same experience in an old browser as you are in the latest version of Firefox or Chrome. Far from it. Not only is that not feasible, I don’t believe it’s desirable either. But if you’re using an old browser, while you might not get to enjoy the newest CSS or JavaScript, you should still be able to access a website.

Applying the principle of progressive enhancement makes this eminently doable. As long as I build in a layered way, everyone gets access to the barebones HTML, even if they can’t experience newer features. Crucially, as long as I’m doing some feature detection, those newer features don’t harm older browsers.
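To illustrate (my own example rather than anything from this site’s code, and the class name is made up), feature detection in CSS can be as simple as an @supports block:

@supports (display: grid) {
  /* only browsers that understand grid get the fancy layout; everyone else keeps the single-column flow */
  .archive {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
  }
}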

But there’s one area where maintaining backward compatibility might well have an adverse effect on modern browsers: security.

I don’t just mean whether or not you’re serving sites over HTTPS. Even if you’re using TLS—Transport Layer Security—not all security is created equal.

Take a look at Mozilla’s very handy SSL Configuration Generator. You get to choose from three options:

  1. Modern. Services with clients that support TLS 1.3 and don’t need backward compatibility.
  2. Intermediate. General-purpose servers with a variety of clients, recommended for almost all systems.
  3. Old. Compatible with a number of very old clients, and should be used only as a last resort.

Because I value universal access, I should really go for the “old” setting. That ensures my site is accessible all the way back to Android 2.3 and Safari 1. But if I do that, I will be supporting TLS 1.0. That’s not good. My site is potentially vulnerable.

Alright then, I’ll go for “intermediate”—that’s the recommended level anyway. Now I’m no longer providing TLS 1.0 support. But that means some older browsers can no longer access my site.
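For context, the “intermediate” level boils down to Apache directives along these lines. Treat this as a sketch rather than the current recommendation, and note that I’ve abridged the cipher list heavily (use the generator itself for the real thing):

SSLEngine on
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305
# the "old" level keeps the older protocols (and many more ciphers) switched on instead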

This is exactly the situation I found myself in with The Session. I had a score of A+ from SSL Labs. I was feeling downright smug. Then I got emails from actual users. One had picked up an old Samsung tablet second hand. Another was using an older version of Safari. Neither could access the site.

Sure enough, if you cut off TLS 1.0, you cut off Safari below version six.

Alright, then. Can’t they just upgrade? Well …no. Apple has tied Safari to OS X. If you can’t upgrade your operating system, you can’t upgrade your browser. So if you’re using OS X Mountain Lion, you’re stuck with an insecure version of Safari.

Fortunately, you can use a different browser. It’s possible to install, say, Firefox 37 which supports TLS 1.2.

On desktop, that is. If you’re using an older iPhone or iPad and you can’t upgrade to a recent version of iOS, you’re screwed.

This isn’t an edge case. This is exactly the kind of usage that iPads excel at: you got the device a few years back just to do some web browsing and not much else. It still seems to work fine, and you have no incentive to buy a brand new iPad. And nor should you have to.

In that situation, you’re stuck using an insecure browser.

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone. (That’s what I’ve done on The Session and now my score is capped at B.)

What I can’t do is tell you to install a different browser, because you literally can’t. Sure, technically you can install something called Firefox from the App Store, or you can install something called Chrome. But neither have anything to do with their desktop counterparts. They’re differently skinned versions of Safari.

Apple refuses to allow browsers with any other rendering engine to be installed. Their reasoning?

Security.

Browser defaults

I’ve been thinking about some of the default behaviours that are built into web browsers.

First off, there’s the decision that a browser makes if you enter a web address without a protocol. Let’s say you type in example.com without specifying whether you’re looking for http://example.com or https://example.com.

Browsers default to HTTP rather than HTTPS. Given that HTTP is older than HTTPS, that makes sense. But given that there’s been such a push for TLS on the web, and the huge increase in sites served over HTTPS, I wonder if it’s time to reconsider that default?

Most websites that are served over HTTPS have an automatic redirect from HTTP to HTTPS (enforced with HSTS). There’s an ever so slight performance hit from that, at least for the very first visit. If, when no protocol is specified, browsers were to attempt to reach the HTTPS port first, we’d get a little bit of a speed improvement.
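That redirect is usually just a few lines of server configuration. Here’s a hedged Apache sketch, with example.com standing in for a real domain and mod_headers assumed for the HSTS part:

<VirtualHost *:80>
    ServerName example.com
    # this round trip is the slight performance hit for first-time visitors
    Redirect permanent / https://example.com/
</VirtualHost>

# …and in the *:443 virtual host, so that return visits skip the redirect entirely:
Header always set Strict-Transport-Security "max-age=31536000"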

But would that break any existing behaviour? I don’t know. I guess there would be a bit of a performance hit in the other direction. That is, the browser would try HTTPS first, and when that doesn’t exist, go for HTTP. Sites served only over HTTP would suffer that little bit of lag.

Whatever the default behaviour, some sites are going to pay that performance penalty. Right now it’s being paid by sites that are served over HTTPS.

Here’s another browser default that Rob mentioned recently: the viewport meta tag:

I thought I might be able to get away with omitting meta name="viewport". Apparently not! Maybe someday.

This all goes back to the default behaviour of Mobile Safari when the iPhone was first released. Most sites wouldn’t display correctly if one pixel were treated as one pixel. That’s because most sites were built with the assumption that they would be viewed on monitors rather than phones. Only weirdos like me were building sites without that assumption.

So the default behaviour in Mobile Safari is to assume a page width of 980 pixels, and then shrink that down to fit on the screen …unless the developer over-rides that behaviour with a viewport meta tag. That default behaviour was adopted by other mobile browsers. I think it’s a universal default.

But the web has changed since the iPhone was released in 2007. Responsive design has swept the web. What would happen if mobile browsers were to assume width=device-width?

The viewport meta element always felt like a (proprietary) band-aid rather than a long-term solution—for one thing, it’s the kind of presentational information that belongs in CSS rather than HTML. It would be nice if we could bid it farewell.

CSS for all

There have been some great new CSS properties and values shipping in Firefox recently.

Miriam Suzanne explains the difference between the newer revert value and the older inherit, initial and unset values in a video on the Mozilla Developer channel:

display: revert;
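To make the difference concrete, here’s a small example of my own (not from the video). For a div, initial resolves to the property’s specification default, whereas revert falls back to whatever the browser’s own stylesheet says:

div.one { display: initial; } /* computes to inline: the initial value of the display property */
div.two { display: revert; }  /* computes to block: what the user-agent stylesheet gives a div */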

In another video, Jen describes some new properties for styling underlines (on links, for example):

text-decoration-thickness: 0.1em;
text-decoration-color: red;
text-underline-offset: 0.2em;
text-decoration-skip-ink: auto;

Great stuff!

As far as I can tell, all of these properties are available to you regardless of whether you are serving your website over HTTP or over HTTPS. That may seem like an odd observation to make, but I invite you to cast your mind back to January 2018. That’s when the Mozilla Security Blog posted about moving to secure contexts everywhere:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

(emphasis mine)

Buzz Lightyear says to Woody: Secure contexts …secure contexts everywhere!

Despite that “effective immediately” clause, I haven’t observed any of the new CSS properties added in the past two years to be restricted to HTTPS. I’m glad about that. I wrote about this announcement at the time:

I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement.

There’s no official word from the Mozilla Security Blog about any change to their two-year old “effective immediately” policy, and the original blog post hasn’t been updated. Maybe we can all just pretend it never happened.

Web talk

At the start of this month I was in Amsterdam for a series of back-to-back events: Indie Web Camp Amsterdam, View Source, and Fronteers. That last one was where Remy and I debuted a talk we’d been working on.

The Fronteers folk have been quick off the mark so the video is already available. I’ve also published the text of the talk here:

How We Built The World Wide Web In Five Days

This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about half way through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.

At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.

But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.

I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.

There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.

We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:

It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.

This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.

In the meantime, you can read the text, watch the video, or look at the slides (but the slides really don’t make much sense in isolation).

HTTPS + service worker + web app manifest = progressive web app

I gave a quick talk at the Delta V conference in London last week called Any Site can be a Progressive Web App. I had ten minutes, but frankly I only needed enough time to say the title of the talk because, well, that was also the message.

There’s a common misconception that making a Progressive Web App means creating a Single Page App with an app-shell architecture. But the truth is that literally any website can benefit from the performance boost that results from the combination of HTTPS + Service Worker + Web App Manifest.

See how I define a progressive web app as being HTTPS + service worker + web app manifest? I’ve been doing that for a while. Here’s a post from last year called Progressing the web:

Literally any website can be a progressive web app:

  1. Serve it over HTTPS,
  2. Add a web app manifest, and
  3. Add a service worker.

That last step can be tricky if you’re new to service workers, but it’s not insurmountable. It’s certainly a lot easier than completely rearchitecting your existing website to be a JavaScript-driven single page app.

Later I wrote a post called What is a Progressive Web App? where I compared the definition to responsive web design.

Regardless of the specifics of the name, what I like about Progressive Web Apps is that they have a clear definition. It reminds me of Responsive Web Design. Whatever you think of that name, it comes with a clear list of requirements:

  1. A fluid layout,
  2. Fluid images, and
  3. Media queries.

Likewise, Progressive Web Apps consist of:

  1. HTTPS,
  2. A service worker, and
  3. A Web App Manifest.

There’s more you can do in addition to that (just as there’s plenty more you can do on a responsive site), but the core definition is nice and clear.

But here’s the thing. Outside of the confines of my own website, it’s hard to find that definition anywhere.

On Google’s developer site, their definition uses adjectives like “reliable”, “fast”, and “engaging”. Those are all great adjectives, but more useful to a salesperson than a developer.

Over on the Mozilla Developer Network, their section on progressive web apps states:

Progressive web apps use modern web APIs along with traditional progressive enhancement strategy to create cross-platform web applications. These apps work everywhere and provide several features that give them the same user experience advantages as native apps. 

Hmm …I’m not so sure about that comparison to native apps (and I’m a little disturbed that the URL structure is /Apps/Progressive). So let’s click through to the introduction:

PWAs are web apps developed using a number of specific technologies and standard patterns to allow them to take advantage of both web and native app features.

Okay. Specific technologies. That’s good to hear. But instead of then listing those specific technologies, we’re given another list of adjectives (discoverable, installable, linkable, etc.). Again, like Google’s chosen adjectives, they’re very nice and desirable, but not exactly useful to someone who wants to get started making a progressive web app. That’s why I like to cut to the chase and say:

  • You need to be running on HTTPS,
  • Then you can add a service worker,
  • And don’t forget to add a web app manifest file.

If you do that, you’ve got a progressive web app. Now, to be fair, there’s a lot that I’m leaving out. Your site should be fast. Your site should be responsive (it is, after all, on the web). There’s not much point mucking about with service workers if you haven’t sorted out the basics first. But those three things—HTTPS + service worker + web app manifest—are specifically what distinguishes a progressive web app. You can—and should—have a reliable, fast, engaging website before turning it into a progressive web app.

Jason has been thinking about progressive web apps a lot lately (he should write a book or something), and he said to me:

I agree with you on the three things that comprise a PWA, but as far as I can tell, you’re the first to declare it as such.

I was quite surprised by that. I always assumed that I was repeating the three ingredients of a progressive web app, not defining them. But looking through all the docs out there, Jason might be right. It’s surprising because I assumed it was obvious why those three things comprise a progressive web app—it’s because they’re testable.

Lighthouse, PWA Builder, Sonarwhal and other tools that evaluate your site will measure its progressive web app score based on the three defining factors (HTTPS, service worker, web app manifest). Then there’s Android’s Add to Home Screen prompt. Here finally we get a concrete description of what your site needs to do to pass muster:

  • Includes a web app manifest…
  • Served over HTTPS (required for service workers)
  • Has registered a service worker with a fetch event handler

(Although, as of this month, Chrome will no longer show the prompt automatically—you also have to write some JavaScript to handle the beforeinstallprompt event.)
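For reference, the JavaScript involved is fairly small. A hedged sketch (the install button is my own invention, not something Chrome requires):

var deferredPrompt;
window.addEventListener('beforeinstallprompt', function (event) {
  event.preventDefault();    // stop the browser showing its own prompt straight away
  deferredPrompt = event;    // stash the event so it can be triggered later
});
document.querySelector('#install').addEventListener('click', function () {
  if (deferredPrompt) {
    deferredPrompt.prompt(); // now show the add-to-home-screen prompt
    deferredPrompt = null;
  }
});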

Anyway, if you’re looking to turn your website into a progressive web app, here’s what you need to do (assuming it’s already performant and responsive):

  1. Switch over to HTTPS. Certbot can help you here.
  2. Add a web app manifest.
  3. Add a service worker to your site so that it responds even when there’s no network connection.

That last step might sound like an intimidating prospect, but help is at hand: I wrote Going Offline for exactly this situation.
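To give a sense of just how little code is involved, here’s a bare-bones sketch of the last two ingredients. The file names and manifest values are placeholders, and the service worker below does no offline handling at all (that’s what Going Offline is for). In your HTML:

<link rel="manifest" href="/manifest.json">
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>

A minimal manifest.json:

{
  "name": "Your Site",
  "short_name": "Site",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#ffffff"
}

And a serviceworker.js with a fetch event handler:

self.addEventListener('fetch', function (event) {
  // just pass requests straight through to the network for now
  event.respondWith(fetch(event.request));
});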

Ends and means

The latest edition of the excellent History Of The Web newsletter is called The Day(s) The Web Fought Back. It recounts the first time that websites stood up against bad legislation in the form of the Communications Decency Act (CDA), and goes on to recount the even more effective use of blackout protests against SOPA and PIPA.

I remember feeling very heartened to see Wikipedia, Google and others take a stand on January 18th, 2012. But I also remember feeling uneasy. In this particular case, companies were lobbying for a cause I agreed with. But what if they were lobbying for a cause I didn’t agree with? Large corporations using their power to influence politics seems like a very bad idea. Isn’t it still a bad idea, even if I happen to agree with the cause?

Cloudflare quite rightly kicked The Daily Stormer off their roster of customers. Then the CEO of Cloudflare quite rightly wrote this in a company-wide memo:

Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

There’s an uncomfortable tension here. When do the ends justify the means? Isn’t the whole point of having principles that they hold true even in the direst circumstances? Why even claim that corporations shouldn’t influence politics if you’re going to make an exception for net neutrality? Why even claim that free speech is sacrosanct if you make an exception for nazi scum?

Those two examples are pretty extreme and I can easily justify the exceptions to myself. Net neutrality is too important. Stopping fascism is too important. But where do I draw the line? At what point does something become “too important?”

There are more subtle examples of corporations wielding their power. Google are constantly using their monopoly position in search and browser marketshare to exert influence over website-builders. In theory, that’s bad. But in practice, I find myself agreeing with specific instances. Prioritising mobile-friendly sites? Sounds good to me. Penalising intrusive ads? Again, that seems okey-dokey to me. But surely that’s not the point. So what if I happen to agree with the ends being pursued? The fact that a company the size and power of Google is using their monopoly for any influence is worrying, regardless of whether I agree with the specific instances. But I kept my mouth shut.

Now I see Google abusing their monopoly again, this time with AMP. They may call the preferential treatment of Google-hosted AMP-formatted pages a “carrot”, but let’s be honest, it’s an abuse of power, plain and simple.

By the way, I have no doubt that the engineers working on AMP have the best of intentions. We are all pursuing the same ends. We all want a faster web. But we disagree on the means. If Google search results gave preferential treatment to any fast web pages, that would be fine. But by only giving preferential treatment to pages written in a format that they created, and hosted on their own servers, they are effectively forcing everyone to use AMP. I know for a fact that there are plenty of publications who are producing AMP content, not because they are sold on the benefits of the technology, but because they feel strong-armed into doing it in order to compete.

If the ends justify the means, then it’s easy to write off Google’s abuse of power. Those well-intentioned AMP engineers honestly think that they have the best interests of the web at heart:

We were worried about the web not existing anymore due to native apps and walled gardens killing it off. We wanted to make the web competitive. We saw a sense of urgency and thus we decided to build on the extensible web to build AMP instead of waiting for standard and browsers and websites to catch up. I stand behind this process. I’m a practical guy.

There’s real hubris and audacity in thinking that one company should be able to tackle fixing the whole web. I think the AMP team are genuinely upset and hurt that people aren’t cheering them on. Perhaps they will dismiss the criticisms as outpourings of “Why wasn’t I consulted?” But that would be a mistake. The many thoughtful people who are extremely critical of AMP are on the same side as the AMP team when it comes to the end-goal of better, faster websites. But burning the web to save it? No thanks.

Ben Thompson goes into more detail on the tension between the ends and the means in The Aggregator Paradox:

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sounds pretty appealing!

From that perspective, there’s a moral argument to be made for wielding monopoly power like Google is doing. No doubt the AMP team feel it would be morally wrong for Google not to use its influence in search to give preferential treatment to AMP pages.

Going back to the opening examples of online blackouts, was it morally wrong for companies to use their power to influence politics? Or would it have been morally wrong for them not to have used their influence?

When do the ends justify the means?

Here’s a more subtle example than Google AMP, but one which has me just as worried for the future of the web. Mozilla announced that any new web features they add to their browser will require HTTPS.

The end-goal here is one I agree with: HTTPS everywhere. On the face of it, the means of reaching that goal seem reasonable. After all, we already require HTTPS for sensitive JavaScript APIs like geolocation or service workers. But the devil is in the details:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

Emphasis mine.

This is a step too far. Again, I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement. In some ways, I think it might actually be worse.

There’s an assumption in this decision that websites are being made by professionals who will know how to switch to HTTPS. But the web is for everyone. Not just for everyone to use. It’s for everyone to build.

One of my greatest fears for the web is that building it becomes the domain of a professional priesthood. Anything that raises the bar to writing some HTML or CSS makes me very worried. Usually it’s toolchains that make things more complex, but in this case the barrier to entry is being brought right into the browser itself.

I’m trying to imagine future Codebar evenings, helping people to make their first websites, but now having to tell them that some CSS will be off-limits until they meet the entry requirements of HTTPS …even though CSS and HTTPS have literally nothing to do with one another. (And yes, there will be an exception for localhost and I really hope there’ll be an exception for file: as well, but that’s simply postponing the disappointment.)

No doubt Mozilla (and the W3C Technical Architecture Group) believe that they are doing the right thing. Perhaps they think it would be morally wrong if browsers didn’t enforce HTTPS even for unrelated features like new CSS properties. They believe that, in this particular case, the ends justify the means.

I strongly disagree. If you also disagree, I encourage you to make your voice heard. Remember, this isn’t about whether you think that we should all switch to HTTPS—we’re all in agreement on that. This is about whether it’s okay to create collateral damage by deliberately denying people access to web features in order to further a completely separate agenda.

This isn’t about you or me. This is about all those people who could potentially become makers of the web. We should be welcoming them, not creating barriers for them to overcome.

What is a Progressive Web App?

It seems like any new field goes through an inevitable growth spurt that involves “defining the damn thing.” For the first few years of the IA Summit, every second presentation seemed to be about defining what Information Architecture actually is. See also: UX. See also: Content Strategy.

Now it seems to be happening with Progressive Web Apps …which is odd, considering the damn thing is defined damn well.

I’ve written before about the naming of Progressive Web Apps. On the whole, I think it’s a pretty good term, especially if you’re trying to convince the marketing team.

Regardless of the specifics of the name, what I like about Progressive Web Apps is that they have a clear definition. It reminds me of Responsive Web Design. Whatever you think of that name, it comes with a clear list of requirements:

  1. A fluid layout,
  2. Fluid images, and
  3. Media queries.

Likewise, Progressive Web Apps consist of:

  1. HTTPS,
  2. A service worker, and
  3. A Web App Manifest.

There’s more you can do in addition to that (just as there’s plenty more you can do on a responsive site), but the core definition is nice and clear.

Except, for some reason, that clarity is being lost.

Here’s a post by Ben Halpern called What the heck is a “Progressive Web App”? Seriously.

I have a really hard time describing what a progressive web app actually is.

He points to Google’s intro to Progressive Web Apps:

Progressive Web Apps are user experiences that have the reach of the web, and are:

  • Reliable - Load instantly and never show the downasaur, even in uncertain network conditions.
  • Fast - Respond quickly to user interactions with silky smooth animations and no janky scrolling.
  • Engaging - Feel like a natural app on the device, with an immersive user experience.

Those are great descriptions of the benefits of Progressive Web Apps. Perfect material for convincing your clients or your boss. But that appears on developers.google.com …surely it would be more beneficial for that audience to know the technologies that comprise Progressive Web Apps?

Ben Halpern again:

Google’s continued use of the term “quality” in describing things leaves me with a ton of confusion. It really seems like they want PWA to be a general term that doesn’t imply any particular implementation, and have it be focused around the user experience, but all I see over the web is confusion as to what they mean by these things. My website is already “engaging” and “immersive”, does that mean it’s a PWA?

I think it’s important to use the right language for the right audience.

If you’re talking to the business people, tell them about the return on investment you get from Progressive Web Apps.

If you’re talking to the marketing people, tell them about the experiential benefits of Progressive Web Apps.

But if you’re talking to developers, tell them that a Progressive Web App is a website served over HTTPS with a service worker and manifest file.

Progressing the web

Frances has written up some of the history behind her minting of the term “progressive web app”. She points out that accuracy is secondary to marketing:

I keep seeing folks (developers) getting all smart-ass saying they should have been PW “Sites” not “Apps” but I just want to put on the record that it doesn’t matter. The name isn’t for you and worrying about it is distraction from just building things that work better for everyone. The name is for your boss, for your investor, for your marketeer.

Personally, I think “progressive web app” is a pretty good phrase—two out of three words in it are spot on. I really like the word “progressive”, with its echoes of progressive enhancement. I really, really like the word “web”. But, yeah, I’m one of those smart-asses who points out that the “app” part isn’t great.

That’s not just me being a pedant (or, it’s not only me being a pedant). I’ve seen people who were genuinely put off investigating the technologies behind progressive web apps because of the naming.

Here’s an article with the spot-on title Progressive Web Apps — The Next Step In Responsive Web Design:

Late last week, Smashing Magazine, one of the largest and most influential online publications for web design, posted on Facebook that their website was “now running as a Progressive Web App.”

Honestly, I didn’t think much of it. Progressive Web Apps are for the hardcore web application developers creating the next online cloud-based Photoshop (complicated stuff), right? I scrolled on and went about my day.

And here’s someone feeling the cognitive dissonance of turning a website into a progressive web app, even though that’s exactly the right thing to do:

My personal website is a collection of static HTML files and is also a progressive web app. Transforming it into a progressive web app felt a bit weird in the beginning because it’s not an actual application but I wanted to be one of the cool kids, and PWAs still offer a lot of additional improvements.

Still, it could well be that these are the exceptions and that most people are not being discouraged by the “app” phrasing. I certainly hope that there aren’t more people out there thinking “well, progressive web apps aren’t for me because I’m building a content site.”

In short, the name might not be perfect but it’s pretty damn good.

What I find more troubling is the grouping of unrelated technologies under the “progressive web app” banner. If Google devrel events were anything to go by, you’d be forgiven for thinking that progressive web apps have something to do with AMP or Polymer (they don’t). One of the great things about progressive web apps is that they are agnostic to tech stacks. Still, I totally get why Googlers would want to use the opportunity to point to their other projects.

Far more troubling is the entanglement of the term “progressive web app” with the architectural choice of “single page app”. I’m not the only one who’s worried about this.

Here’s the most egregious example: an article on Hacker Noon called Before You Build a PWA You Need a SPA.

No! Not true! Literally any website can be a progressive web app:

  1. Serve it over HTTPS,
  2. Add a web app manifest, and
  3. Add a service worker.

That last step can be tricky if you’re new to service workers, but it’s not insurmountable. It’s certainly a lot easier than completely rearchitecting your existing website to be a JavaScript-driven single page app.

Alas, I think that many of the initial poster-children for progressive web apps gave the impression that you had to make a completely separate app/site at a different URL. It was like a return to the bad old days of m. sites for mobile. The Washington Post’s progressive web app (currently offline) went so far as to turn away traffic from the “wrong” browsers. This is despite the fact that the very first item in the list of criteria for a progressive web app is:

Responsive: to fit any form factor

Now, I absolutely understand that the immediate priority is to demonstrate that a progressive web app can compete with a native mobile app in terms of features (and trounce it in terms of installation friction). But I’m worried that in our rush to match what native apps can do, we may end up ditching the very features that make the web a universally-accessible medium. Killing URLs simply because native apps don’t have URLs is a classic example of throwing the baby out with the bath water:

Up until now I’ve been a big fan of Progressive Web Apps. I understood them to be combining the best of the web (responsiveness, linkability) with the best of native (installable, connectivity independent). Now I see that balance shifting towards the native end of the scale at the expense of the web’s best features. I’d love to see that balance restored with a little less emphasis on the “Apps” and a little more emphasis on the “Web.” Now that would be progressive.

If the goal of the web is just to compete with native, then we’ve set the bar way too low.

So if you’ve been wary of investing in the technologies behind progressive web apps because you’re “just” building a website, please try to see past the name. As Frances says:

It’s marketing, just like HTML5 had very little to do with actual HTML. PWAs are just a bunch of technologies with a zingy-new brandname.

Literally any website can—and should—be a progressive web app. Don’t let anyone tell you otherwise.

I was at an event last year where I heard Chris Heilmann say that you shouldn’t make your blog into a progressive web app. I couldn’t believe what I was hearing. He repeats that message in this video chat:

When somebody, for example, turns their blog into a PWA, I don’t see the point. I don’t want to have that icon on my homepage. This doesn’t make any sense to me.

Excuse me!? Just because you don’t want to have someone’s icon on your home screen, that person shouldn’t be using state-of-the-art technologies!? Excuse my French, but Fuck. That. Shit!

Our imaginations have become so limited by what native mobile apps currently do that we can’t see past merely imitating the status quo like a sad cargo cult.

I don’t want the web to equal native; I want the web to surpass it. I, for one, would prefer a reality where my home screen isn’t filled with the icons of startups and companies that have fulfilled the criteria of the gatekeepers. But a home screen filled with the faces of people who didn’t have to ask anyone’s permission to publish? That’s what I want!

Like Frances says:

Remember, this is for everyone.

Balance

This year’s Render conference just wrapped up in Oxford. It was a well-run, well-curated event, right up my alley: two days of a single track of design and development talks (see also: An Event Apart and Smashing Conference for other events in this mold that get it right).

One of my favourite talks was from Frances Ng. She gave a thoroughly entertaining account of her journey from aerospace engineer to front-end engineer, filled with ideas about how to get started, and keep from getting overwhelmed in the world of the web.

She recommended taking the time to occasionally dive deep into a foundational topic, pointing to another talk as a perfect example; Ana Balica gave a great presentation all about HTTP. The second half of the talk was about HTTP 2 and was filled with practical advice, but the first part was a thoroughly geeky history of the Hypertext Transfer Protocol, which I really loved.

While I’m mentoring Amber, we’ve been trying to find a good balance between those deep dives into the foundational topics and the hands-on day-to-day skills needed for web development. So far, I think we’ve found a good balance.

When Amber is ‘round at the Clearleft office, we sit down together and work on the practical aspects of HTML, CSS, and (soon) JavaScript. Last week, for example, we had a really great day diving into CSS selectors and specificity—I watched Amber’s knowledge skyrocket over the course of the day.

But between those visits—which happen every one or two weeks—I’ve been giving Amber homework of sorts. That’s where the foundational building blocks come in. Here are the questions I’ve asked so far:

  • What is the difference between the internet and the web?
  • What is the difference between GET and POST?
  • What are cookies?

The first question is a way of understanding the primacy of URLs on the web. Amber wrote about her research. The second question was getting at an understanding of HTTP. Amber wrote about that too. The third and current question is about state on the web. I’m looking forward to reading a write-up of that soon.
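For anyone playing along at home, here’s the kind of distinction that second question is getting at: the same form submission expressed as a GET and as a POST (the domain and the data are made up, and most headers are trimmed):

GET /search?q=reels HTTP/1.1
Host: example.com

POST /search HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 7

q=reels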

We’re still figuring out this whole mentorship thing but I think this balance of research and exercises is working out well.

Certbot renewals with Apache

I wrote a while back about switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean. In that post, I pointed to an example .conf file.

I’ve been having a few issues with my certificate renewals with Certbot (the artist formerly known as Let’s Encrypt). If I did a dry-run for renewing my certificates…

/etc/certbot-auto renew --dry-run

… I kept getting this message:

Encountered vhost ambiguity but unable to ask for user guidance in non-interactive mode. Currently Certbot needs each vhost to be in its own conf file, and may need vhosts to be explicitly labelled with ServerName or ServerAlias directories. Falling back to default vhost *:443…

It turns out that Certbot doesn’t like HTTP and HTTPS configurations being lumped into one .conf file. Instead it expects to see all the port 80 stuff in a domain.com.conf file, and the port 443 stuff in a domain.com-ssl.conf file.
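In other words, the end state is two skeletal files, each with its own explicit ServerName (which, as far as I can tell, is what stops Certbot complaining about ambiguity):

# yourdomain.com.conf
<VirtualHost *:80>
ServerName yourdomain.com
...
</VirtualHost>

# yourdomain.com-ssl.conf
<VirtualHost *:443>
ServerName yourdomain.com
...
</VirtualHost>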

So I’ve taken that original .conf file and split it up into two.

First I SSH’d into my server and went to the Apache directory where all these .conf files live:

cd /etc/apache2/sites-available

Then I copied the current (single) file to make the SSL version:

cp yourdomain.com.conf yourdomain.com-ssl.conf

Time to fire up one of those weird text editors to edit that newly-created file:

nano yourdomain.com-ssl.conf

I deleted everything related to port 80—all the stuff between (and including) the VirtualHost *:80 tags:

<VirtualHost *:80>
...
</VirtualHost>

Hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

Now I do the opposite for the original file:

nano yourdomain.com.conf

Delete everything related to VirtualHost *:443:

<VirtualHost *:443>
...
</VirtualHost>

Once again, I hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

Now I need to tell Apache about the new .conf file:

a2ensite yourdomain.com-ssl.conf

I’m told that’s cool and all, but that I need to restart Apache for the changes to take effect:

service apache2 restart

Now when I test the certificate renewing process…

/etc/certbot-auto renew --dry-run

…everything goes according to plan.

Someday

In the latest issue of Justin’s excellent Responsive Web Design weekly newsletter, he includes a segment called “The Snippet Show”:

This is what tells all our browsers on all our devices to set the viewport to be the same width of the current device, and to also set the initial scale to 1 (not scaled at all). This essentially allows us to have responsive design consistently.

<meta name="viewport" content="width=device-width, initial-scale=1">

The viewport value for the meta element was invented by Apple when the iPhone was released. Back then, it was a safe bet that most websites were wider than the iPhone’s 320 pixel wide display—most of them were 960 pixels wide …because reasons. So mobile Safari would automatically shrink those sites down to fit within the display. If you wanted to over-ride that behaviour, you had to use the meta viewport gubbins that they made up.

That was nine years ago. These days, if you’re building a responsive website, you still need to include that meta element.

That seems like a shame to me. I’m not suggesting that the default behaviour should switch to assuming a fluid layout, but maybe the browser could just figure it out. After all, the CSS will already be parsed by the time the HTML is rendering. Perhaps a quick test for the presence of a horizontal scrollbar could be used to trigger the shrinking behaviour. No scrollbar, no shrinking.

Maybe someday the assumption behind the current behaviour could be flipped—assume a website is responsive unless the author explicitly requests the shrinking behaviour. I’d like to think that could happen soon, but I suspect that a depressingly large number of sites are still fixed-width (I don’t even want to know—don’t tell me).

There are other browser default behaviours that might someday change. Right now, if I type example.com into a browser, it will first attempt to contact http://example.com rather than https://example.com. That means the example.com server has to do a redirect, costing the user valuable time.

You can mitigate this by putting your site on the HSTS preload list but wouldn’t it be nice if browsers first checked for HTTPS instead of HTTP? I don’t think that will happen anytime soon, but someday …someday.
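For the record, getting onto that preload list involves serving a Strict-Transport-Security header along these lines and then submitting the domain at hstspreload.org (the max-age value here is illustrative; check their current requirements):

Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"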

A little progress

I’ve got a fairly simple posting interface for my notes. A small textarea, an optional file upload, some checkboxes for syndicating to Twitter and Flickr, and a submit button.

Notes posting interface

It works fine although sometimes the experience of uploading a file isn’t great, especially if I’m on a slow connection out and about. I’ve been meaning to add some kind of Ajax-y progress type thingy for the file upload, but never quite got around to it. To be honest, I thought it would be a pain.

But then, in his excellent State Of The Gap hit parade of web technologies, Remy included a simple file upload demo. Turns out that all the goodies that have been added to XMLHttpRequest have made this kind of thing pretty easy (and I’m guessing it’ll be easier still once we have fetch).
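Under the hood, the approach amounts to intercepting the form submission and listening to the upload’s progress events. Here’s a hedged sketch (simplified, not production code):

var form = document.querySelector('form');
form.addEventListener('submit', function (event) {
  event.preventDefault();
  var progress = document.createElement('progress');
  form.appendChild(progress);
  var xhr = new XMLHttpRequest();
  xhr.upload.addEventListener('progress', function (e) {
    // update the bar as the POSTed data (including any file) goes up the wire
    if (e.lengthComputable) {
      progress.max = e.total;
      progress.value = e.loaded;
    }
  });
  // (a real version would also handle the response once the upload finishes)
  xhr.open(form.method, form.action);
  xhr.send(new FormData(form));
});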

I’ve made a little script that adds a progress bar to any forms that are POSTing data.

Feel free to use it, adapt it, and improve it. It isn’t using any ES6iness so there are some obvious candidates for improvement there.

It’s working a treat on my little posting interface. Now I can stare at a slowly-growing progress bar when I’m out and about on a slow connection.

Switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean

I’ve been updating my book sites over to HTTPS.

They’re all hosted on the same (virtual) box as adactio.com—Ubuntu 14.04 running Apache 2.4.7 on Digital Ocean. If you’ve got a similar configuration, this might be useful for you.

First off, I’m using Let’s Encrypt. Except I’m not. It’s called Certbot now (I’m not entirely sure why).

I installed the Let’s Encertbot client with this incantation (which, like everything else here, will need root-level access so if none of these work, retry using sudo in front of the commands):

wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto

Seems like a good idea to put that certbot-auto thingy into a directory like /etc:

mv certbot-auto /etc

Rather than have Certbot generate conf files for me, I’m just going to have it generate the certificates. Here’s how I’d generate a certificate for yourdomain.com:

/etc/certbot-auto --apache certonly -d yourdomain.com

The first time you do this, it’ll need to fetch a bunch of dependencies and it’ll ask you for an email address for future reference (should anything ever go screwy). For subsequent domains, the process will be much quicker.

The result of this will be a bunch of generated certificates that live here:

  • /etc/letsencrypt/live/yourdomain.com/cert.pem
  • /etc/letsencrypt/live/yourdomain.com/chain.pem
  • /etc/letsencrypt/live/yourdomain.com/privkey.pem
  • /etc/letsencrypt/live/yourdomain.com/fullchain.pem

Now you’ll need to configure your Apache gubbins. Head on over to…

cd /etc/apache2/sites-available

If you only have one domain on your server, you can just edit default-ssl.conf. I prefer to have separate conf files for each domain.

Time to fire up an incomprehensible text editor.

nano yourdomain.com.conf

There’s a great SSL Configuration Generator from Mozilla to help you figure out what to put in this file. Following the suggested configuration for my server (assuming I want maximum backward-compatibility), here’s what I put in.

Make sure you update the /path/to/yourdomain.com part—you probably want a directory somewhere in /var/www or wherever your website’s files are sitting.
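The contents of that file aren’t reproduced here, but going by the certificate paths above and the backward-compatible settings from the generator, the general shape is something like this sketch (real files have a lot more in them: cipher suites, logging, and so on):

<VirtualHost *:80>
    ServerName yourdomain.com
    Redirect permanent / https://yourdomain.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName yourdomain.com
    DocumentRoot /path/to/yourdomain.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/yourdomain.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.com/chain.pem
    # on Apache 2.4.8 or later, point SSLCertificateFile at fullchain.pem instead and drop the chain line
    SSLProtocol all -SSLv3
</VirtualHost>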

To exit the infernal text editor, hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

If the yourdomain.com.conf didn’t previously exist, you’ll need to enable the configuration by running:

a2ensite yourdomain.com

Time to restart Apache. Fingers crossed…

service apache2 restart

If that worked, you should be able to go to https://yourdomain.com and see a lovely shiny padlock in the address bar.

Assuming that worked, everything is awesome! …for 90 days. After that, your certificates will expire and you’ll be left with a broken website.

Not to worry. You can update your certificates at any time. Test for yourself by doing a dry run:

/etc/certbot-auto renew --dry-run

You should see a message saying:

Processing /etc/letsencrypt/renewal/yourdomain.com.conf

And then, after a while:

** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
Congratulations, all renewals succeeded.

You could set yourself a calendar reminder to do the renewal (without the --dry-run bit) every few months. Or you could tell your server’s computer to do it by using a cron job. It’s not nearly as rude as it sounds.

You can fire up and edit your list of cron tasks with this command:

crontab -e

This tells the machine to run the renewal task at quarter past six every evening and log any results:

15 18 * * * /etc/certbot-auto renew --quiet >> /var/log/certbot-renew.log

(Don’t worry: it won’t actually generate new certificates unless the current ones are getting close to expiration.) Leave the crontab editor by doing the ctrl o, enter, ctrl x dance.

Hopefully, there’s nothing more for you to do. I say “hopefully” because I won’t know for sure myself for another 90 days, at which point I’ll find out whether anything’s on fire.

If you have other domains you want to secure, repeat the process by running:

/etc/certbot-auto --apache certonly -d yourotherdomain.com

And then creating/editing /etc/apache2/sites-available/yourotherdomain.com.conf accordingly.

I found these useful when I was going through this process:

That last one is good if you like the warm glow of accomplishment that comes with getting a good grade:

For extra credit, you can run your site through securityheaders.io to harden your headers. Again, not as rude as it sounds.

You know, I probably should have said this at the start of this post, but I should clarify that any advice I’ve given here should be taken with a huge pinch of salt—I have little to no idea what I’m doing. I’m not responsible for any flame-bursting-into that may occur. It’s probably a good idea to back everything up before even starting to do this.

Yeah, I definitely should’ve mentioned that at the start.

This is for everyone with a certificate

Mozilla—like Google before them—have announced their plans for deprecating HTTP in favour of HTTPS. I’m all in favour of moving to HTTPS. I’ve done it myself here on adactio.com, on thesession.org, and on huffduffer.com. I have some concerns about the potential linkrot involved in the move to TLS everywhere—as outlined by Tim Berners-Lee—but still, anything that makes the work of GCHQ and the NSA more difficult is alright by me.

But I have a big, big problem with Mozilla’s plan to “encourage” the move to HTTPS:

Gradually phasing out access to browser features.

Requiring HTTPS for certain browser features makes total sense, given the security implications. Service Workers, for example, are quite correctly only available over HTTPS. Any API that has access to a device sensor—or that could be used for fingerprinting in any way—should only be available over HTTPS. In retrospect, Geolocation should have been HTTPS-only from the beginning.

But to deny access to APIs where there are no security concerns, where it is merely a stick to beat people with …that’s just wrong.

This is for everyone. Not just those smart enough to figure out how to add HTTPS to their site. And yes, I know, the theory is that it’s going to get easier and easier, but so far the steps towards making HTTPS easier are just vapourware. That makes Mozilla’s plan look like something drafted by underwear gnomes.

The issue here is timing. Let’s make HTTPS easy first. Then we can start to talk about ways of encouraging adoption. Hopefully we can figure out a way that doesn’t require Mozilla or Google as gatekeepers.

Sven Slootweg outlines the problems with Mozilla’s forced SSL. I highly recommend reading Yoav’s post on deprecating HTTP too. Ben Klemens has written about HTTPS: the end of an era …that era being the one in which anyone could make a website without having to ask permission from an app store, a certificate authority, or a browser manufacturer.

On the other hand, Eric Mill wrote We’re Deprecating HTTP And It’s Going To Be Okay. It makes for an extremely infuriating read because it outlines all the ways in which HTTPS is a good thing (all of which I agree with) without once addressing the issue at hand—a browser that deliberately cripples its feature set for political reasons.

100 words 042

I spent most of today making a Single Page App.

The content is loaded into the page using a low-level transport mechanism called HTTP (the great thing about using this protocol is that you get URL routing for free). I bucked the trend and decided not to encode the content in JSON. Instead it’s contained in a text format called HTML.

There is some asynchronous loading involved for the rich media; that’s accomplished using a feature of HTML known as the img element.

I’m pretty pleased with the results. The whole thing is scrolling smoothly at sixty frames per second.

HTTPS

Tim Berners-Lee is quite rightly worried about linkrot:

The disappearance of web material and the rotting of links is itself a major problem.

He brings up an interesting point that I hadn’t fully considered: as more and more sites migrate from HTTP to HTTPS (A Good Thing), and the W3C encourages this move, isn’t there a danger of creating even more linkrot?

…perhaps doing more damage to the web than any other change in its history.

I think that may be a bit overstated. As many others point out, almost all sites making the switch are conscientious about maintaining redirects with a 301 status code.

(There’s also a similar 308 status code that I hadn’t come across, but after a bit of investigating, that looks to be a bit of a mess.)

Anyway, the discussion does bring up some interesting points. Transport Layer Security is something that’s handled between the browser and the server—does it really need to be visible in the protocol portion of the URL? Or is that visibility a positive attribute that makes it clear that the URL is “good”?

And as more sites move to HTTPS, should browsers change their default behaviour? Right now, typing “example.com” into a browser’s address bar will cause it to automatically expand to http://example.com …shouldn’t browsers look for https://example.com first?

All good food for thought.

There’s a Google Doc out there with some advice for migrating to HTTPS. Unfortunately, the trickiest part—getting and installing certificates—is currently an owl-drawing tutorial, but hopefully it will get expanded.

If you’re looking for even more reasons why enabling TLS for your site is a good idea, look no further than the latest shenanigans from ISPs in the UK (we lost the battle for net neutrality in this country some time ago).

They can’t do that to pages served over HTTPS.