Bookmarklets for testing your website

I’m at day two of Indie Web Camp Brighton.

Day one was excellent. It was really hard to choose which sessions to go to because they all sounded interesting. That’s a good problem to have.

I ended up participating in:

  • a session on POSSE,
  • a session on NFC tags,
  • a session on writing, and
  • a session on testing your website that was hosted by Ros.

In that testing session I shared some of the bookmarklets I use regularly.

Bookmarklets? They’re bookmarks that sit in the toolbar of your desktop browser. Just like any other bookmark, they’re links. The difference is that these links begin with javascript: rather than http. That means you can put programmatic instructions inside the link. Click the bookmark and the JavaScript gets executed.

In my mind, there are two different approaches to making a bookmarklet. One kind of bookmarklet contains lots of clever JavaScript—that’s where the smart stuff happens. The other kind of bookmarklet is deliberately dumb. All it does is take the URL of the current page and pass it to another service—that’s where the smart stuff happens.

I like that second kind of bookmarklet.
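
To give you an idea of just how dumb, here’s a rough sketch of what one of those bookmarklets can look like under the hood. It assumes the HTML validator’s doc query string, which takes the URL of a page to check:

javascript:void(window.open('https://validator.w3.org/nu/?doc='+encodeURIComponent(location.href)))

All the JavaScript does is grab the address of the current page and hand it off; the validator does the actual work.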

Here are some bookmarklets I’ve made. You can drag any of them up to the toolbar of your browser. Or you could create a folder called, say, “bookmarklets”, and drag these links up there.

Validation: This bookmarklet will validate the HTML of whatever page you’re on.

Validate HTML

Carbon: This bookmarklet will run the domain through the Website Carbon Calculator.

Calculate carbon

Accessibility: This bookmarklet will run the current page through the Web Accessibility Evaluation Tools.

WAVE

Performance: This bookmarklet will take the current page and run it through PageSpeed Insights, which includes a Lighthouse test.

PageSpeed

HTTPS: This bookmarklet will run your site through the SSL checker from SSL Labs.

SSL Report

Headers: This bookmarklet will test the security headers on your website.

Security Headers

Drag any of those links to your browser’s toolbar to “install” them. If you don’t like one, you can delete it the same way you can delete any other bookmark.

Insecure …again

Back in March, I wrote about a dilemma I was facing. I could make the certificates on The Session more secure. But if I did that, people using older Android and iOS devices could no longer access the site:

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone.

In the end, I decided in favour of access. But now this issue has risen from the dead. And this time, it doesn’t matter what I think.

Let’s Encrypt are changing the way their certificates work and once again, it’s people with older devices who are going to suffer:

Most notably, this includes versions of Android prior to 7.1.1. That means those older versions of Android will no longer trust certificates issued by Let’s Encrypt.

This makes me sad. It’s another instance of people being forced to buy new devices. Last time ‘round, my dilemma was choosing between security and access. This time, access isn’t an option. It’s a choice between security and the environment (assuming that people are even in a position to get new devices—not an assumption I’m willing to make).

But this time it’s out of my hands. Let’s Encrypt certificates will stop working on older devices and a whole lotta websites are suddenly going to be inaccessible.

I could look at using a different certificate authority, one I’d have to pay for. It feels a bit galling to have to go back to the scammy world of paying for security—something that Let’s Encrypt has taught us should quite rightly be free. But accessing a website should also be free. It shouldn’t come with the price tag of getting a new device.

Insecure

Universal access is at the heart of the World Wide Web. It’s also something I value when I’m building anything on the web. Whatever I’m building, I want you to be able to visit using whatever browser or device that you choose.

Just to be clear, that doesn’t mean that you’re going to have the same experience in an old browser as you are in the latest version of Firefox or Chrome. Far from it. Not only is that not feasible, I don’t believe it’s desirable either. But if you’re using an old browser, while you might not get to enjoy the newest CSS or JavaScript, you should still be able to access a website.

Applying the principle of progressive enhancement makes this eminently doable. As long as I build in a layered way, everyone gets access to the barebones HTML, even if they can’t experience newer features. Crucially, as long as I’m doing some feature detection, those newer features don’t harm older browsers.

But there’s one area where maintaining backward compatibility might well have an adverse effect on modern browsers: security.

I don’t just mean whether or not you’re serving sites over HTTPS. Even if you’re using TLS—Transport Layer Security—not all security is created equal.

Take a look at Mozilla’s very handy SSL Configuration Generator. You get to choose from three options:

  1. Modern. Services with clients that support TLS 1.3 and don’t need backward compatibility.
  2. Intermediate. General-purpose servers with a variety of clients, recommended for almost all systems.
  3. Old. Compatible with a number of very old clients, and should be used only as a last resort.

Because I value universal access, I should really go for the “old” setting. That ensures my site is accessible all the way back to Android 2.3 and Safari 1. But if I do that, I will be supporting TLS 1.0. That’s not good. My site is potentially vulnerable.

Alright then, I’ll go for “intermediate”—that’s the recommended level anyway. Now I’m no longer providing TLS 1.0 support. But that means some older browsers can no longer access my site.
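
In Apache, the heart of the difference between those two levels is a single directive (the cipher lists differ too). These lines are a sketch of what Mozilla’s generator produces, so double-check against the generator itself before copying anything:

# “old”: allow everything back to TLS 1.0 (but never SSLv3)
SSLProtocol all -SSLv3
# “intermediate”: TLS 1.2 and up only
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1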

This is exactly the situation I found myself in with The Session. I had a score of A+ from SSL Labs. I was feeling downright smug. Then I got emails from actual users. One had picked up an old Samsung tablet second hand. Another was using an older version of Safari. Neither could access the site.

Sure enough, if you cut off TLS 1.0, you cut off Safari below version six.

Alright, then. Can’t they just upgrade? Well …no. Apple has tied Safari to OS X. If you can’t upgrade your operating system, you can’t upgrade your browser. So if you’re using OS X Mountain Lion, you’re stuck with an insecure version of Safari.

Fortunately, you can use a different browser. It’s possible to install, say, Firefox 37, which supports TLS 1.2.

On desktop, that is. If you’re using an older iPhone or iPad and you can’t upgrade to a recent version of iOS, you’re screwed.

This isn’t an edge case. This is exactly the kind of usage that iPads excel at: you got the device a few years back just to do some web browsing and not much else. It still seems to work fine, and you have no incentive to buy a brand new iPad. And nor should you have to.

In that situation, you’re stuck using an insecure browser.

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone. (That’s what I’ve done on The Session and now my score is capped at B.)

What I can’t do is tell you to install a different browser, because you literally can’t. Sure, technically you can install something called Firefox from the App Store, or you can install something called Chrome. But neither has anything to do with its desktop counterpart. They’re differently skinned versions of Safari.

Apple refuses to allow browsers with any other rendering engine to be installed. Their reasoning?

Security.

Certbot renewals with Apache

I wrote a while back about switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean. In that post, I pointed to an example .conf file.

I’ve been having a few issues with my certificate renewals with Certbot (the artist formerly known as Let’s Encrypt). If I did a dry-run for renewing my certificates…

/etc/certbot-auto renew --dry-run

… I kept getting this message:

Encountered vhost ambiguity but unable to ask for user guidance in non-interactive mode. Currently Certbot needs each vhost to be in its own conf file, and may need vhosts to be explicitly labelled with ServerName or ServerAlias directives. Falling back to default vhost *:443…

It turns out that Certbot doesn’t like HTTP and HTTPS configurations being lumped into one .conf file. Instead it expects to see all the port 80 stuff in a domain.com.conf file, and the port 443 stuff in a domain.com-ssl.conf file.
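
In other words, Certbot wants to end up with something like this (a bare-bones sketch; the real files have a lot more in them):

# /etc/apache2/sites-available/domain.com.conf
<VirtualHost *:80>
  ServerName domain.com
  ...
</VirtualHost>

# /etc/apache2/sites-available/domain.com-ssl.conf
<VirtualHost *:443>
  ServerName domain.com
  ...
</VirtualHost>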

So I’ve taken that original .conf file and split it up into two.

First I SSH’d into my server and went to the Apache directory where all these .conf files live:

cd /etc/apache2/sites-available

Then I copied the current (single) file to make the SSL version:

cp yourdomain.com.conf yourdomain.com-ssl.conf

Time to fire up one of those weird text editors to edit that newly-created file:

nano yourdomain.com-ssl.conf

I deleted everything related to port 80—all the stuff between (and including) the VirtualHost *:80 tags:

<VirtualHost *:80>
...
</VirtualHost>

Hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

Now I do the opposite for the original file:

nano yourdomain.com.conf

Delete everything related to VirtualHost *:443:

<VirtualHost *:443>
...
</VirtualHost>

Once again, I hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

Now I need to tell Apache about the new .conf file:

a2ensite yourdomain.com-ssl.conf

I’m told that’s cool and all, but that I need to restart Apache for the changes to take effect:

service apache2 restart

Now when I test the certificate renewing process…

/etc/certbot-auto renew --dry-run

…everything goes according to plan.

Switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean

I’ve been updating my book sites over to HTTPS:

They’re all hosted on the same (virtual) box as adactio.com—Ubuntu 14.04 running Apache 2.4.7 on Digital Ocean. If you’ve got a similar configuration, this might be useful for you.

First off, I’m using Let’s Encrypt. Except I’m not. It’s called Certbot now (I’m not entirely sure why).

I installed the Let’s Encertbot client with this incantation (which, like everything else here, will need root-level access so if none of these work, retry using sudo in front of the commands):

wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto

Seems like a good idea to put that certbot-auto thingy into a directory like /etc:

mv certbot-auto /etc

Rather than have Certbot generate conf files for me, I’m just going to have it generate the certificates. Here’s how I’d generate a certificate for yourdomain.com:

/etc/certbot-auto --apache certonly -d yourdomain.com

The first time you do this, it’ll need to fetch a bunch of dependencies and it’ll ask you for an email address for future reference (should anything ever go screwy). For subsequent domains, the process will be much quicker.

The result of this will be a bunch of generated certificates that live here:

  • /etc/letsencrypt/live/yourdomain.com/cert.pem
  • /etc/letsencrypt/live/yourdomain.com/chain.pem
  • /etc/letsencrypt/live/yourdomain.com/privkey.pem
  • /etc/letsencrypt/live/yourdomain.com/fullchain.pem

Now you’ll need to configure your Apache gubbins. Head on over to…

cd /etc/apache2/sites-available

If you only have one domain on your server, you can just edit default-ssl.conf. I prefer to have separate conf files for each domain.

Time to fire up an incomprehensible text editor.

nano yourdomain.com.conf

There’s a great SSL Configuration Generator from Mozilla to help you figure out what to put in this file. Following the suggested configuration for my server (assuming I want maximum backward-compatibility), here’s what I put in.
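
Something along these lines. Treat this as a sketch: the exact protocol and cipher directives should come from Mozilla’s generator, and on Apache versions before 2.4.8 the intermediate certificates need their own SSLCertificateChainFile directive:

<VirtualHost *:80>
  ServerName yourdomain.com
  # send everyone to the https:// version
  Redirect permanent / https://yourdomain.com/
</VirtualHost>

<VirtualHost *:443>
  ServerName yourdomain.com
  DocumentRoot /path/to/yourdomain.com
  SSLEngine on
  SSLCertificateFile /etc/letsencrypt/live/yourdomain.com/cert.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem
  SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.com/chain.pem
  # protocol and cipher settings: paste in whatever the generator suggests here
</VirtualHost>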

Make sure you update the /path/to/yourdomain.com part—you probably want a directory somewhere in /var/www or wherever your website’s files are sitting.

To exit the infernal text editor, hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

If the yourdomain.com.conf file didn’t previously exist, you’ll need to enable the configuration by running:

a2ensite yourdomain.com

Time to restart Apache. Fingers crossed…

service apache2 restart

If that worked, you should be able to go to https://yourdomain.com and see a lovely shiny padlock in the address bar.

Assuming that worked, everything is awesome! …for 90 days. After that, your certificates will expire and you’ll be left with a broken website.

Not to worry. You can update your certificates at any time. Test for yourself by doing a dry run:

/etc/certbot-auto renew --dry-run

You should see a message saying:

Processing /etc/letsencrypt/renewal/yourdomain.com.conf

And then, after a while:

** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
Congratulations, all renewals succeeded.

You could set yourself a calendar reminder to do the renewal (without the --dry-run bit) every few months. Or you could tell your server’s computer to do it by using a cron job. It’s not nearly as rude as it sounds.

You can fire up and edit your list of cron tasks with this command:

crontab -e

This tells the machine to run the renewal task at quarter past six every evening and log any results:

15 18 * * * /etc/certbot-auto renew --quiet >> /var/log/certbot-renew.log

(Don’t worry: it won’t actually generate new certificates unless the current ones are getting close to expiration.) Leave the crontab editor by doing the ctrl o, enter, ctrl x dance.

Hopefully, there’s nothing more for you to do. I say “hopefully” because I won’t know for sure myself for another 90 days, at which point I’ll find out whether anything’s on fire.

If you have other domains you want to secure, repeat the process by running:

/etc/certbot-auto --apache certonly -d yourotherdomain.com

And then creating/editing /etc/apache2/sites-available/yourotherdomain.com.conf accordingly.

I found these useful when I was going through this process:

That last one is good if you like the warm glow of accomplishment that comes with getting a good grade:

For extra credit, you can run your site through securityheaders.io to harden your headers. Again, not as rude as it sounds.

You know, I probably should have said this at the start of this post, but I should clarify that any advice I’ve given here should be taken with a huge pinch of salt—I have little to no idea what I’m doing. I’m not responsible for any flame-bursting-into that may occur. It’s probably a good idea to back everything up before even starting to do this.

Yeah, I definitely should’ve mentioned that at the start.

What a day out! What a lovely responsive day out!

The third and final Responsive Day Out is done and dusted. In short, it was fantastic. Every single talk was superb. Statistically that seems highly unlikely, but it’s true.

I was quite overcome by the outpouring of warmth and all the positive feedback I got from the attendees. That made me feel really good, if a little guilty. Guilty because the truth is that I don’t really consider the attendees when I’m putting the line-up together. Instead I take a much greedier approach: I ask “who do I want to hear speak?” Still, it’s nice to know that there’s so much overlap in our collective opinion.

Despite the overwhelmingly positive reaction to the day, I had a couple of complaints myself, and they’re both related to the venue. My issues were with:

  1. the seats and
  2. the temperature.

The tiered seating in the Corn Exchange is great for giving everyone in the audience a good view, but the seats are awfully close together. That leaves taller people with some sore knees.

And the problem with having a conference in the middle of June is that, if the weather is good—which I’m glad it was—the Corn Exchange can get awfully hot and sweaty in the latter half of the day.

Both those issues would be solved by using a more salubrious venue, like the main Brighton Dome itself, but then that would also mean a doubling of the cost per ticket (hence why dConstruct and Responsive Day Out are in different price ranges). And one of the big attractions of Responsive Day Out is its ludicrously cheap ticket price. That meant sacrificing a lot of comforts—I just wish that comfortable seats and air temperature weren’t amongst them.

Still. Listen to me moaning about the things I didn’t like when in fact the day was really, really wonderful.

Orde liveblogged every single talk and Hidde wrote an in-depth overview of the whole day. If you were there, I would love it if you would share your thoughts, preferably on your own website.

Guess what? The audio from all the talks is already online. As always, Drew did an amazing job. You can subscribe to the RSS feed in your podcatching software of choice. Videos will be available after a while, but for now you’ll have to make do with the audio.

Oh, and speaking of audio, if you liked the music that was playing in the breaks, here’s the playlist. My thanks to all the artists for licensing their work under a Creative Commons license so that I could dodge one more expense that would otherwise have to be passed on to the ticket price.

Now. The number one question that people were asking me at the pub afterwards was “why is this the last one?” I really should’ve addressed that during my closing remarks.

But here’s the thing: the first Responsive Day Out was intended as a one-off. So really the question should be: why were there three? To which I have no good answer other than to say it felt about right. With three of them, it gave just about everyone a chance to get to at least one. If you didn’t make it to any of the responsive days out, well …you’ve only got yourself to blame.

If we ended up having Responsive Day Out 7 or 8, then something would have gone horribly wrong with the world of web design and development. The truth is that responsive web design is just plain ol’ web design: it’s the new normal. I guess the term “responsive” makes for a nice hook to hang a day’s talks off, but the truth is that, even by the third event, the specific connections to responsive design were getting more tenuous. There was plenty about accessibility, progressive enhancement, and the latest CSS and JavaScript APIs: all those things are enormously valuable when it comes to responsive web design …because all of those things are enormously valuable when it comes to just plain ol’ web design.

In the end, I’m glad that I ended up doing three events. Now I can see the arc of all the events as one. Listening back to all the talks from all three years you can hear the trajectory from “ARGH! This responsive design stuff is really scary! How will we cope‽” to “Hey, this responsive design stuff is the way we do things now.” There are still many, many challenges of course, but the question is no longer if responsive design is the way to go. Instead we can talk about how we can help one another do it well.

At the end of the third and final Responsive Day Out, I thanked all the speakers from all three events. It’s quite a roll-call. And it was immensely gratifying to see so many of the names from previous years in the audience at the final event.

I am sincerely grateful to:

  • Sarah Parmenter,
  • David Bushell,
  • Tom Maslen,
  • Richard Rutter,
  • Josh Emerson,
  • Laura Kalbag,
  • Elliot Jay Stocks,
  • Anna Debenham,
  • Andy Hume,
  • Bruce Lawson,
  • Owen Gregory,
  • Paul Lloyd,
  • Mark Boulton,
  • Stephen Hay,
  • Sally Jenkinson,
  • Ida Aalen,
  • Rachel Andrew,
  • Dan Donald,
  • Inayaili de León Persson,
  • Oliver Reichenstein,
  • Kirsty Burgoine,
  • Stephanie Rieger,
  • Ethan Marcotte,
  • Alice Bartlett,
  • Rachel Shillcock,
  • Alla Kholmatova,
  • Peter Gasston,
  • Jason Grigsby,
  • Heydon Pickering,
  • Jake Archibald,
  • Ruth John,
  • Zoe Mickley Gillenwater,
  • Rosie Campbell,
  • Lyza Gardner, and
  • Aaron Gustafson.

Many thanks also to everyone who came along to the events, especially the hat-trickers who made it to all three.

I’ve organised a total of six conferences now and I’m extremely proud of all of them:

  1. dConstruct 2012: Playing With The Future,
  2. the first Responsive Day Out,
  3. dConstruct 2013: Communicating With Machines,
  4. Responsive Day Out 2: The Squishening,
  5. dConstruct 2014: Living With The Network, and
  6. Responsive Day Out 3: The Final Breakpoint.

…but they’ve also been a lot of work. dConstruct in particular took a lot out of me last year. That’s why I’m not involved with this year’s event—Andy has taken the reins instead. By comparison, Responsive Day Out is a much more low-key affair; not nearly as stressful to put together. Still, three in a row is plenty. It’s time to end it on a hell of a high note.

That’s not to say I won’t be organising some other event sometime in the future. Maybe I’ll even revive the format of Responsive Day Out—three back-to-back 20 minute talks makes for an unbeatable firehose of knowledge. But for now, I’m going to take a little break from event-organising.

Besides, it’s not as though Responsive Day Out is really gone. Its spirit lives on in its US equivalent, Responsive Field Day in Portland in September.

This is for everyone with a certificate

Mozilla—like Google before them—have announced their plans for deprecating HTTP in favour of HTTPS. I’m all in favour of moving to HTTPS. I’ve done it myself here on adactio.com, on thesession.org, and on huffduffer.com. I have some concerns about the potential linkrot involved in the move to TLS everywhere—as outlined by Tim Berners-Lee—but still, anything that makes the work of GCHQ and the NSA more difficult is alright by me.

But I have a big, big problem with Mozilla’s plan to “encourage” the move to HTTPS:

Gradually phasing out access to browser features.

Requiring HTTPS for certain browser features makes total sense, given the security implications. Service Workers, for example, are quite correctly only available over HTTPS. Any API that has access to a device sensor—or that could be used for fingerprinting in any way—should only be available over HTTPS. In retrospect, Geolocation should have been HTTPS-only from the beginning.

But to deny access to APIs where there are no security concerns, where it is merely a stick to beat people with …that’s just wrong.

This is for everyone. Not just those smart enough to figure out how to add HTTPS to their site. And yes, I know, the theory is that it’s going to get easier and easier, but so far the steps towards making HTTPS easier are just vapourware. That makes Mozilla’s plan look like something drafted by underwear gnomes.

The issue here is timing. Let’s make HTTPS easy first. Then we can start to talk about ways of encouraging adoption. Hopefully we can figure out a way that doesn’t require Mozilla or Google as gatekeepers.

Sven Slootweg outlines the problems with Mozilla’s forced SSL. I highly recommend reading Yoav’s post on deprecating HTTP too. Ben Klemens has written about HTTPS: the end of an era …that era being the one in which anyone could make a website without having to ask permission from an app store, a certificate authority, or a browser manufacturer.

On the other hand, Eric Mill wrote We’re Deprecating HTTP And It’s Going To Be Okay. It makes for an extremely infuriating read because it outlines all the ways in which HTTPS is a good thing (all of which I agree with) without once addressing the issue at hand—a browser that deliberately cripples its feature set for political reasons.

HTTPS

Tim Berners-Lee is quite rightly worried about linkrot:

The disappearance of web material and the rotting of links is itself a major problem.

He brings up an interesting point that I hadn’t fully considered: as more and more sites migrate from HTTP to HTTPS (A Good Thing), and the W3C encourages this move, isn’t there a danger of creating even more linkrot?

…perhaps doing more damage to the web than any other change in its history.

I think that may be a bit overstated. As many others point out, almost all sites making the switch are conscientious about maintaining redirects with a 301 status code.

(There’s also a similar 308 status code that I hadn’t come across, but after a bit of investigating, that looks to be a bit of a mess.)
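
For what it’s worth, in Apache a site-wide 301 is a one-liner in the old http:// virtual host (with example.com standing in for the real domain):

# “Redirect permanent” answers every request on this vhost with a 301
Redirect permanent / https://example.com/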

Anyway, the discussion does bring up some interesting points. Transport Layer Security is something that’s handled between the browser and the server—does it really need to be visible in the protocol portion of the URL? Or is that visibility a positive attribute that makes it clear that the URL is “good”?

And as more sites move to HTTPS, should browsers change their default behaviour? Right now, typing “example.com” into a browser’s address bar will cause it to automatically expand to http://example.com …shouldn’t browsers look for https://example.com first?

All good food for thought.

There’s a Google Doc out there with some advice for migrating to HTTPS. Unfortunately, the trickiest part—getting and installing certificates—is currently an owl-drawing tutorial, but hopefully it will get expanded.

If you’re looking for even more reasons why enabling TLS for your site is a good idea, look no further than the latest shenanigans from ISPs in the UK (we lost the battle for net neutrality in this country some time ago).

They can’t do that to pages served over HTTPS.

Security for all

Throughout the Brighton Digital Festival, Lighthouse Arts will be exhibiting a project from Julian Oliver and Danja Vasiliev called Newstweek. If you’re in town for dConstruct—and you should be—you ought to stop by and check it out.

It’s a mischievous little hardware hack intended for use in places with public WiFi. If you’ve got a Newstweek device, you can alter the content of web pages like, say, BBC News. Cheeky!

There’s one catch though. Newstweek works on http:// domains, not https://. This is exactly the scenario that Jake has been talking about:

SSL is also useful to ensure the data you’re receiving hasn’t been tampered with. It’s not just for user->server stuff

eg, when you visit http://www.theguardian.com/uk , you don’t really know it hasn’t been modified to tell a different story

There’s another good reason for switching to TLS. It would make life harder for GCHQ and the NSA—not impossible, but harder. It’s not a panacea, but it would help make our collectively-held network more secure, as per RFC 7258 from the Internet Engineering Task Force:

Pervasive monitoring is a technical attack that should be mitigated in the design of IETF protocols, where possible.

I’m all for using https:// instead of http:// but there’s a problem. It’s bloody difficult!

If you’re a sysadmin type that lives in the command line, then it’s probably not difficult at all. But for the rest of us mere mortals who just want to publish something on the web, it’s intimidatingly daunting.

Tim Bray says:

It’ll cost you <$100/yr plus a half-hour of server reconfiguration. I don’t see any excuse not to.

…but then, he also thought that anyone who can’t make a syndication feed that’s well-formed XML is an incompetent fool (whereas I ended up creating an entire service to save people from having to make RSS feeds by hand).

Google are now making SSL a ranking factor in their search results, which is their prerogative. If it results in worse search results, other search engines are available. But I don’t think it will have significant impact. Jake again:

if two pages have equal ranking except one is served securely, which do you think should appear first in results?

Ashe Dryden disagrees:

Google will be promoting SSL sites above those without, effectively doing the exact same thing we’re upset about the lack of net neutrality.

I don’t think that’s quite fair: if Google were an ISP slowing down http:// requests, that would be extremely worrying, but tweaking its already-opaque search algorithm isn’t quite the same.

Mind you, I do like this suggestion:

I think if Google is going to penalize you for not having SSL they should become a CA and issue free certs.

I’m more concerned by the discussions at Chrome and Mozilla about flagging up http:// connections as unsafe. While the approach is technically correct, I fear it could have the opposite of its intended effect. With so many sites still served over http://, users would be bombarded with constant messages of unsafe connections. Before long they would develop security blindness in much the same way that we’ve all developed banner-ad blindness.

My main issue—apart from the fact that I personally don’t have the necessary smarts to enable TLS—is related to what Ashe is concerned about:

Businesses and individuals who both know about and can afford to have SSL in place will be ranked above those who don’t/can’t.

I strongly believe that anyone should be able to publish on the web. That’s one of the reasons why I don’t share my fellow developers’ zeal for moving everything to JavaScript; I want anybody—not just programmers—to be able to share what they know. Hence my preference for simpler declarative languages like HTML and CSS (and my belief that they should remain simple and learnable).

It’s already too damn complex to register a domain and host a website. Adding one more roadblock isn’t going to help that situation. Just ask Drew and Rachel what it’s like trying to just make sure that their customers have a version of PHP from this decade.

I want a secure web. I’d really like the web to be https:// only. But until we get there, I really don’t like the thought of the web being divided into the haves and have-nots.

Still…

There is an enormous opportunity here, as John pointed out on a recent episode of The Web Ahead. Getting TLS set up is a pain point for a lot of people, not just me. Where there’s pain, there’s an opportunity to provide a service that removes the pain. Services like Squarespace are already taking the pain out of setting up a website. I’d like to see somebody provide a TLS valet service.

(And before you rush to tell me about the super-easy SSL-setup tutorial you know about, please stop and think about whether it’s actually more like this.)

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

For all of you budding entrepreneurs looking for the next big thing to “disrupt”, please consider making your money not from the gold rush itself, but from providing the shovels.