Journal tags: urls

Increment by increment

The bedrock of the World Wide Web is solid. Built atop the protocols of the internet (TCP/IP), its fundamental building blocks remain: URLs of HTML files transmitted over HTTP. Baldur Bjarnason writes:

Even today, the web is like a living fossil, a preserved relic from a different era. Anybody can put up a website. Anybody can run a business over it. I can build an app or service, send the URL to anybody I like, and most people in the world will be able to run it without asking anybody’s permission.

Still, the web has evolved. In fact, that evolution is something that’s also built into its fundamental design. Rather than try to optimise the World Wide Web for one particular use-case, Tim Berners-Lee realised the power of being flexible. Like the internet, the World Wide Web is deliberately dumb.

(I get very annoyed when people talk about the web as being designed for scientific work at CERN. That was merely the first use-case. The web was designed for everything …and nothing in particular.)

Robin Berjon compares the web’s evolution to the ship of Theseus:

That’s why it’s been so hard to agree about what the Web is: the Web is architected for resilience which means that it adapts and transforms. That flexibility is the reason why I’m talking about some mythological dude’s boat. Altogether too often, we consider some aspects of the Web as being invariants when they’re potentially just as replaceable as any other part. This isn’t to say that there are no invariants on the Web.

The web can be changed. That’s both a comfort and a warning. There’s plenty that we should change about today’s web. But there’s also plenty—at the root level—that we should fight to preserve.

And if you want change, the worst way to go about it is to promulgate the notion of burning everything down and starting from scratch. As Erin says in the fourth and final part of her devastating series on Meta in Myanmar:

We don’t get a do-over planet. We won’t get a do-over network.

Instead, we have to work with the internet we made and find a way to rebuild and fortify it to support the much larger projects of repair—political, cultural, environmental—that are required for our survival.

Though, as Robin points out, that doesn’t preclude us from sharing a vision:

Proceeding via small, incremental changes can be a laudable approach, but even then it helps to have a sense for what it is that those small steps are supposed to be incrementing towards.

I’m looking forward to reading what Robin puts forward, particularly because he says “I’m no technosolutionist.”

From a technical perspective, the web has never been better. We have incredible features in HTML, CSS, and JavaScript, all standardised and with amazing interoperability between browsers. The challenges that face the web today are not technical.

That’s one of the reasons why I have no patience for the web3 crowd. Apart from the ridiculous name, they’re focusing on exactly the wrong part of the stack.

Listening to their pitch, they’ll point out that while yes, the fundamental bedrock of the web is indeed decentralised—TCP/IP, HTTP(S)—what’s been constructed on that foundation is increasingly centralised; the power brokers of Google, Meta, Amazon.

And what’s the solution they propose? Replace the underlying infrastructure with something-something-blockchain.

Would that it were so simple.

The problems of today’s web are not technical in nature. The problems of today’s web won’t be solved by technology. If we’re going to solve the problems of today’s web, we’ll need to do it through law, culture, societal norms, and co-operation.

(Feel free to substitute “today’s web” with “tomorrow’s climate”.)

Tweaking navigation labelling

I’ve always liked the idea that your website can be your API. Like, you’ve already got URLs to identify resources, so why not make that URL structure predictable and those resources parsable?

That’s why the (read-only) API for The Session doesn’t live at a separate subdomain. It uses the same URL structure as the regular site, but you can request the resources in an alternative format: JSON, XML, RSS.
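By way of illustration (this isn’t The Session’s actual code), here’s a minimal Express-style sketch of how one URL can serve multiple representations, with a hypothetical format query parameter acting as the lens:

```typescript
import express from "express";

const app = express();

// Hypothetical lookup standing in for a real database query.
const getTune = (id: string) => ({ id, name: "The Banshee", type: "reel" });

// One canonical URL per resource. The default representation is HTML;
// ?format=json (or xml) returns the same resource in another format.
app.get("/tunes/:id", (req, res) => {
  const tune = getTune(req.params.id);
  switch (req.query.format) {
    case "json":
      res.json(tune);
      break;
    case "xml":
      res.type("application/xml");
      res.send(`<tune id="${tune.id}"><name>${tune.name}</name></tune>`);
      break;
    default:
      res.send(`<h1>${tune.name}</h1><p>${tune.type}</p>`);
  }
});

app.listen(3000);
```

So /tunes/123 returns a web page, while /tunes/123?format=json returns the same data as JSON. No separate api subdomain required.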

This works out pretty well, mostly because I put a lot of thought into the URL structure of the site. I’m something of a URL fetishist, but I think that taking a URL-first approach to information architecture can be a good exercise.

Most of the resources on The Session involve nouns like tunes, events, discussions, and so on. There’s a consistent and predictable structure to the URLs for those sections:

  • /things
  • /things/new
  • /things/search

And then an individual item can be found at:

  • /things/ID

That’s all nice and predictable and the naming of the URLs matches what you’d expect to find:

Tunes, events, discussions, sessions. Those are all fine. But there’s one section of the site that has this root URL:

/recordings

When I was coming up with the URL structure twenty years ago, it was clear what you’d find there: track listings for albums of music. No one would’ve expected to find actual recordings of music available to listen to on-demand. The bandwidth constraints and technical limitations of the time made that clear.

Two decades on, the situation has changed. Now someone new to the site might well expect to hit a link called “recordings” and expect to hear actual recordings of music.

So I should probably change the label on the link. I don’t think “albums” is quite right—what even is an album any more? The word “discography” is probably the most appropriate label.

Here’s my dilemma: if I update the label, should I also update the URL structure?

Right now, the section of the site with /tunes URLs is labelled “tunes”. The section of the site with /events URLs is labelled “events”. Currently the section of the site with /recordings URLs is labelled “recordings”, but may soon be labelled “discography”.

If you click on “tunes”, you end up at /tunes. But if you click on “discography”, you end up at /recordings.

Is that okay? Am I the only one that would be bothered by that?

I could update the URLs to match the labelling (with redirects for the old URLs, of course), but I’m not so keen on this URL structure:

  • /discography
  • /discography/new
  • /discography/search
  • /discography/ID

It doesn’t seem as tidy as:

  • /recordings
  • /recordings/new
  • /recordings/search
  • /recordings/ID

But if I don’t update the URLs to match the label, then I’m just going to have to live with the mismatch.
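For what it’s worth, the redirects themselves would be the easy part. Here’s a minimal sketch, using Express purely for illustration (any server framework or .htaccess file can do the same), of permanently redirecting the old section to the hypothetical new one:

```typescript
import express from "express";

const app = express();

// Permanently redirect the old section to the new one, preserving
// whatever follows it: /recordings/search → /discography/search,
// /recordings/1234 → /discography/1234, and so on.
app.get(/^\/recordings(\/.*)?$/, (req, res) => {
  res.redirect(301, `/discography${req.params[0] ?? ""}`);
});

app.listen(3000);
```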

I’m just thinking out loud here. I think I should definitely update the label. I just won’t make any decision on changing URLs for a while yet.

69420

This is going to make me sound like an old man in his rocking chair on the front porch, but let me tell you about the early days of Twitter…

The first time I mentioned Twitter on here was back in November 2006:

I’ve been playing around with Twitter, a neat little service from the people who brought you Odeo. You send it little text updates via SMS, the website, or Jabber.

A few weeks later, I wrote about some of its emergent properties:

Overall, Twitter is full of trivial little messages that sometimes merge into a coherent conversation before disintegrating again. I like it. Instant messaging is too intrusive. Email takes too much effort. Twittering feels just right for the little things: where I am, what I’m doing, what I’m thinking.

That’s right; back then we didn’t have the verb “tweeting” yet.

In those early days, some of the now-ubiquitous interactions had yet to emerge. Chris hadn’t yet proposed hashtags. And if you wanted to address a message to a specific person—or reply to a tweet of theirs—the @ symbol hadn’t been repurposed for that. There were still few enough people on Twitter that you could just address someone by name and they’d probably see your message.

That’s what I was doing when I posted:

It takes years off you, Simon.

I’m assuming Simon Willison got a haircut or something.

In any case, it’s an innocuous and fairly pointless tweet. And yet, in the intervening years, that tweet has received many replies. Weirdly, most of the replies consisted of one word:

nice

Very puzzling.

Then a little while back, I realised what was happening. This is the URL for my tweet:

twitter.com/adactio/status/69420

69420.

69.

420.

Pesky kids with their stoner sexual-innuendo numerology!

02022-02-22

Eleven years ago, I made a prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

One year later, Matt called me on it and the prediction officially became a bet:

We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.

I’m very happy to lose this bet.

When I made the original prediction eleven years ago that a URL on the longbets.org site would no longer be available, I did so in a spirit of mischief—it was a deliberately meta move. But it was also informed by a genuine feeling of pessimism around the longevity of links on the web. While that pessimism was misplaced in this case, it was informed by data.

The lifetime of a URL on the web remains shockingly short. What I think has changed in the intervening years is that people may have become more accustomed to the situation. People used to say “once something is online it’s there forever!”, which infuriated me because the real problem is the exact opposite: if you put something online, you have to put in real effort to keep it online. After all, we don’t really buy domain names; we just rent them. And if you publish on somebody else’s domain, you’re at their mercy: Geocities, MySpace, Facebook, Medium, Twitter.

These days my view towards the longevity of online content has landed somewhere in the middle of the two dangers. There’s a kind of Murphy’s Law around data online: anything that you hope will stick around will probably disappear and anything that you hope will disappear will probably stick around.

One huge change in the last eleven years that I didn’t anticipate is the migration of websites to HTTPS. The original URL of the prediction used HTTP. I’m glad to see that original URL now redirects to a more secure protocol. Just like most of the World Wide Web. I think we can thank Let’s Encrypt for that. But I think we can also thank Edward Snowden. We are no longer as innocent as we were eleven years ago.

I think if I could tell my past self that most of the web would be using HTTPS by 2022, my past self would be very surprised …’though not as surprised at discovering that time travel had also apparently been invented.

The Internet Archive has also been a game-changer for digital preservation. While it’s less than ideal that something isn’t reachable at its original URL, knowing that there’s probably a copy of the content at archive.org lessens the sting considerably. I couldn’t be happier that this fine institution is the recipient of the stakes of this bet.

The State of the Web — the links

An Event Apart Spring Summit is happening right now. I opened the show yesterday with a talk called The State Of The Web:

The World Wide Web has come a long way in its three decades of existence. There’s so much we can do now with HTML, CSS, and JavaScript: animation, layout, powerful APIs… we can even make websites that work offline! And yet the web isn’t exactly looking rosy right now. The problems we face aren’t technical in nature. We’re facing a crisis of expectations: we’ve convinced people that the web is slow, buggy, and inaccessible. But it doesn’t have to be this way. There is no fate but what we make. In this perspective-setting talk, we’ll go on a journey to the past, present, and future of web design and development. You’ll laugh, you’ll cry, and by the end, you’ll be ready to make the web better.

I wrote about preparing this talk and you can see the outline on Kinopio. I thought it turned out well, but I never actually know until people see it. So I’m very gratified and relieved that it went down very well indeed. Phew!

Eric and the gang at An Event Apart asked for a round-up of links related to this talk and I was more than happy to oblige. I’ve separated them into some of the same categories that the talk covers.

I know that these look like a completely disconnected grab-bag of concepts—you’d have to see the talk to get the connections. But even without context, these are some rabbit holes you can dive down…

Apollo 8

Hypertext

The World Wide Web

NASA

This (somewhat epic) slidedeck is done.

Good form

I got a text this morning at 9:40am. It was from the National Health Service, NHS. It said:

You are now eligible for your free NHS coronavirus vaccination. Please book online at https://www.nhs.uk/covid-vaccination or by calling 119. You will need to provide your name, date of birth and postcode. Your phone number has been obtained from your GP records.

Well, it looks like I timed turning fifty just right!

I typed that URL in on my laptop. It redirected to a somewhat longer URL. There’s a very clear call-to-action to “Book or manage your coronavirus vaccination.” On that page there’s very clear copy about who qualifies for vaccination. I clicked on the “Book my appointments” button.

From there, it’s a sequence of short forms, clearly labelled. Semantic accessible HTML, some CSS, and nothing more. If your browser doesn’t support JavaScript (or you’ve disabled it for privacy reasons), that won’t make any difference to your experience. This is the design system in action and it’s an absolute pleasure to experience.

I consider myself relatively tech-savvy so I’m probably not the best judge of the complexity of the booking system, but it certainly seemed to be as simple as possible (but no simpler). It feels like the principle of least power in action.

SMS to HTML (with a URL as the connective tissue between the two). And if those technologies aren’t available, there’s still a telephone number, and finally, a letter by post.

This experience reminded me of where the web really excels. It felt a bit like the web-driven outdoor dining I enjoyed last summer:

Telling people “You have to go to this website” …that seems reasonable. But telling people “You have to download this app” …that’s too much friction.

A native app would’ve been complete overkill. That may sound obvious, but it’s surprising how often the overkill option is the default.

Give me a URL—either by SMS or QR code or written down—and make sure that when I arrive at that URL, the barrier to entry is as low as possible.

Maybe I’ll never need to visit that URL again. In the case of the NHS, I hope I won’t need to visit again. I just need to get in, accomplish my task, and get out again. This is where the World Wide Web shines.

In five days’ time, I will get my first vaccine jab. I’m very thankful. Thank you to the NHS. Thank you to everyone who helped build the booking process. It’s beautiful.

Ten down, one to go

The Long Now Foundation is dedicated to long-term thinking. I’ve been a member for quite a few years now …which, in the grand scheme of things, is not very long at all.

One of their projects is Long Bets. It sets out to tackle the problem that “there’s no tax on bullshit.” Here’s how it works: you make a prediction about something that will (or won’t) happen by a particular date. So far, so typical thought leadery. But then someone else can challenge your prediction. And here’s the crucial bit: you’ve both got to place your monies where your mouths are.

Ten years ago, I made a prediction on the Long Bets website. It’s kind of meta:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

I made the prediction on February 22nd, 2011 when my mind was preoccupied with digital preservation.

One year later I was on stage in Wellington, New Zealand, giving a talk called Of Time And The Network. I mentioned my prediction in the talk and said:

If anybody would like to take me up on that bet, you can put your money down.

Matt was also speaking at Webstock. When he gave his talk, he officially accepted my challenge.

So now it’s a bet. We both put $500 into the pot. If I win, the Bletchley Park Trust gets that money. If Matt wins, the money goes to The Internet Archive.

As I said in my original prediction:

I would love to be proven wrong.

That was ten years ago today. There’s just one more year to go until the pleasingly alliterative date of 2022-02-22 …or as the Long Now Foundation would write it, 02022-02-22 (gotta avoid that Y10K bug).

It is looking more and more likely that I will lose this bet. This pleases me.

Design Principles For The Web—the links

I’m speaking today at an online edition of An Event Apart called Front-End Focus. I’ll be opening up the show with a talk called Design Principles For The Web, which ironically doesn’t have much of a front-end focus:

Designing and developing on the web can feel like a never-ending crusade against the unknown. Design principles are one way of unifying your team to better fight this battle. But as well as the design principles specific to your product or service, there are core principles underpinning the very fabric of the World Wide Web itself. Together, we’ll dive into applying these design principles to build websites that are resilient, performant, accessible, and beautiful.

That explains why I’ve been writing so much about design principles …well, that and the fact that I’m mildly obsessed with them.

To avoid technical difficulties, I’ve pre-recorded the talk. So while that’s playing, I’ll be spamming the accompanying chat window with related links. Then I’ll do a live Q&A.

Should you be interested in the links that I’ll be bombarding the attendees with, I’ve gathered them here in one place (and they’re also on the website of An Event Apart). The narrative structure of the talk might not be clear from scanning down a list of links, but there’s some good stuff here that you can dive into if you want to know what the inside of my head is like.

References

adactio.com

Wikipedia

BetrayURL

Back in February, I wrote about an excellent proposal by Jake for how browsers could display URLs in a safer way. Crucially, this involved highlighting the important part of the URL, but didn’t involve hiding any part. It’s a really elegant solution.

Turns out it was a Trojan horse. Chrome are now running an experiment where they will do the exact opposite: they will hide parts of the URL instead of highlighting the important part.

You can change this behaviour if you’re in the less than 1% of people who ever change default settings in browsers.

I’m really disappointed to see that Jake’s proposal isn’t going to be implemented. It was a much, much better solution.

No doubt I will hear rejoinders that the “solution” that Chrome is experimenting with is pretty similar to what Jake proposed. Nothing could be further from the truth. Jake’s solution empowered users with knowledge without taking anything away. What Chrome will be doing is the opposite of that, infantilising users and making decisions for them “for their own good.”

Seeing a complete URL is going to become a power-user feature, like View Source or user style sheets.

I’m really sad about that because, as Jake’s proposal demonstrates, it doesn’t have to be that way.

IncrementURL

Last month I wrote some musings on default browser behaviours. When it comes to all the tasks that browsers do for us, the most fundamental is taking a URL, fetching its contents and giving us the results. As part of that process, browsers also show us the URL of the page currently loaded in a tab or window.

But even at this fundamental level, there are some differences from browser to browser.

Safari only shows you the domain name—and any subdomain names—by default. It looks nice and tidy, but it obfuscates what page you’re on (until you click on the domain name). This is bad.

Chrome shows you the full URL, nice and straightforward. This is neutral.

Firefox, like Chrome, shows you the full URL, but with a subtle difference. The important part of the URL—usually the domain name—is subtly highlighted in a darker shade of grey. This is good.

The reason I say that what it highlights is usually the domain name is because what it actually highlights is eTLD+1.

The what now?

Well, if you’re looking at a page on adactio.com, that’s the important bit. But what if you’re looking at a page on adactio.github.io? The domain name is important, but so is the subdomain.

It turns out there’s a list out there of which sites and top level domains allow registrations like this. This is the list that Firefox is using for its shading behaviour in displaying URLs.
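Conceptually, the lookup works like this. Here’s a toy sketch in which a hard-coded handful of suffixes stands in for the thousands of entries in the real list:

```typescript
// A tiny stand-in for the Public Suffix List (publicsuffix.org).
const suffixes = new Set(["com", "io", "co.uk", "github.io"]);

function eTLDPlusOne(hostname: string): string {
  const labels = hostname.split(".");
  // Find the longest listed suffix that the hostname ends with…
  for (let i = 0; i < labels.length - 1; i++) {
    const candidate = labels.slice(i + 1).join(".");
    if (suffixes.has(candidate)) {
      // …then keep one extra label: that’s the registrable domain.
      return labels.slice(i).join(".");
    }
  }
  return hostname;
}

eTLDPlusOne("adactio.com");        // "adactio.com"
eTLDPlusOne("adactio.github.io");  // "adactio.github.io"
eTLDPlusOne("github.example.com"); // "example.com"
```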

Safari, by the way, does not use this list. These URLs are displayed identically in Safari, the phisherman’s friend:

  • example.com
  • example.github.io
  • github.example.com

Whereas Firefox, using that list, displays them with the eTLD+1 shaded for emphasis:

  • example.com (highlighting “example.com”)
  • example.github.io (highlighting “example.github.io”)
  • github.example.com (highlighting “example.com”)

I learned all this from Jake on a recent edition of HTTP 203. Nicolas Hoizey has written a nice little summary.

Jake acknowledges that what Apple is doing is shi…, what Firefox is doing is good, and then puts forward an idea for what Chrome could do. (But please note that this is Jake’s personal opinion; not an official proposal from the Chrome team.)

There’s some prior art here. It used to be that, if your SSL certificate included extended validation, the name would be shown in green next to the padlock symbol. So while my website—which uses regular SSL from Let’s Encrypt—would just have a padlock, Medium—which uses EV SSL—would have a padlock and the text “A Medium Corporation”.

Extended validation wasn’t quite the bulletproof verification it was cracked up to be. So browsers don’t use that interface pattern any more.

Jake suggests repurposing this pattern for all URLs. Pull out the important bit—eTLD+1—and show it next to the padlock.

Screenshots of @JaffaTheCake’s idea for separating out the eTLD+1 part of a URL in a browser’s address bar.

I like this. The full URL is still displayed. This proposal is more of an incremental change. An enhancement that is applied progressively, if you will.

I also like that it builds on existing interface patterns—Firefox’s URL treatment and the deprecated treatment of EV certs. In fact, I think the first step for Chrome should be to match Firefox’s current behaviour, and then go further with something like Jake’s proposal.

This kind of gradual change was exactly what Chrome did with displaying https and http domains.

Chrome treatment for HTTPS pages.

Jake mentions this in the video:

We’ve already seen that you have to take small steps here, like we did with the https change.

There’s a fascinating episode of the Freakonomics podcast called In Praise of Incrementalism. I’ve huffduffed it.

I’m a great believer in the HTML design principle, Evolution Not Revolution:

It is better to evolve an existing design rather than throwing it away.

I’d love to see Chrome take the first steps to Jake’s proposal by following Firefox’s lead.

Then again, I’d love it if Chrome followed Firefox’s lead in implementing subgrid.

Balance

This year’s Render conference just wrapped up in Oxford. It was a well-run, well-curated event, right up my alley: two days of a single track of design and development talks (see also: An Event Apart and Smashing Conference for other events in this mold that get it right).

One of my favourite talks was from Frances Ng. She gave a thoroughly entertaining account of her journey from aerospace engineer to front-end engineer, filled with ideas about how to get started, and keep from getting overwhelmed in the world of the web.

She recommended taking the time to occasionally dive deep into a foundational topic, pointing to another talk as a perfect example; Ana Balica gave a great presentation all about HTTP. The second half of the talk was about HTTP 2 and was filled with practical advice, but the first part was a thoroughly geeky history of the Hypertext Transfer Protocol, which I really loved.

While I’m mentoring Amber, we’ve been trying to find a good balance between those deep dives into the foundational topics and the hands-on day-to-day skills needed for web development. So far, I think we’ve found a good balance.

When Amber is ‘round at the Clearleft office, we sit down together and work on the practical aspects of HTML, CSS, and (soon) JavaScript. Last week, for example, we had a really great day diving into CSS selectors and specificity—I watched Amber’s knowledge skyrocket over the course of the day.

But between those visits—which happen every one or two weeks—I’ve been giving Amber homework of sorts. That’s where the foundational building blocks come in. Here are the questions I’ve asked so far:

  • What is the difference between the internet and the web?
  • What is the difference between GET and POST?
  • What are cookies?

The first question is a way of understanding the primacy of URLs on the web. Amber wrote about her research. The second question was getting at an understanding of HTTP. Amber wrote about that too. The third and current question is about state on the web. I’m looking forward to reading a write-up of that soon.

We’re still figuring out this whole mentorship thing but I think this balance of research and exercises is working out well.

Long betting

It has been exactly six years to the day since I instantiated this prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

It is exactly five years to the day until the prediction condition resolves to a Boolean true or false.

If it resolves to true, The Bletchley Park Trust will receive $1000.

If it resolves to false, The Internet Archive will receive $1000.

Much as I would like Bletchley Park to get the cash, I’m hoping to lose this bet. I don’t want my pessimism about URL longevity to be rewarded.

So, to recap, the bet was placed on

02011-02-22

It is currently

02017-02-22

And the bet times out on

02022-02-22.

Teaching in Porto, day four

Day one covered HTML (amongst other things), day two covered CSS, and day three covered JavaScript. Each one of those days involved a certain amount of hands-on coding, with the students getting their hands dirty with angle brackets, curly braces, and semi-colons.

Day four was a deliberate step away from all that. No more laptops, just paper. Whereas the previous days had focused on collaboratively working on a single document, today I wanted everyone to work on a separate site.

The sites were generated randomly. I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

For a bit of fun, the first brainstorming exercise (run as a 6-up) was to come up with potential names for this service—4 minutes for 6 ideas. Then we went around the table, shared the ideas, got feedback, and settled on the names.

Now I asked everyone to come up with a one-sentence mission statement for their newly-named service. This was a good way of teasing out the most important verbs and nouns, which led nicely into the next task: answering the question “what is the core functionality?”

If that sounds familiar, it’s because it’s the first part of the three-step process I outlined in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

We did some URL design, figuring out what structures would make sense for straightforward GET requests, like:

  • /things
  • /things/ID

Then, once it was clear what the primary “thing” was (a car, a book, etc.), I asked them to write down all the pieces that might appear on such a page; one post-it note per item e.g. “title”, “description”, “img”, “rating”, etc.

The next step involved prioritisation. They took those post-it notes and put them on the wall, but they had to put them in a vertical line from top to bottom in decreasing order of importance. This can be a challenge, but it’s better to solve these problems now rather than later.

Okay. I now asked them to “mark up” those vertical lists of post-it notes: writing HTML tag names by each one. By doing this before doing any visual design, it meant they were thinking about the meaning of the content first.

After that, we did a good ol’ fashioned classic 6-up sketching exercise, followed by critique (including a “designated dissenter” for each round). At this point, I was encouraging them to go crazy with ideas—they already had the core functionality figured out (with plain ol’ client/server requests and responses) so they could add all the bells and whistles they wanted on top of that.

We finished up with a discussion of some of those bells and whistles, and how they could be used to improve the user experience: Ajax, geolocation, service workers, notifications, background sync …the sky’s the limit.

It was a whirlwind tour for just one day but I think it helped emphasise the importance of thinking about the fundamentals before adding enhancements.

This marked the end of the structured masterclass lessons. Tomorrow I’m around to answer any miscellaneous questions (if I can) and chat to the students individually while they work on their term projects.

Regression toward being mean

I highly recommend Remy’s State Of The Gap post—it’s ace. He summarises it like this:

I strongly believe in the concepts behind progressive web apps and even though native hacks (Flash, PhoneGap, etc) will always be ahead, the web always gets there. Now, today, is an incredibly exciting time to build on the web.

I agree completely. That might sound odd after I wrote about Regressive Web Apps, but it’s precisely because I’m so excited by the technologies behind progressive web apps that I think it’s vital that we do them justice. As Remy says:

Without HTTPS and without service workers, you can’t add to homescreen. This is an intentionally high bar of entry with damn good reasons.

When the user installs a PWA, it has to work. It’s our job as web developers to provide the most excellent experience for our users.

It has to work.

That’s why I don’t agree with Dion’s metrics for what makes a progressive web app:

If you deliver an experience that only works on mobile is that a PWA? Yes.

I think it’s important to keep quality control high. Being responsive is literally the first item in the list of qualities that help define what a progressive web app is. That’s why I wrote about “regressive” web apps: sites that are supposed to showcase what we can do but instead take a step backwards into the bad old days of separate sites for separate device classes: washingtonpost.com/pwa, m.flipkart.com, lite.5milesapp.com, app.babe.co.id, m.aliexpress.com.

A lot of people on Twitter misinterpreted my post as saying “the current crop of progressive web apps are missing the mark, therefore progressive web apps suck”. What I was hoping to get across was “the current crop of progressive web apps are missing the mark, so let’s make better ones!”

Now, I totally understand that many of these examples are a first stab, a way of testing the waters. I absolutely want to encourage these first attempts and push them further. But I don’t think that waiving the qualifications for progressive web apps helps achieve that. As much as I want to acknowledge the hard work that people have done to create those device-specific examples, I don’t think we should settle for anything less than high-quality progressive web apps that are as much about the web as they are about apps.

Simply put, in this instance, I don’t think good intentions are enough.

Which brings me to the second part of Regressive Web Apps, the bit about Chrome refusing to show the “add to home screen” prompt for sites that want to have their URL still visible when launched from the home screen.

Alex was upset by what I wrote:

if you think the URL is going to get killed on my watch then you aren’t paying any attention whatsoever.

so, your choices are to think that I have a secret plan to kill URLs, or conclude I’m still Team Web.

I’m galled that anyone, particularly you @adactio, would think the former…but contrarianism uber alles?

I am very, very sorry that I upset Alex like this.

But I stand by my criticism of the actions of the Chrome team. Because good intentions are not enough.

I know that Alex is a huge fan of URLs, and of the web. Heck, just about everybody I know that works on Chrome in some capacity is working for the web first and foremost: Alex, Jake, various and sundry Pauls. But that doesn’t mean I’m going to stay quiet when I see the Chrome team do something I think is bad for the web. If anything, it’s precisely because I hold them to a high standard that I’m going to sound the alarm when I see what I consider to be missteps.

I think that good people can make bad decisions with the best of intentions. Usually it involves long-term thinking—something I think is very important. “The ends justify the means” is a way of thinking that can create a lot of immediate pain, even if it means a better future overall. Balancing those concerns is front and centre of the Chromium project:

As browser implementers, we find that there’s often tension between (a) moving the web forward and (b) preserving compatibility. On one hand, the web platform API surface must evolve to stay relevant. On the other hand, the web’s primary strength is its reach, which is largely a function of interoperability.

For example, when Alex talks of the Web Component era as though it were an inevitability, I get nervous. Not for myself, but for the millions of Opera Mini users out there. How do we get to a better future without leaving anyone behind? Or do we sacrifice those people for the greater good? Do the needs of the many outweigh the needs of the few? Do the ends justify the means?

Now, I know for a fact that the end-game that Alex is pursuing with web components—and the extensible web manifesto in general—is a more declarative web: solutions that first get tackled as web components end up landing in browsers. But to get there, the solutions are first created using modern JavaScript that simply doesn’t work everywhere. Is that the price we’re going to have to pay for a better web?

I hope not. I hope we can find ways to have our accessible cake and eat it too. But it will be really, really hard.

Returning to progressive web apps, I was genuinely shocked and appalled at the way that the Chrome team altered the criteria for the “add to home screen” prompt to discourage exposing URLs. I was also surprised at how badly the change was communicated—it was buried in a bug report that five people contributed to before pushing the change. I only found out about it through a conversation with Paul Kinlan. Paul encouraged me to give feedback, and that’s what I did on my website, just like Stuart did on his.

Of course the Chrome team are working on ways of exposing URLs within progressive web apps that are launched from the home screen. Opera are working on it too. But it’s a really tricky problem to solve. It’s not enough to say “we’ll figure it out”. It’s not enough to say “trust us.”

I do trust the people I know working on Chrome. I also trust the people I know at Mozilla, Opera and Microsoft. That doesn’t mean I’m going to let their actions go unquestioned. Good intentions are not enough.

As Alex readily acknowledges, the harder problem (figuring out how to expose URLs) should have been solved first—then the change to the “add to home screen” metrics would be uncontentious. Putting the cart before the horse, discouraging display:browser now, while saying “trust us, we’ll figure it out”, is another example of saying the ends justify the means.

But the stakes are too high here to let this pass. Good intentions are not enough. Knowing that the people working on Chrome (or Firefox, or Opera, or Edge) are good people is not reason enough to passively accept every decision they make.

Alex called me out for not getting in touch with him directly about the Chrome team’s future plans with URLs, but again, that kind of rough consensus to do something is trumped by running code. Also, I did talk to Chrome people—this all came out of a discussion with Paul Kinlan. I don’t know who’s who in the company’s political hierarchy and I don’t think I should need an org chart to give feedback to Google (or Mozilla, or Opera, or Microsoft).

You’ll notice that I didn’t include Apple there. I don’t hold them to the same high standard. As it turns out, I know some very good people at Apple working on WebKit and Safari. As individuals, they care about the web. But as a company, Apple has shown indifference towards web developers. As Remy put it:

Even getting the hint of interest from Apple is a process of dumpster-diving the mailing lists scanning for the smallest hint of interest.

With that in mind, I completely understand Alex’s frustration with my post on “regressive” web apps. Although I intended it as a push towards making better progressive web apps, I can see how it could be taken as confirmation by those who think that progressive web apps aren’t worth investing in. Apple, for example. As it is, they’ll have to be carried kicking and screaming into adding support for Service Workers, manifest files, and other building blocks. From the reaction to my post from at least one WebKit developer on Twitter, not only did I fail to get across just how important the technologies behind progressive web apps are, I may have done more harm than good, giving ammunition to sceptics.

Still, I hope that most people took my words in the right spirit, like Addy:

We should push them to do much better. I’ll file bugs. Per @adactio post, can’t forget the ‘Progressive’ part of PWAs

Seeing that reaction makes me feel good …but seeing Alex’s reaction makes me feel bad. Very bad. I’m genuinely sorry that I made Alex feel that way. It wasn’t my intention but, well …good intentions are not enough.

I’ve been looking back at what I wrote, trying to see it through Alex’s eyes, looking for the parts that could be taken as a personal attack:

Chrome developers have decided that displaying URLs is not “best practice” … To declare that all users of all websites will be confused by seeing a URL is so presumptuous and arrogant that it beggars belief. … Withholding the “add to home screen” prompt like that has a whiff of blackmail about it. … This isn’t the first time that Chrome developers have made a move against the address bar. It’s starting to grind me down.

Some pretty strong words there. I stand by them, but the tone is definitely strident.

When we criticise something—a piece of software, a book, a website, a film, a piece of music—it’s all too easy to forget that there are real people behind it. But that isn’t the case here. I know that there are real people working on Chrome, because I know quite a few of those people. I also know that their intentions are good. That’s not a reason for me to remain silent—that’s a reason for me to speak up.

If I had known that my post was going to upset Alex, would I have still written it? That’s a tough one. On the one hand, this is a topic I care passionately about. I think it’s vital that we don’t compromise on the very things that make the web great. On the other hand, who knows if what I wrote will make the slightest bit of difference? In which case, I got the catharsis of getting it off my chest but at the price of upsetting somebody I respect. That price feels too high.

I love the fact that I can publish whatever I want on my own website. It can be a place for me to be enthusiastic about things that excite me, and a place for me to rant about things that upset me. I estimate that the enthusiastic stuff outnumbers the ranty stuff by about ten to one, but negativity casts a disproportionately large shadow.

I need to get better at tempering my words. Not that I’m going to stop criticising bad decisions when I see them, but I need to make my intentions clearer …because just having good intentions is not enough. Throughout this post, I’ve mentioned repeatedly how much I respect the people I know working on the Chrome team. I should have said that in my original post.

Regressive Web Apps

There were plenty of talks about building for the web at this year’s Google I/O event. That makes a nice change from previous years when the web barely got a look in and you’d be forgiven for thinking that Google I/O was an event for Android app developers.

This year’s event showed just how big Google is, and how it doesn’t have one party line when it comes to the web and native. At the same time as there were talks on Service Workers and performance for the web, there was also an unveiling of Android Instant Apps—a full-frontal assault on the web. If you thought it was annoying when websites door-slammed you with intrusive prompts to install their app, just wait until they don’t need to ask you anymore.

Peter has looked a bit closer at Android Instant Apps and I think he’s as puzzled as I am. Either they are sandboxed to have similar permission models to the web (in which case, why not just use the web?) or they allow more access to native APIs, in which case they’re a security nightmare waiting to happen. I’m guessing it’s probably the former.

Meanwhile, a different part of Google is fighting the web’s corner. The buzzword du jour is Progressive Web Apps, originally defined by Alex as:

  • Responsive
  • Connectivity independent
  • App-like-interactions
  • Fresh
  • Safe
  • Discoverable
  • Re-engageable
  • Installable
  • Linkable

A lot of those points are shared by good native apps, but the first and last points in that list are key features of the web: being responsive and linkable.

Alas many of the current examples of so-called Progressive Web Apps are anything but. Flipkart and The Washington Post have made Progressive Web Apps that are getting lots of good press from Google, but are mobile-only.

Looking at most of the examples of Progressive Web Apps, there’s an even more worrying trend than the return to m-dot subdomains. It looks like most of them are concentrating so hard on the “app” part that they’re forgetting about the “web” bit. That means they’re assuming that modern JavaScript is available everywhere.

Alex pointed to shop.polymer-project.org as an example of a Progressive Web App that is responsive as well as being performant and resilient to network failures. It also requires JavaScript (specifically the Polymer polyfill for web components) to render some text and images in a browser. If you’re using the “wrong” browser—like, say, Opera Mini—you get nothing. That’s not progressive. That’s the opposite of progressive. The end result may feel very “app-like” if you’re using an approved browser, but throwing the users of other web browsers under the bus is the very antithesis of what makes the web great. What does it profit a website to gain app-like features if it loses its soul?

I’m getting very concerned that the success criterion for Progressive Web Apps is changing from “best practices on the web” to “feels like native.” That certainly seems to be how many of the current crop of Progressive Web Apps are approaching the architecture of their sites. I think that’s why the app-shell model is the one that so many people are settling on.

Personally, I’m not a fan of the app-shell model. I feel that it prioritises exactly the wrong stuff—the interface is rendered quickly while the content has to wait. It feels weirdly like a hangover from Appcache. I also notice it being used as a get-out-of-jail-free card, much like the ol’ “Single Page App” descriptor; “Ah, I can’t do progressive enhancement because I’m building an app shell/SPA, you see.”

But whatever. That’s just, like, my opinion, man. Other people can build their app-shelled SPAs and meanwhile I’m free to build websites that work everywhere, and still get to use all the great technologies that power Progressive Web Apps. That’s one of the reasons why I’ve been quite excited about them—all the technologies and methodologies they promote match perfectly with my progressive enhancement approach: responsive design, Service Workers, good performance, and all that good stuff.

I hope we’ll see more examples of Progressive Web Apps that don’t require JavaScript to render content, and don’t throw away responsiveness in favour of a return to device-specific silos. But I’m not holding my breath. People seem to be so caught up in the attempt to get native-like functionality that they’re willing to give up the very things that make the web great.

For example, I’ve seen people use a meta viewport declaration to disable pinch-zooming on their sites. As justification they point to the fact that you can’t pinch-zoom in most native apps, therefore this web-based app should also prohibit that action. The inability to pinch-zoom in native apps is a bug. By also removing that functionality from web products, people are reproducing unnecessary bugs. It feels like a cargo-cult approach to building for the web: slavishly copy whatever native is doing …because everyone knows that native apps are superior to websites, right?

Here’s another example of the cargo-cult imitation of native. In your manifest JSON file, you can declare a display property. You can set it to browser, standalone, or fullscreen. If you set it to standalone or fullscreen then, when the site is launched from the home screen, it won’t display the address bar. If you set the display property to browser, the address bar will be visible on launch. Now, personally I like to expose those kinds of seams:

The idea of “seamlessness” as a desirable trait in what we design is one that bothers me. Technology has seams. By hiding those seams, we may think we are helping the end user, but we are also making a conscious choice to deceive them (or at least restrict what they can do).

Other people disagree. They think it makes more sense to hide the URL. They have a genuine concern that users will be confused by launching a website from the home screen in a browser (presumably because the user’s particular form of amnesia caused them to forget how that icon ended up on their home screen in the first place).

Fair enough. We’ll agree to differ. They can set their display property how they want, and I can set my display property how I want. It’s a big web after all. There’s no one right or wrong way to do this. That’s why there are multiple options for the values.
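For reference, the manifest really is just a small JSON file, and display is one member among several (the name and icon below are made-up placeholders):

```json
{
  "name": "Example Site",
  "start_url": "/",
  "icons": [{ "src": "/icon.png", "sizes": "192x192", "type": "image/png" }],
  "display": "browser"
}
```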

Or, at least, that was the situation until recently…

Remember when I wrote about how Chrome on Android will show an “add to home screen” prompt if your Progressive Web App fulfils a few criteria?

  • It is served over HTTPS,
  • it has a manifest JSON file,
  • it has a Service Worker, and
  • the user visits it a few times.
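Incidentally, the service worker criterion only takes a few lines of registration code to satisfy. Here’s a minimal sketch, assuming the worker script itself lives at /sw.js:

```typescript
// Register a service worker so the site can keep working offline.
// Registration only succeeds on HTTPS (or localhost), which dovetails
// with the first criterion in the list above.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")
    .then((reg) => console.log("Service worker registered:", reg.scope))
    .catch((err) => console.error("Registration failed:", err));
}
```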

Well, those goalposts have moved. There is now a new criterion:

  • Your manifest file must not contain a display value of browser.

Chrome developers have decided that displaying URLs is not “best practice”. It was filed as a bug.

A bug.

Displaying URLs.

A bug.

I’m somewhat flabbergasted by this. The killer feature of the web—URLs—is being treated as something undesirable because URLs aren’t part of native apps. That’s not a failure of the web; that’s a failure of native apps.

Now, don’t get me wrong. I’m not saying that everyone should be setting their display property to browser. That would be far too prescriptive. I’m saying that it should be a choice. It should depend on the website. It should depend on the expectations of the users of that particular website. To declare that all users of all websites will be confused by seeing a URL is so presumptuous and arrogant that it beggars belief.

I wouldn’t even have noticed this change of policy if it weren’t for the newly-released Lighthouse tool for testing Progressive Web Apps. The Session gets a good score but under “Best Practices” there was a red mark against the site for having display: browser. Turns out that’s the official party line from Chrome.

Just to clarify: you can have a site that has literally no HTML or turns away entire classes of devices, yet officially follows “best practices” and gets rewarded with an “add to home screen” prompt. But if you have a blazingly fast responsive site that works offline, you get nothing simply because you don’t want to hide URLs from your users:

I want people to be able to copy URLs. I want people to be able to hack URLs. I’m not ashamed of my URLs …I’m downright proud.

Stuart argues that this is a paternal decision:

The app manifest declares properties of the app, but the display property isn’t about the app; it’s about how the app’s developer wants it to be shown. Do they want to proudly declare that this app is on the web and of the web? Then they’ll add the URL bar. Do they want to conceal that this is actually a web app in order to look more like “native” apps? Then they’ll hide the URL bar.

I think there’s something to that, but digging deeper, developers and designers don’t make decisions like that in isolation. They’re generally thinking about what’s best for users. So, yes, absolutely, different apps will have different display properties, but that shouldn’t be down to the belief system of the developer; it should be down to the needs of the users …the specific needs of the specific users of that specific app. For the Chrome team to come down on one side or the other and arbitrarily declare that one decision is “correct” for every single Progressive Web App that is ever going to be built …that’s a political decision. It kinda feels like an abuse of power to me. Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.

The other factors that contribute to the “add to home screen” prompt are pretty uncontroversial:

  • Sites should be served over a secure connection: that’s pretty hard to argue with.
  • Sites should be resilient to network outages: I don’t think anyone is going to say that’s a bad idea.
  • Sites should provide some metadata in a manifest file: okay, sure, it’s certainly not harmful.
  • Sites should obscure their URL …whoa! That feels like a very, very different requirement, one that imposes one particular opinion onto everyone who wants to participate.

This isn’t the first time that Chrome developers have made a move against the address bar. It’s starting to grind me down.

Up until now I’ve been a big fan of Progressive Web Apps. I understood them to be combining the best of the web (responsiveness, linkability) with the best of native (installable, connectivity independent). Now I see that balance shifting towards the native end of the scale at the expense of the web’s best features. I’d love to see that balance restored with a little less emphasis on the “Apps” and a little more emphasis on the “Web.” Now that would be progressive.

Webiness

John Gruber quite rightly skewers a paywall-protected “sky is falling” piece in the Wall Street Journal called The Web Is Dying; Apps Are Killing It, writing that native apps are part of the web:

They’re just superior clients to open Internet services.

This is something I wrote about earlier this year:

There’s a whole category of native apps that could just as easily be described as “artisanal web browsers” (and if someone wants to write a browser extension that replaces every mention of “native app” with “artisanal web browser” that would be just peachy).

Instagram’s native app is a web browser.

Facebook’s native app is a web browser.

Twitter’s native app is a web browser.

In that same piece, I try to define exactly what the web is:

Well, the unsexy definition I’ve used in the past is that the web consists of files (e.g. HTML, CSS, JavaScript), accessible at URLs, delivered over HTTP.

John also gives a definition of what the web is:

There are two big four-letter “H” acronyms that powered the web from the beginning: HTML (client), and HTTP (networking protocol). Native apps are just an alternative to HTML running in a web browser (and many native apps still use HTML web views embedded within the apps themselves to render parts of their interface). Almost all native apps use HTTP/S for networking, though.

Notice the difference? Whereas John talks about two things that define the web (HTTP/S and HTML), I talk about three: HTTP(S), HTML, and URLs:

But to be honest, I don’t think that the Hypertext Transfer Protocol is the important part of the web; it’s the URLs that really matter. It’s the addressability of the files that’s the killer app of the web in my opinion.

URLs are what give the web its reach, and that’s what’s still missing from native apps.

But John’s fundamental point that native apps and the web are not fundamentally opposed? I completely agree with that. They are complementary. Irakli Nadareishvili wrote about this false dichotomy recently in a post called Responsive Web Design or Native Mobile Apps?:

Native mobile applications are not going anywhere and the future of all websites is to be responsive. These two assertions are not mutually exclusive, they are complementary – don’t create apps when what you actually need is a website; but also don’t pretend webapps can completely replace native applications, because they can’t.

It’s also worth remembering that even if you’re using a native app—like, say, Facebook or Twitter—you’re still going to spend a lot of time following links and reading stuff that’s rendered in the app, but that lives out on the world wide web. And the reason why those apps can access those resources is because those resources have URLs.

URLs are not an implementation detail. The URI is the thing.

Seams

You can listen to an audio version of Seams.

“The function of science fiction,” said Ray Bradbury, “is not only to predict the future, but to prevent it.”

Dystopias are the default setting for science fiction. It’s rare to find utopian sci-fi, and when you do—as in the post-singularity Culture novels of Iain M. Banks—there’s always more than a germ of dystopia; the ustopias that Margaret Atwood speaks of.

You’ve got your political dystopias—1984 and all its imitators. Then there are alien invasion dystopias, machine-intelligence dystopias, and a whole slew of post-apocalyptic dystopias: nuclear war, pandemic disease, environmental collapse, genetic engineering …take your pick. From the cosy catastrophes of John Wyndham to Cormac McCarthy’s The Road, this is the stock-in-trade of speculative fiction.

Of all these undesirable futures, the one that troubles me more than any other is the Wall·E dystopia. I’m not talking about the environmental wasteland depicted on Earth. I mean the dystopia depicted aboard the generation starship The Axiom. Here, humanity’s every need is catered to without requiring any thought. And so humanity atrophies, becoming physically obese and intellectually lazy.

It’s not a new idea. H. G. Wells had already shown us a distant future like this in his classic novel The Time Machine. In the far future of that book’s timeline, humanity splits into two. The savagery of the cannibalistic Morlocks is contrasted with the docile passive stupidity of the Eloi, but as Jaron Lanier points out, both endpoints are equally horrific.

In Wall·E, the Eloi have advanced technology. Their technology has been designed according to a design principle enshrined in the title of a Dead Kennedys album: Give Me Convenience Or Give Me Death.

That’s the reason why the Wall·E dystopia disturbs me so much. It’s all-too believable. For many years now, the rallying cry of digital designers has been epitomised by the title of Steve Krug’s terrific book, Don’t Make Me Think. But what happens when that rallying cry is taken too far? What happens when it stops being “don’t make me think while I’m trying to complete a task” and becomes simply “don’t make me think” full stop?

Convenience. Ease of use. Seamlessness.

On the face of it, these all seem like desirable traits in digital and physical products alike. But they come at a price. When we design, we try to do the work so that the user doesn’t have to. We do the thinking so the user doesn’t have to. Don’t make the user think. But taken too far, that mindset becomes dangerous.

Marshall McLuhan said that every extension is also an amputation. As we augment the abilities of people to accomplish their tasks, we should be careful not to needlessly curtail what they can do:

Here we are, a society hell bent on extending our reach through phones, through computers, through “seamless integration” and yet all along the way we’re unwittingly losing perhaps as much as we gain. The mediums we create are built to carry out specific tasks efficiently, but by doing so they have a tendency to restrict our options for accomplishing that task by other means. We begin to learn the “One” way to do it, when in fact there are infinite ways. The medium begins to restrict our thinking, our imagination, our potential.

The idea of “seamlessness” as a desirable trait in what we design is one that bothers me. Technology has seams. By hiding those seams, we may think we are helping the end user, but we are also making a conscious choice to deceive them (or at least restrict what they can do).

I see this a lot in the world of web development. We’re constantly faced with challenges like dealing with users on slow networks or small screens. So we try to come up with solutions (bandwidth media queries, responsive images) that have at their heart an assumption that we know better than the end user what they should get.

I’m not saying that everything should be an option in a menu for the user to figure out—picking smart defaults is very much part of our job. But I do think there’s real value in giving the user the final choice.
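
To make that concrete, here’s a rough sketch of what “smart default, user override” could look like for images: serve the lightweight version by default, but let the user explicitly request the full-quality one. The markup and class names are hypothetical—this is an illustration of the principle, not a definitive pattern:

    // A minimal sketch, assuming markup along these lines:
    //   <img src="cats-small.jpg" data-fullsrc="cats-large.jpg" alt="Cats">
    //   <a class="load-full" href="cats-large.jpg">Load the high-res version</a>
    // The small image is the smart default; the final choice stays with the user.
    document.querySelectorAll('a.load-full').forEach(function (link) {
      link.addEventListener('click', function (event) {
        event.preventDefault();
        var img = link.previousElementSibling;
        img.src = img.dataset.fullsrc; // swap in the full-quality version
        link.remove();                 // the choice has been made
      });
    });

Without JavaScript, the link still works—it simply takes you straight to the full-size image—so the seam stays visible.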

I remember Jake giving a good example of this. If he’s travelling and he’s on a 3G network on his phone, or using shitty hotel WiFi on his laptop, and someone sends him a link to a video of some cats, he doesn’t mind if he gets the low-quality version as long as he gets to see the feline shenanigans in short order. But if he’s in the same situation and someone sends him a link to the just-released trailer for the new Star Trek movie, he’s willing to wait for hours so that he can watch in high-definition.

That’s a choice. All too often, these kinds of choices are pre-made by designers and developers instead of being offered to the end user. We probably mean well, but there’s a real danger in assuming that just because someone is using a particular device, we can infer what their context is:

Mind reading is no way to base fundamental content decisions.

My point is that while we don’t want to overwhelm the user with choices, we also need to be careful not to unintentionally remove valuable choices that can empower people. In our quest to make experiences seamless, we run the risk of also making those experiences rigid and inflexible.

The drive for a “seamless experience” has been used to justify some harsh amputations. When Twitter declared war on the very developers it used to champion, and changed its API and terms of service so that tweets had to be displayed the same way everywhere, it was done in the name of “a consistent user experience.” Twitter knows best.

The web is made up of parts and there are seams between those parts: HTML, HTTP, and URLs. The software that can expose or hide those seams is the web browser. Web browsers are made by human beings, and it’s the mindset and assumptions of those human beings that determine whether web browsers enable users to make use of those seams or prevent them from doing so.

“View source” is a seam that exposes the HTML lying beneath every web page. That kind of X-ray vision can be quite powerful. Clearly it’s not an important feature for most users, but it is directly responsible for showing people how web pages are made …and intimating that anyone can do it. In the introduction to my first book I thanked “view source” along with my other teachers like Jeff Veen, Steve Champeon, and Jeffrey Zeldman.

These days, browsers don’t like to expose “view source” as easily as they once did. It’s hidden amongst the developer tools. There’s an assumption there that it’s not intended for regular users. The browser makers know best.

There are seams between the technologies that make up a web page: HTML, CSS, and JavaScript. The ability to enable or disable those layers can be empowering. It has become harder and harder to disable JavaScript in the browser. Another little amputation. The browser makers know best.

The CSS that styles web pages can be overridden by the end user. This is not a bug. It is a very powerful feature. That feature is being removed:

I understand that vendors can do whatever they want to control how you experience the web, because it is their software, their product, but removing user stylesheets feels sooo un-web to me, which is irony. A browser’s largest responsibility is to give people access to the web. It’s like the web is this open hand, but software is this closed fist.

Then there’s the URL. The ultimate seam.

Historically, browsers have exposed this seam, but now—just as with “view source” and user stylesheets—the visibility of the URL is being relegated to being a power-user tool.

The ultimate amputation.

The irony here is that the justification for this change is not the usual mantra of providing “a more seamless user experience.” Instead, the justification is supposedly security.

This strikes me as really strange. Security is the one area where seamlessness is definitely not a desirable characteristic. A secure system requires people to be mindful and aware of their situation. This is certainly true on the web, as Tom points out:

Hiding information away makes me less able to make decisions: it makes me a less informed user.

The whole reason that phishing is a problem is because users don’t pay any bloody attention to what they see in their location bar. Putting less information in the location bar makes the location bar less useful and thus there’s less point paying any attention to it.

Tom has hit on the fundamental mismatch here. Chrome is a piece of software that wants to provide a good user experience—“don’t make me think!”—while at the same time trying to make users mindful of their surroundings:

Security requires educated, pro-active, informed thinking users.

Usability is about making the whole process of using the web seamless and thoughtless: a child should be able to do it.

So from the security standpoint, obfuscating the URL is exactly the wrong thing to do.

In order to actually stay safe online, you need to see the “seams” of the web, you need to pay attention, use your brain.

Chrome knows best.

Making it harder to “view source” might seem like an inconsequential decision. Removing the ability to apply user stylesheets might seem like an inconsequential decision. Heck, even hiding the URL might seem like an inconsequential decision. But each one of those decisions has repercussions. And each one of those decisions reflects an underlying viewpoint.

Make no mistake, all software is political. We talk about opinionated software but really, all software is opinionated, whether we like it or not. Seemingly inconsequential interface decisions are actually reflections of assumptions, biases and beliefs.

As Nat points out, like all political decisions, this is about power:

There’s been much debate about whether the URLs are ‘ugly’ or ‘beautiful’ and whether people really understand them. This debate misses the point.

The URLs are the cornerstone of the interconnected, decentralised web. Removing the URLs from the browser is an attempt to expand and consolidate centralised power.

If that’s the case, then it really doesn’t matter what we think about Chrome removing visible URLs. What appears to be a design decision about the user interface is in fact a manifestation of a much deeper vision. It’s a vision of a future where people can have everything their heart desires without having to expend needless thought. It’s a bright future filled with seamless experiences.

Welcome aboard The Axiom.

Buy n Large knows best.

URLy warning

I’m genuinely shocked that Jake thinks that Chrome hiding URLs is a good thing. On the one hand, he says:

The URL is the share button of the web, and it does that better than any other platform. Linkability and shareability is key to the web, we must never lose that…

I absolutely agree with him there. But I very much disagree when he says:

…and these changes do not lose that.

The method he describes for getting at a URL to share is this:

clicking the origin chip or hitting ⌘-L.

Your average user is no more likely to figure out how to do that than they are to figure out how to view source (something that Chrome buried as a “developer” feature some time ago).

Cennydd recently dismissed the idea that URLs are beautiful. I mostly agree with him. The protocol portion of the URL is pretty pointless, and the domain name and TLD are never what I would describe as “beautiful”. No, when I talk about beautiful URLs, I mean the path that comes after the protocol, domain name, and TLD gumpf …the very bit that Chrome is looking to hide.

URLs are universal. They work in Firefox, Chrome, Safari, Internet Explorer, cURL, wget, your iPhone, Android and even written down on sticky notes. They are the one universal syntax of the web. Don’t take that for granted.

URLs are for humans. Design them for humans.

Of course your average user probably won’t even know what a URL is, nor should they have to. But they know what a link is. They know that, until now, they could copy the “link” from the top of their browser and paste it into an email, or a text message, or a word-processing document.

If this Chrome experiment goes forward, we can kiss all that goodbye.

The security issue that Jake outlines is that browsers need to make the domain name portion of the URL clearly visible. I hope that the smart folks working on Chrome can figure out a way to do that without castrating the browser’s ability to easily share links.

It’s a classic case of:

  1. Something must be done!
  2. This (killing URLs) is something.
  3. Something has been done.

Technically, obfuscating the URL seems to solve the security issue. But technically, decapitation seems to solve a headache.

Scrollin’, scrollin’, scrollin’

A few weeks ago, when I changed how the front page of this site works, I wrote about “streams”, infinite scrolling, and the back button:

Anyway, you’ll notice that the new home page of adactio.com is still using pagination. That’s related to another issue and I suspect that this is the same reason that we haven’t seen search engines like Google introduce stream-like behaviour instead of pagination for search results: what happens when you’ve left a stream but you use the browser’s back button to return to it?

In all likelihood you won’t be returned to the same spot in the stream that you were in before. Instead you’re more likely to be dumped back at the default list view (the first ten or twenty items).

That’s exactly what Kyle Neath is trying to solve with this nifty experiment combining infinite scroll with the HTML5 History API. I should investigate it further. Although, like I said in my post, I would probably replace automatic infinite scrolling with an explicit user-initiated action:

Interpreting one action by a user—scrolling down the screen—as implicit permission to carry out another action—load more items—is a dangerous assumption.

Kyle’s code is available from GitHub (of course). As written, it relies on some library support—like jQuery—but with a little bit of tweaking, I’m sure it could be rewritten to remove any dependencies (hint, hint, lazy web).
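
To illustrate the idea, here’s a dependency-free sketch. The container, the “load more” link, and the URL scheme are all assumptions on my part, so treat this as a starting point rather than a faithful rewrite of Kyle’s code. Each “load more” click fetches the next page, appends its items, and uses the History API to record where in the stream you are:

    // Assumes a container (.stream), a "load more" link pointing at
    // the next page (e.g. /?page=2), and server-rendered pages that
    // share the same markup.
    var stream = document.querySelector('.stream');
    var more = document.querySelector('a.load-more');

    more.addEventListener('click', function (event) {
      event.preventDefault();
      var nextPage = more.href;
      fetch(nextPage)
        .then(function (response) { return response.text(); })
        .then(function (html) {
          var doc = new DOMParser().parseFromString(html, 'text/html');
          // Append the next page’s items to the current stream.
          doc.querySelectorAll('.stream > *').forEach(function (item) {
            stream.appendChild(item);
          });
          // Update the address bar so the URL—and therefore the back
          // button—reflects how much of the stream is on screen.
          history.replaceState(null, '', nextPage);
          // Point the link at the page after this one, if there is one.
          var next = doc.querySelector('a.load-more');
          if (next) {
            more.href = next.href;
          } else {
            more.remove();
          }
        });
    });

This only gets you partway: for someone returning to /?page=3 to see everything above that point, the server would still need to render pages one to three. But the principle stands—let the URL carry the state.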

URIslands in the stream

For over a decade the home page of this website has effectively been a splash screen.

Well …not exactly. I mean, it’s not as if it contained an animation and a “skip intro” link. But it has been very minimalist: a brief one-sentence explanation of what this website is and a brief one-sentence description of the latest post I’ve published.

Most blogs are standalone sites—i.e. the blog isn’t part of a larger site—so the home page and the latest blog posts are one and the same. My site is a bit different. The blog part—my journal—is just one piece. There’s also the links section and the articles section. That raises an interesting question: exactly what should the homepage contain?

I’m a great believer in well-designed URLs, but oftentimes the home page is something of an exception (one of the reasons I advocate designing the home page last). The URL / doesn’t tell you anything about the resource. There are basically two ways to go: keep it really, really minimal (like I’ve been doing for ten years) or make it a patchwork containing a little bit of everything (the way that most news websites work).

For the past few months I’ve been contemplating making the switch from the minimalist to the maximalist approach for the home page of this site. On the one hand, I think it’s RESTfully correct to have the URL adactio.com/ return a terse description of what the site is, with links to further resources. On the other hand …it’s a splash screen.

After much deliberation, I’ve decided to flip the switch.

It’s a bit of a shame. I quite enjoyed being one of the last people I know to have a quirky intro page. But the new home page alleviates a concern I’ve had for a while. I get the feeling that a lot of people have only been paying attention to what’s written in my journal without realising that I post multiple times a day to the links section. The new homepage shows everything—journal entries, links, and articles—grouped by date. It is, if you like, a stream.

It’s exactly the kind of stream that Anil Dash writes about in Stop Publishing Web Pages:

Most users on the web spend most of their time in apps. The most popular of those apps, like Facebook, Twitter, Gmail, Tumblr and others, are primarily focused on a single, simple stream that offers a river of news which users can easily scroll through, skim over, and click on to read in more depth.

Glossing over the lack of definition for “app”, there’s a good point here. The “stream” idea makes a lot of sense …in the right situation. That situation is the list view. If you think about the situations where the never-ending stream has been employed—Twitter, Facebook, Instagram, Pinterest—they are views of lists, usually reverse-chronologically ordered lists.

Going back to URL design, these kinds of list views are the ones that often have a single URL filtered with a query string. You’re much more likely to see /newest?page=2 or /popular?start=10 than you are to see /newest/page2 or /popular/10-20. There’s a good reason for that. While the kind of resource located at the URL remains unchanged—a list of items—the specifics are likely to change every day or hour or even minute—which items are in the list.
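
The distinction is easy to see with the browser’s URL API (example.com is just a placeholder):

    // The path names the kind of resource; the query string names
    // today’s slice of it.
    var url = new URL('https://example.com/newest?page=2');
    console.log(url.pathname);                  // "/newest" — stable
    console.log(url.searchParams.get('page'));  // "2" — changes all the time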

So traditionally, list views have been paginated using query strings. The streams that Anil is talking about are an alternative way of navigating list views, one that does away with pagination and query strings. I think this way of navigating a list view can work well but, as always, the devil is in the details.

First of all, there’s the issue of when to append to the stream. This could be triggered by the user with a link—“load more”—or you could assume that when the user gets to the current end of the list, they’ll automatically want to load more …the dreaded infinite scroll.

Frank has made the case against infinite scroll on psychological grounds.

Quite apart from any psychological implications, there’s a usability issue here. Interpreting one action by a user—scrolling down the screen—as implicit permission to carry out another action—load more items—is a dangerous assumption. It’s similar to the tyranny of mouseover:

If I click on a link, I am initiating an action. If I fill in a form and press a submit button, I am initiating an action. But if I move my mouse over a page element, I am not initiating an action.

Oh, and if your site has a footer, please do not use infinite scroll. Think about it.

In case you hadn’t guessed, I’m a big proponent of allowing the user to explicitly request more items to be appended to a stream rather than using the infinite-scroll pattern.

That said, you could introduce a nice compromise. What if, when the user scrolls down the screen, you begin pre-fetching the next items in the list? That way, when the user explicitly requests more items they’ll load lickety-split.
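
Here’s one way that compromise might look—selectors hypothetical, as before: quietly pre-fetch the next page once the user scrolls near the end of the list, but only append the new items when they explicitly ask for them.

    // Pre-fetch on scroll, render on request.
    var loadMore = document.querySelector('a.load-more');
    var fetched = null;

    // When the "load more" link scrolls into view, start fetching the
    // next page in the background—but don’t render anything yet.
    new IntersectionObserver(function (entries, observer) {
      if (entries[0].isIntersecting && !fetched) {
        fetched = fetch(loadMore.href).then(function (response) {
          return response.text();
        });
        observer.disconnect();
      }
    }).observe(loadMore);

    // Only when the user explicitly clicks do the new items appear.
    loadMore.addEventListener('click', function (event) {
      event.preventDefault();
      var pending = fetched || fetch(loadMore.href).then(function (response) {
        return response.text();
      });
      pending.then(function (html) {
        // Parse the HTML and append the items, as in the earlier sketch.
      });
    });

The pre-fetching is invisible; the rendering still belongs to the user.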

Anyway, you’ll notice that the new home page of adactio.com is still using pagination. That’s related to another issue and I suspect that this is the same reason that we haven’t seen search engines like Google introduce stream-like behaviour instead of pagination for search results: what happens when you’ve left a stream but you use the browser’s back button to return to it?

In all likelihood you won’t be returned to the same spot in the stream that you were in before. Instead you’re more likely to be dumped back at the default list view (the first ten or twenty items).

If your stream is self-contained—like Instagram or Pinterest—then there’s no problem. Twitter attempts to get around the problem by showing you the linked content inline where possible (with some judicious use of oEmbed) and by opening external links in a new window or tab—not so much the cure that kills the patient as the cure that ignores the problem.

In my case, my home page stream is crammed full of hyperlinks. Until there’s a way to make unpaginated streams work nicely with the back button, I’ll stick with pagination.

Anyway, I hope you like the new home page. If you do, you may want to subscribe to the RSS equivalent of the combined stream of my journal, my links, and my articles.