Journal tags: complexity


Multi-page web apps

I received this email recently:

Subject: multi-page web apps

Hi Jeremy,

lately I’ve been following you through videos and texts and I’m curious as to why you advocate the use of multi-page web apps and not single-page ones.

Perhaps you can refer me to some sources where your position and reasoning is evident?

Here’s the response I sent…

Hi,

You can find a lot of my reasoning laid out in this (short and free) online book I wrote called Resilient Web Design:

https://resilientwebdesign.com/

The short answer to your question is this: user experience.

The slightly longer answer…

For most use cases, a website (or multi-page app if you prefer) is going to provide the most robust experience for the greatest number of users. That’s because a user’s web browser takes care of most of the heavy lifting.

Navigating from one page to another? That’s taken care of with links.

Gathering information from a user to process on a server? That’s taken care of with forms.

This frees me up to concentrate on the content and the design without having to reinvent the wheels of links and form fields.

These (let’s call them) multi-page apps are stateless, and for most use cases that’s absolutely fine.

There are some cases where you’d want state to persist across pages. Let’s say you’re playing a song, or a podcast episode. Ideally you’d want that player to continue playing seamlessly even as the user navigates around the site. In that situation, a single-page app would be a suitable architecture.

But that architecture comes at a cost. Now you’ve got to stop the browser doing what it would normally do with links and forms. It’s up to you to recreate that functionality. And you can’t do it with HTML, a robust, fault-tolerant declarative language. You need to reimplement all that functionality in JavaScript, a less tolerant, more brittle language.

Then you’ve got to ship all that code to the user before they can use your site. It might be JavaScript code you’ve written yourself or it might be a third-party library designed for building single-page apps. Either way, the user pays a download tax (and a parsing tax, and an execution tax). Whereas with links and forms, all of that functionality is pre-bundled into the user’s web browser.
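
To give a sense of the scale of that reimplementation, here’s a minimal sketch of just the link-handling part. This is my own illustration, not anyone’s production code, and it assumes a hypothetical getPage() function that fetches and returns the markup for a URL:

document.addEventListener('click', async (event) => {
  const link = event.target.closest('a');
  // Only intercept same-site links; let the browser handle everything else.
  if (!link || link.origin !== location.origin) return;
  event.preventDefault();
  history.pushState(null, '', link.href);
  document.querySelector('main').innerHTML = await getPage(link.href);
});

// The back and forward buttons stop working unless you handle them too:
window.addEventListener('popstate', async () => {
  document.querySelector('main').innerHTML = await getPage(location.href);
});

And that’s before error handling, scroll restoration, focus management, or loading states: all things a plain link gives you for free.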

So that’s my reasoning. At least nine times out of ten, a multi-page approach is leaner, more robust, and simpler.

Like I said, there are times when a single-page approach makes sense—it all comes down to whether state needs to be constantly preserved. But these use cases are the exceptions, not the rule.

That’s why I find the framing of your question a little concerning. It should be inverted. The default assumption should be a multi-page approach (which is simply the way the web works). Deciding to take a JavaScript-driven single-page approach should be the exception.

It’s kind of like when people ask, “Why don’t you have children?” Surely it’s the decision to have a child that should require deliberation and commitment, not the decision to go without.

When it comes to front-end development, I’m worried that we’ve reached a state where the more complex, over-engineered approach is viewed as the default.

I may be committing a fundamental attribution error here, but I think that we’ve reached this point not because of any consideration for users, but rather because of how it makes us developers feel. Perhaps building an old-fashioned website that uses HTML for navigation feels too easy, like it’s beneath us. But building an “app” that requires JavaScript just to render text on a screen feels like real programming.

I hope I’m wrong. I hope that other developers will start to consider user experience first and foremost when making architectural decisions.

Anyway. That’s my answer. User experience.

Cheers,

Jeremy

Principles and the English language

I work with words. Sometimes they’re my words. Sometimes they’re words that my colleagues have written:

One of my roles at Clearleft is “content buddy.” If anyone is writing a talk, or a blog post, or a proposal and they want an extra pair of eyes on it, I’m there to help.

I also work with web technologies, usually front-of-the-front-end stuff. HTML, CSS, and JavaScript. The technologies that users experience directly in web browsers.

I think a lot about design principles for the web. The two principles I keep coming back to are the robustness principle and the principle of least power.

When it comes to words, the guide that I return to again and again is George Orwell, specifically his short essay, Politics and the English Language.

Towards the end, he offers some rules for writing.

  1. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
  2. Never use a long word where a short one will do.
  3. If it is possible to cut a word out, always cut it out.
  4. Never use the passive where you can use the active.
  5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
  6. Break any of these rules sooner than say anything outright barbarous.

These look a lot like design principles. Not only that, but some of them look like specific design principles. Take the robustness principle:

Be conservative in what you send, be liberal in what you accept.

That first part applies to Orwell’s third rule:

If it is possible to cut a word out, always cut it out.

Be conservative in what words you send.

Then there’s the principle of least power:

Choose the least powerful language suitable for a given purpose.

Compare that to Orwell’s second rule:

Never use a long word where a short one will do.

That could be rephrased as:

Choose the shortest word suitable for a given purpose.

Or, going in the other direction, the principle of least power could be rephrased in Orwell’s terms as:

Never use a powerful language where a simple language will do.

Oh, I like that! I like that a lot.

Dev perception

Chris put together a terrific round-up of posts recently called Simple & Boring. It links off to a number of great articles on the topic of complexity (and simplicity) in web development.

I had linked to quite a few of the articles myself already, but one I hadn’t seen was from David DeSandro, who wrote New tech gets chatter:

You don’t hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

I think that’s a very good point.

It’s relatively easy to write and speak about new technologies. You’re excited about them, and there’s probably an eager audience who can learn from what you have to say.

It’s trickier to write something insightful about a tried and trusted (perhaps even boring) technology that’s been around for a while. You could maybe write little tips and tricks, but I bet your inner critic would tell you that nobody’s interested in hearing about that old tech. It’s boring.

The result is that what’s being written about is not a reflection of what’s being widely used. And that’s okay …as long as you know that’s the case. But I worry that there’s a perception problem. Because of the outsize weighting of new and exciting technologies, a typical developer could feel that their skills are out of date and the technologies they’re using are passé …even if those technologies are actually in wide use.

I don’t know about you, but I constantly feel like I’m behind the curve because I’m not currently using TypeScript or GraphQL or React. Those are all interesting technologies, to be sure, but the time to pick any of them up is when they solve a specific problem I’m having. Learning a new technology just to mitigate a fear of missing out isn’t a scalable strategy. It’s one thing to investigate a technology because you genuinely find it exciting; it’s quite another to feel like you must investigate it in order to survive. That way lies burn-out.

I find it very grounding to talk to Drew and Rachel about the people using their Perch CMS product. These are working developers, but they are far removed from the tools and frameworks forged in the startup world.

In a recent (excellent) article comparing the performance of Formula One websites, Jake made this observation at the end:

However, none of the teams used any of the big modern frameworks. They’re mostly Wordpress & Drupal, with a lot of jQuery. It makes me feel like I’ve been in a bubble in terms of the technologies that make up the bulk of the web.

I think this is very astute. I also think it’s completely understandable to form ideas about what matters to developers by looking at what’s being discussed on Twitter, what’s being starred on GitHub, what’s being spoken about at conferences, and what’s being written about on Ev’s blog. But it worries me when I see browser devrel teams focusing their efforts on what appear to be the needs of typical developers based on the amount of ink spilled and breath expelled.

I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that matter far more than Uber for dogs.

Trys wrote a great blog post called City life, where he compares his experience of doing CMS-driven agency work with his experience working at a startup in Shoreditch:

I was chatting to one of the team about my previous role. “I built two websites a month in WordPress”.

They laughed… “WordPress! Who uses that anymore?!”

Nearly a third of the web as it turns out - but maybe not on the Silicon Roundabout.

I’m not necessarily suggesting that there should be more articles and talks about older, more established technologies. Conferences in particular are supposed to give audiences a taste of what’s coming—they can be a great way of quickly finding out what’s exciting in the world of development. But we shouldn’t feel bad if those topics don’t match our day-to-day reality.

Ultimately what matters is building something—a website, a web app, whatever—that best serves end users. If that requires a new and exciting technology, that’s great. But if it requires an old and boring technology, that’s also great. What matters here is appropriateness.

When we’re evaluating technologies for appropriateness, I hope that we will do so through the lens of what’s best for users, not what we feel compelled to use based on a gnawing sense of irrelevancy driven by the perceived popularity of newer technologies.

Robustness and least power

There’s a great article by Steven Garrity over on A List Apart called Design with Difficult Data. It runs through the advantages of using unusual content to stress-test interfaces, referencing Postel’s Law, AKA the robustness principle:

Be conservative in what you send, be liberal in what you accept.

Even though the robustness principle was formulated for packet-switching, I see it at work in all sorts of disciplines, including design. A good example is in best practices for designing forms:

Every field you ask users to fill out requires some effort. The more effort is needed to fill out a form, the less likely users will complete the form. That’s why the foundational rule of form design is shorter is better — get rid of all inessential fields.

In other words, be conservative in the number of form fields you send to users. But then, when it comes to users filling in those fields:

It’s very common for a few variations of an answer to a question to be possible; for example, when a form asks users to provide information about their state, and a user responds by typing their state’s abbreviation instead of the full name (for example, CA instead of California). The form should accept both formats, and it’s the developer’s job to convert the data into a consistent format.

In other words, be liberal in what you accept from users.
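
Here’s a minimal sketch of what that might look like in practice (a hypothetical example of mine, not from the article): accept a few formats from the user, then convert them to one canonical format.

// Map the liberal inputs we accept to the conservative format we store.
const states = {
  'ca': 'California',
  'california': 'California'
  // …and so on for the other states.
};

function normaliseState(input) {
  const key = input.trim().toLowerCase();
  return states[key] || null; // null means: ask the user to clarify
}

normaliseState(' CA ');       // 'California'
normaliseState('california'); // 'California'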

I find the robustness principle to be an immensely powerful way of figuring out how to approach many design problems. When it comes to figuring out what specific tools or technologies to use, there’s an equally useful principle: the rule of least power:

Choose the least powerful language suitable for a given purpose.

On the face of it, this sounds counter-intuitive; why forgo a powerful technology in favour of something less powerful?

Well, power comes with a price. Powerful technologies tend to be more complex, which means they can be trickier to use and trickier to swap out later.

Take the front-end stack, for example: HTML, CSS, and JavaScript. HTML and CSS are declarative, so you don’t get as much precise control as you do with an imperative language like JavaScript. But JavaScript is also more brittle: where HTML and CSS simply skip over anything they don’t understand, a single JavaScript error can stop the whole script from running.
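
To make that difference concrete, here’s a small illustration of my own. First, CSS shrugging off a mistake:

p {
  colour: red;  /* unknown property: silently ignored */
  color: blue;  /* the rest of the rule still applies */
}

Now the same kind of typo in JavaScript:

documnt.querySelector('p').style.color = 'blue';
// ReferenceError: documnt is not defined
// …and nothing after this line in the script runs at all.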

As a general rule, it’s always worth asking if you can accomplish something with a less powerful technology:

In the web front-end stack — HTML, CSS, JS, and ARIA — if you can solve a problem with a simpler solution lower in the stack, you should. It’s less fragile, more foolproof, and just works.

  • Instead of using JavaScript to do animation, see if you can do it in CSS instead.
  • Instead of using JavaScript to do simple client-side form validation, try to use HTML input types and attributes like required.
  • Instead of using ARIA to give a certain role value to a div or span, try to use a more suitable HTML element instead.

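Taking the second of those suggestions as an example, here’s a minimal sketch of my own (hypothetical markup and script, not from the quoted article). The imperative version, higher up the stack:

const form = document.querySelector('form');
form.addEventListener('submit', (event) => {
  const email = document.querySelector('#email');
  // A hand-rolled check that has to be shipped, parsed, and executed:
  if (!email.value.includes('@')) {
    event.preventDefault();
    alert('Please enter a valid email address.');
  }
});

And the declarative version, lower in the stack, which the browser validates for you:

<input type="email" id="email" required>
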
It sounds a lot like the KISS principle: Keep It Simple, Stupid. But whereas the KISS principle can be applied within a specific technology—like keeping your CSS manageable—the rule of least power is all about evaluating technologies and choosing the most appropriate one for the task at hand.

There are some associated principles, like YAGNI: You Ain’t Gonna Need It. That helps you avoid picking a technology that’s too powerful for your current needs just because you think you might need it in the future: that’s premature optimisation. Or, as Rachel put it, stop solving problems you don’t yet have:

So make sure every bit of code added to your project is there for a reason you can explain, not just because it is part of some standard toolkit or boilerplate.

There’s no shortage of principles, laws, and rules out there, and I find many of them very useful, but if I had to pick just two that are particularly applicable to my work, they would be the robustness principle and the rule of least power.

After all, if they’re good enough for Tim Berners-Lee…

CSS

Last month I went to CSS Day in Amsterdam, as an attendee this year, not a speaker. It was an excellent conference comprising the titular CSS day and a Browser API Special the day before.

By the end of CSS Day, my brain was full. Experiencing the depth of knowledge that’s contained in CSS now made me appreciate how powerful a language it is. I mean, the basics of CSS—selectors, properties, and values—can be grasped in a day. But you can spend a lifetime trying to master the details. Heck, you could spend a lifetime trying to master just one part of CSS, like layout, or text. And there would always be more to learn.

Unlike a programming language that requires knowledge of loops, variables, and other concepts, CSS is pretty easy to pick up. Maybe it’s because of this that it has gained the reputation of being simple. It is simple in the sense of “not complex”, but that doesn’t mean it’s easy. Mistaking “simple” for “easy” will only lead to heartache.

I think that’s what’s happened with some programmers coming to CSS for the first time. They’ve heard it’s simple, so they assume it’s easy. But then when they try to use it, it doesn’t work. It must be the fault of the language, because they know that they are smart, and this is supposed to be easy. So they blame the language. They say it’s broken. And so they try to “fix” it by making it conform to a more programmatic way of thinking.

I can’t help but think that they would be less frustrated if they would accept that CSS is not easy. Simple, yes, but not easy. Using CSS at scale has a learning curve, just like any powerful technology. The way to deal with that is not to hammer the technology into a different shape, but to get to know it, understand it, and respect it.

100 words 051

I’ve been thinking about what Harry said recently about logic in CSS. I think he makes an astute observation.

You can think of each part of a selector as a condition:

condition { }

That translates to code like:

if (condition) { }

So if you have a CSS selector like this:

condition1 condition2 condition3 { }

…it translates to code like this:

if (condition1) {
    if (condition2) {
        if (condition3) {
        }
    }
}

That doesn’t feel very elegant, even in its simpler form:

if (condition1 && condition2 && condition3) { }

I like Harry’s rule of thumb:

Think of your selectors as mini programs: Every time you nest or qualify, you are adding an if statement; read these ifs out loud to yourself to try and keep your selectors sane.
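
To see that rule of thumb in action, take a made-up selector (my example, not Harry’s):

.archive ul li a.active { }

Read aloud, it becomes:

if (inside an element with the class "archive") {
    if (inside a ul) {
        if (inside an li) {
            if (it's an a with the class "active") {
            }
        }
    }
}

Four nested conditions is a lot to hold in your head; saying them out loud makes the case for trimming the selector down.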

The complexity of HTML

Baldur Bjarnason has written down his thoughts on why he thinks HTML is too complex.

Now we’re back to seeing almost the same level of complexity and messiness in most web pages as we saw in the worst days of table-hacking. The semantic elements from HTML5 are largely unused.

The reason for this, as outlined in an old email by Matthew Thomas, is down to the lack of any visible benefit from browsers:

unless there is an immediate visual or behavioural benefit to using an element, most people will ignore it.

That’s a fair point, but I think it works in the opposite direction too. I’ve seen plenty of designers and developers who are reluctant to use HTML elements because of the default browser styling. The legend element is one example. A more recent one is input type="date"—until there’s a way for authors to have more say about the generated interface, there’s going to be resistance to its usage.

Anyway, Baldur’s complaint is that HTML is increasing in complexity by adding new elements—section, article, etc.—that provide theoretical semantic value, without providing immediate visible benefits. It’s much the same argument that informed Cory Doctorow’s Metacrap essay over a decade ago. It’s a strong argument.

There’s always a bit of a chicken-and-egg situation with any kind of language extension designed to provide data structure: why should authors use the new extensions if there’s no benefit? …and why should parsers (like search engines or browsers) bother doing anything with the new extensions if nobody’s using them?

Whether it’s microformats or new HTML structural elements, the vicious circle remains the same. The solution is to try to make the barrier to entry as low as possible for authors—the parsers/spiders/browsers will follow.

I have to say, I share some of Baldur’s concern. I’ve talked before about the confusion between section and article. Providing two new elements might seem better than providing just one, but in fact it just muddies the waters and confuses authors (in my experience).

But I realise that in the grand scheme of things, I’m nitpicking. I think HTML is in pretty good shape. Baldur said “simply put, HTML is a mess,” and he’s not wrong …but HTML has always been a mess. It’s the worst mess except for all the others that have been tried. When it comes to markup, I think that “perfect” is very much the enemy of “good” (just look at XHTML2).

When I was in San Francisco back in August, I had a good ol’ chat with Tantek about markup complexity. It started when he asked me what I thought of hgroup.

I actually found quite a few use cases for hgroup in my own work …but I could certainly see why it was a dodgy solution. The way that a wrapping hgroup could change the semantic value of an h2, or h3, or whatever …that’s pretty weird, right?

And then Tantek pointed out that there are a number of HTML elements that depend on their wrapper for meaning …and that’s a level of complexity that doesn’t sit well with HTML.

For example, a p is always a paragraph, and em is always emphasis …but li is only a list item if it’s wrapped in ul or ol, and tr is only a table row if it’s wrapped in a table.

(Interestingly, these are the very same areas where browsers will automatically adjust the DOM—generating a missing wrapper, like the tbody inside a table, if the author doesn’t include it. It’s like the complexity is damage that the browsers have to route around.)
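
You can see that browser-generated markup for yourself with a quick sketch in the console (my own illustration):

const el = document.createElement('div');
el.innerHTML = '<table><tr><td>hello</td></tr></table>';
console.log(el.innerHTML);
// "<table><tbody><tr><td>hello</td></tr></tbody></table>"
// The browser has quietly generated the tbody wrapper itself.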

I had never thought of that before, but the idea has really stuck with me. Now it smells bad to me that it isn’t valid to have a figcaption unless it’s within a figure.

These context-dependent elements increase the learning curve of HTML, and that, in my own opinion, is not a good thing. I like to think that HTML should be easy to learn—and that the web would not have been a success if its lingua franca hadn’t been grokkable by mere mortals.

Tantek half-joked about writing HTML: The Good Parts. The more I think about it, the more I think it’s a pretty good idea. If nothing else, it could make us more sensitive to any proposed extensions to HTML that would increase its complexity without a corresponding increase in value.