Be progressive

Aaron wrote a great post a little while back called A Fundamental Disconnect. In it, he points to a worldview amongst many modern web developers, who see JavaScript as a universally available technology in web browsers. They are, in effect, viewing a browser’s JavaScript engine as a runtime environment, and treating web development as no different from any other kind of software development.

The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.

Treating JavaScript support in “the browser” as a known quantity is as much of a consensual hallucination as deciding that all viewports are 960 pixels wide. Even that phrasing—“the browser”—shows a framing that’s at odds with the reality of developing for the web; we don’t have to think about “the browser”, we have to think about browsers:

Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.

While we might like to think that browsers have all reached a certain level of equilibrium, as Aaron puts it, “the Web is messy”:

And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.

Please don’t think that either Aaron or I are saying that you shouldn’t use JavaScript. Far from it! It’s simply a matter of how you wield the power of JavaScript. If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold. But if you start by building on a classic server/client model, and then enhance with JavaScript, you can have your cake and eat it too. Modern browsers get a smooth, rich experience. Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.

Aaron makes the case that, while we cannot control which browsers people will use, we can control the server environment.

Stuart takes issue with that assertion in a post called Fundamentally Disconnected. In it, he points out that the server isn’t quite the controlled environment that Aaron claims:

Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue.

It’s true enough that the server isn’t some rock-solid, never-changing environment. Anyone who’s ever had to install patches or update programming languages knows this. But at least it’s one single environment …whereas the web has an overwhelming multitude of environments; one for every browser/OS/device combination.

Stuart finishes on a stirring note:

The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed.

However, he wraps up by saying that…

…the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

In a post called Missed Connections, Aaron pushes back against that last point:

The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.

While JavaScript may technically be available and consistently-implemented across most devices used to access our sites nowadays, we do not control how, when, or even if that JavaScript is ultimately executed.

Stuart responds in a post called Reconnecting (and, by the way, how great is it to see this kind of thoughtful blog-to-blog discussion going on?).

I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work.

But here’s the problem with progressively enhancing from server functionality to a rich client:

A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment.

Good point.

Now, at this juncture, I could point out that—by using progressive enhancement—you can still have the best of both worlds. Stuart has anticipated that:

It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

Ah, there’s the rub!

When I’ve extolled the virtues of progressive enhancement in the past, the pushback I most often receive is on this point. Surely it’s wasteful to build something that works on the server and then reimplement much of it on the client?

Personally, I try not to completely reinvent all the business logic that I’ve already figured out on the server, and then rewrite it all in JavaScript. I prefer to use JavaScript—and specifically Ajax—as a dumb waiter, shuffling data back and forth between the client and server, where the real complexity lies.
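
Here’s a rough sketch of that dumb waiter pattern (the form id, the endpoint, and the fragment-swapping are all made up for illustration; the server is assumed to already handle the same form POST without any JavaScript):

```javascript
// Enhance an ordinary HTML form that already works with a
// full page refresh. The server keeps all the business logic.
var form = document.getElementById('comment-form');

if (form && window.FormData && window.XMLHttpRequest) {
  form.addEventListener('submit', function (event) {
    // Intercept the submission: same data, same endpoint,
    // just without the page refresh.
    event.preventDefault();
    var xhr = new XMLHttpRequest();
    xhr.open('POST', form.action);
    // A common convention: tell the server this is Ajax so it
    // can respond with a fragment instead of a full page.
    xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    xhr.onload = function () {
      if (xhr.status >= 200 && xhr.status < 300) {
        // Hypothetical container element for the returned fragment.
        document.getElementById('comments').innerHTML = xhr.responseText;
        form.reset();
      } else {
        form.submit(); // fall back to the full page refresh
      }
    };
    xhr.onerror = function () {
      form.submit(); // network failure: fall back likewise
    };
    xhr.send(new FormData(form));
  });
}
```

If the JavaScript never arrives, nothing here runs and the form still posts the old-fashioned way.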

I also think that building in this way will take longer …at first. But then on the next project, it takes less time. And on the project after that, it takes less time again. From that perspective, it’s similar to switching from tables for layout to using CSS, or switching from building fixed-width sites to responsive design: the initial learning curve is steep, but then it gets easier over time, until it simply becomes normal.

But fundamentally, Stuart is right. Developers don’t like to violate the DRY principle: Don’t Repeat Yourself. Writing code for the server environment, and then writing very similar code for the browser—I mean browsers—is a bad code smell.

Here’s the harsh truth: building websites with progressive enhancement is not convenient.

Building a client-side web thang that requires JavaScript to work is convenient, especially if you’re using a framework like Angular or Ember. In fact, that’s the main selling point of those frameworks: developer convenience.

The trade-off is that to get that level of developer convenience, you have to sacrifice the universal reach that the web provides, and limit your audience to the browsers that can run a pre-determined level of JavaScript. Many developers are quite willing to make that trade-off.

Developer convenience is a very powerful and important force. I wish that progressive enhancement could provide the same level of developer convenience offered by Angular and Ember, but right now, it doesn’t. Instead, its benefits are focused on the end user, often at the expense of the developer.

Personally, I’m willing to take that hit. I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time. But I absolutely understand the mindset of developers who choose otherwise.

But perhaps there’s a way to cut this Gordian knot. What if you didn’t need to write your code twice? What if you could write code for the server and then run the very same code on the client?

This is the promise of isomorphic JavaScript. It’s a terrible name for a great idea.

For me, this is the most exciting aspect of Node.js:

With Node.js, a fast, stable server-side JavaScript runtime, we can now make this dream a reality. By creating the appropriate abstractions, we can write our application logic such that it runs on both the server and the client — the definition of isomorphic JavaScript.
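
To make that concrete, here’s a minimal sketch of a module that runs unchanged in both environments (the file name and the validation rule are my own invention, not taken from any of these posts):

```javascript
// validate-comment.js: one file, two environments.
(function (root, factory) {
  if (typeof module === 'object' && module.exports) {
    // Server: loaded by Node.js with require()
    module.exports = factory();
  } else {
    // Client: loaded by browsers with a script element
    root.validateComment = factory();
  }
}(this, function () {
  // The shared application logic, written once.
  return function validateComment(text) {
    return typeof text === 'string' &&
           text.trim().length > 0 &&
           text.length <= 500;
  };
}));
```

The same check can then reject a bad comment instantly in the client and authoritatively on the server.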

Some big players are looking into this idea. It’s the thinking behind Airbnb’s Rendr.

Interestingly, the reason why many large sites are investigating this approach isn’t about universal access—quite often they have separate siloed sites for different device classes. Instead, it’s about performance. The problem with having all of your functionality wrapped up in JavaScript on the client is that, until all of that JavaScript has loaded, the user gets absolutely nothing. Compare that to rendering an HTML document sent from the server, and the perceived performance difference is very noticeable.

Here’s the ideal situation:

  1. A browser requests a URL.
  2. The server sends HTML, which renders quickly, along with some mustard-cutting JavaScript.
  3. If the browser doesn’t cut the mustard, or JavaScript fails, fall back to full page refreshes.
  4. If the browser does cut the mustard, keep all the interaction in the client, just like a single page app.

With Node.js on the server, and JavaScript in the client, steps 3 and 4 could theoretically use the same code.
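
The mustard-cutting in step 2 can be a few lines of capability testing in the head of the document. This particular combination of tests is the one popularised by BBC News; the bundle URL is hypothetical:

```javascript
// Cut the mustard: only capable browsers get the enhancements.
if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  var script = document.createElement('script');
  script.src = '/js/enhancements.js'; // hypothetical bundle
  script.async = true;
  document.head.appendChild(script);
}
// No else needed: older browsers simply keep using links and
// forms with full page refreshes, as in step 3.
```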

So why aren’t we seeing more of these holy-grail apps that achieve progressive enhancement without code duplication?

Well, partly it’s back to that question of controlling the server environment.

This is something that Nicholas Zakas tackled a year ago when he wrote about Node.js and the new web front-end. He proposes a third layer that sits between the business logic and the rendered output. By applying the idea of isomorphic JavaScript, this interface layer could be run on the server (as Node.js) or on the client (as JavaScript), while still allowing you to have the rest of your server environment running whatever programming language works for you.
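
In the broadest strokes, that interface layer might be nothing more than a function that turns data into HTML, oblivious to whether it’s called from Node.js or from a browser. Every name in this sketch is hypothetical; it’s the shape of the idea, not Nicholas’s actual code:

```javascript
// ui-layer.js: the interface layer between business logic and
// rendered output. The business logic stays behind an HTTP API
// written in whatever language suits you.
function renderProfile(user) {
  return '<article class="profile">' +
         '<h1>' + escapeHTML(user.name) + '</h1>' +
         '<p>' + escapeHTML(user.bio) + '</p>' +
         '</article>';
}

function escapeHTML(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

if (typeof module === 'object' && module.exports) {
  // Server: a Node.js front-end requires this module and sends
  // the rendered HTML down the wire as the initial page.
  module.exports = renderProfile;
} else {
  // Client: the same function re-renders in place when the
  // user interacts, single-page-app style.
  window.renderProfile = renderProfile;
}
```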

It’s still early days for this kind of thinking, and there are lots of stumbling blocks—trying to write JavaScript that can be executed on both the server and the client isn’t so easy. But I’m pretty excited about where this could lead. I love the idea of building in a way that provides the performance and universal access of progressive enhancement, while also providing the developer convenience of JavaScript frameworks.

In the meantime, building with progressive enhancement may have to involve a certain level of inconvenience and duplication of effort. It’s a price I’m willing to pay, but I wish I didn’t have to. And I totally understand that others aren’t willing to pay that price.

But while the mood might currently seem to be in favour of using monolithic JavaScript frameworks to build client-side apps that rely on JavaScript in browsers, I think that the tide might change if we started to see poster children for progressive enhancement.

Three years ago, when I was trying to convince clients and fellow developers that responsive design was the way to go, it was a hard sell. It reminded me of trying to sell the benefits of using web standards instead of using tables for layout. Then, just as Doug’s redesign of Wired and Mike’s redesign of ESPN helped sell the idea of CSS for layout, the Filament Group’s work on the Boston Globe made it a lot easier to sell the idea of responsive design. Then Paravel designed a responsive Microsoft homepage and the floodgates opened.

Now …who wants to do the same thing for progressive enhancement?