Journal tags: developer

Backup

I’m standing on a huge stage in a giant hangar-like room already filled with at least a thousand people. More are arriving. I’m due to start speaking in a few minutes. But there’s a problem with my laptop. It connects to the external screen, then disconnects, then connects, then disconnects. The technicians are on the stage with me, quickly swapping out adaptors and cables as they try to figure out a fix.

This is a pretty standard stress dream for me. Except this wasn’t a dream. This was happening for real at the giant We Are Developers World Congress in Berlin last week.

In the run-up to the event, the organisers had sent out emails about providing my slide deck ahead of time so it could go on a shared machine. I understand why this makes life easier for the people running the event, but it can be a red flag for speakers. It’s never quite the same as presenting from your own laptop with its familiar layout of the presentation display in Keynote.

Fortunately, the organisers also said that I could present from my own laptop if I wanted to, so that’s what I opted for.

One week before the talk in Berlin I was in Amsterdam for CSS Day. During a break between talks I was catching up with Michelle. We ended up swapping conference horror stories around technical issues (prompted by some of our fellow speakers having issues with Keynote on the brand new M1 laptops).

Michelle told me about a situation where she was supposed to be presenting from her own laptop, but because of last-minute technical issues, all the talks were being transferred to a single computer via USB sticks.

“But the fonts!” I said. “Yes”, Michelle responded. Even though she had put the fonts on the USB stick, things got muddled in the rush. If you open the Keynote file before installing the fonts, Keynote will perform font substitution and then it’s too late. This is exactly what happened with Michelle’s code examples, messing them up.

“You know”, I said, “I was thinking about having a back-up version of my talks that’s made entirely out of images—export every slide as an image, then make a new deck by importing all those images.”

“I’ve done that”, said Michelle. “But there isn’t a quick way to do it.”

I was still thinking about our conversation when I was on the Eurostar train back to England. I had plenty of time to kill with spotty internet connectivity. And that huge Berlin event was less than a week away.

I opened up the Keynote file of the Berlin presentation. I selected File, Export to, Images.

Then I created a new blank deck ready for the painstaking work that Michelle had warned me about. I figured I’d have to drag in each image individually. The presentation had 89 slides.

But I thought it was worth trying a shortcut first. I selected all of the images in Finder. Then I dragged them over to the far left column in Keynote, the one that shows the thumbnails of all the slides.

It worked!

Each image was now its own slide. I selected all 89 slides and applied my standard transition: a one second dissolve.

That was pretty much it. I now had a version of my talk that had no fonts whatsoever.

If you’re going to try this, it works best if you don’t have too many transitions within slides. Like, let’s say you’ve got three words that you introduce—by clicking—one by one. You could have one slide with all three words, each one with its own build effect. But the other option is to have three slides: each one like the previous slide but with one more word added. If you use that second technique, then the exporting and importing will work smoothly.

Oh, and if you have lots and lots of notes, you’ll have to manually copy them over. My notes tend to be fairly minimal—a few prompts and the occasional time check (notes that say “5 minutes” or “10 minutes” so I can gauge how my pacing is going).

Back to that stage in Berlin. The clock is ticking. My laptop is misbehaving.

One of the other speakers who will be on later in the day was hoping to test his laptop too. It’s Håkon. His presentation includes in-browser demos that won’t work on a shared machine. But he doesn’t get a chance to test his laptop just yet—my little emergency has taken precedence.

“Luckily”, I tell him, “I’ve got a backup of my presentation that’s just images to avoid any font issues.” He points out the irony: we spent years battling against the practice of text-as-images on the web and now here we are using that technique once again.

My laptop continues to misbehave. It connects, it disconnects, connects, disconnects. We’re going to have to run the presentation from the house machine. I’m handed a USB stick. I put my images-only version of the talk on there. I’m handed a clicker (I can’t use my own clicker with the house machine). I’m quickly ushered backstage while the MC announces my talk, a few minutes behind schedule.

It works. It feels a little strange not being able to look at my own laptop, but the on-stage monitors have the presentation display including my notes. The unfamiliar clicker feels awkward but hopefully nobody notices. I deliver my talk and it seems to go over well.

I think I’ll be making image-only versions of all my talks from now on. Hopefully I won’t ever need them, but just knowing that the backup is there is reassuring.

Mind you, if you’re the kind of person who likes to fiddle with your slides right up until the moment of presenting, then this technique won’t be very useful for you. But for me, not being able to fiddle with my slides after a certain point is a feature, not a bug.

User agents

I was on the podcast A Question Of Code recently. It was fun! The podcast is aimed at people who are making a career change into web development, so it’s right up my alley.

I sometimes get asked about what a new starter should learn. On the podcast, I mentioned a post I wrote a while back with links to some great resources and tutorials. As I said then:

For web development, start with HTML, then CSS, then JavaScript (and don’t move on to JavaScript too quickly—really get to grips with HTML and CSS first).

That’s assuming you want to be a good well-rounded web developer. But it might be that you need to get a job as quickly as possible. In that case, my advice would be very different. I would advise you to learn React.

Believe me, I take no pleasure in giving that advice. But given the reality of what recruiters are looking for, knowing React is going to increase your chances of getting a job (something that’s reflected in the curricula of coding schools). And it’s always possible to work backwards from React to the more fundamental web technologies of HTML, CSS, and JavaScript. I hope.

Regardless of your initial route, what’s the next step? How do you go from starting out in web development to being a top-notch web developer?

I don’t consider myself to be a top-notch web developer (far from it), but I am very fortunate in that I’ve had the opportunity to work alongside some tippety-top-notch developers at Clearleft: Trys, Cassie, Danielle, Mark, Graham, Charlotte, Andy, and Natalie.

They—and other top-notch developers I’m fortunate to know—have something in common. They prioritise users. Sure, they’ll all have their favourite technologies and specialised areas, but they don’t lose sight of who they’re building for.

When you think about it, there’s quite a power imbalance between users and developers on the web. Users can—ideally—choose which web browser to use, and maybe make some preference changes if they know where to look, but that’s about it. Developers dictate everything else—the technology that a website will use, the sheer amount of code shipped over the network to the user, whether the site will be built in a fragile or a resilient way. Users are dependent on developers, but developers don’t always act in the best interests of users. It’s a classic example of the principal-agent problem:

The principal–agent problem, in political science and economics (also known as agency dilemma or the agency problem) occurs when one person or entity (the “agent”), is able to make decisions and/or take actions on behalf of, or that impact, another person or entity: the “principal”. This dilemma exists in circumstances where agents are motivated to act in their own best interests, which are contrary to those of their principals, and is an example of moral hazard.

A top-notch developer never forgets that they are an agent, and that the user is the principal.

But is it realistic to expect web developers to be so focused on user needs? After all, there’s a whole separate field of user experience design that specialises in this focus. It hardly seems practical to suggest that a top-notch developer needs to first become a good UX designer. There’s already plenty to focus on when it comes to just the technology side of front-end development.

So maybe this is too simplistic a way of defining the principal-agent relationship between users and developers:

user :: developer

There’s something that sits in between, mediating that relationship. It’s a piece of software that in the world of web standards is even referred to as a “user agent”: the web browser.

user :: web browser :: developer

So if making the leap to understanding users seems too much of a stretch, there’s an intermediate step. Get to know how web browsers work. As a web developer, if you know what web browsers “like” and “dislike”, you’re well on the way to making great user experiences. If you understand the pain points for browsers when they’re parsing and rendering your code, you’ve got a pretty good proxy for understanding the pain points that your users are experiencing.

Priorities

I recently wrote about a web-specific example of a general principle for choosing the right tool for the job:

JavaScript should only do what only JavaScript can do.

I was—yet again—talking about appropriateness. Use the right technology for the task at hand. Here’s the example I gave:

It feels “wrong” when a powerful client-side JavaScript framework is applied to something that could be accomplished using HTML. Making a blog that’s a single page app is over-engineering.

Surprisingly, I got some pushback on this. Šime Vidas wrote:

Based on my experience, this is not necessarily the case.

Going from server-side rendering and progressive enhancement via JS to a single-page app powered by a JS framework was an enormous reduction in complexity for me (so the opposite of over-engineering).

(Emphasis mine.) He goes on to say:

My main concerns are ease of use & maintainability. If you get those things right, you’re good and it’s not over-engineering.

There’s no doubt that maintainability is a desirable goal. And ease of use for the developer is also important …but I think they pale in comparison to ease of use for the end user.

To be fair, the specific use-case I mentioned was making a blog. And a blog is a personal thing. You can do whatever the heck you like on your own website and don’t let anyone tell you otherwise.

So I probably chose a poor example to illustrate my point. I was thinking more about when you’re making websites for a living. You’re being paid money to make something available on the web. In that situation, I strongly believe that user needs should win out over developer convenience.

I wrote about this recently:

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time.

That’s why I responded to Šime, saying:

Your main concern should be user needs—not your own.

When I talk about over-engineering, I’m speaking from the perspective of end users, not developers.

Before considering your ease of use, and maintainability, consider users first.

In fairness to Šime, he’s being very open and honest about his priorities. I admire that. I’ve seen too many developers try to provide user experience justifications for decisions made for developer convenience. Once again I recommend Alex’s excellent article, The “Developer Experience” Bait-and-Switch:

The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated agreement is that developers share all of the same goals with the same intensity as end users and even managers. This is not true.

Now I worry I wasn’t specific enough when I talked about choosing appropriate technology:

Appropriateness is something I keep coming back to when it comes to evaluating web technologies. I don’t think there are good tools and bad tools; just tools that are appropriate or inappropriate for the task at hand.

I should have made it clear that I was talking about what is appropriate or inappropriate for users. I think I made the mistake of assuming that this was obvious, and didn’t need saying. I’ll try not to make that mistake again.

There’s a whole group of tools where this point isn’t even relevant—build tools, task runners, version control …anything that never directly touches the end user; use whatever works for you. But if you’re making decisions around HTML, ARIA, CSS, or JavaScript, then appropriateness for the end user should take precedence.

If you’re in that situation—you are being paid money to make websites, and you are making technology decisions—I urge you to remember Charlie’s words: it isn’t about you.

Split

When I talk about evaluating technology for front-end development, I like to draw a distinction between two categories of technology.

On the one hand, you’ve got the raw materials of the web: HTML, CSS, and JavaScript. This is what users will ultimately interact with.

On the other hand, you’ve got all the tools and technologies that help you produce the HTML, CSS, and JavaScript: pre-processors, post-processors, transpilers, bundlers, and other build tools.

Personally, I’m much more interested and excited by the materials than I am by the tools. But I think it’s right and proper that other developers are excited by the tools. A good balance of both is probably the healthiest mix.

I’m never sure what to call these two categories. Maybe the materials are the “external” technologies, because they’re what users will interact with. Whereas all the other technologies—that mostly live on a developer’s machine—are the “internal” technologies.

Another nice phrase is something I heard during Chris’s talk at An Event Apart in Seattle, when he quoted Brad, who talked about the front of the front end and the back of the front end.

I’m definitely more of a front-of-the-front-end kind of developer. I have opinions on the quality of the materials that get served up to users; the output should be accessible and performant. But I don’t particularly care about the tools that produced those materials on the back of the front end. Use whatever works for you (or whatever works for your team).

As a user-centred developer, my priority is doing what’s best for end users. That’s not to say I don’t value developer convenience. I do. But I prioritise user needs over developer needs. And in any case, those two needs don’t even come into conflict most of the time. Like I said, from a user’s point of view, it’s irrelevant what text editor or version control system you use.

Now, you could make the argument that anything that is good for developer convenience is automatically good for user experience because faster, more efficient development should result in better output. While that’s true in theory, I highly recommend Alex’s post, The “Developer Experience” Bait-and-Switch.

Where it gets interesting is when a technology that’s designed for developer convenience is made out of the very materials being delivered to users. For example, a CSS framework like Bootstrap is made of CSS. That’s different to a tool like Sass which outputs CSS. Whether or not a developer chooses to use Sass is irrelevant to the user—the final output will be CSS either way. But if a developer chooses to use a CSS framework, that decision has a direct impact on the user experience. The user must download the framework in order for the developer to get the benefit.
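To make that distinction concrete, here’s a minimal sketch (the filenames are hypothetical). From the user’s point of view, the first case is indistinguishable from hand-authored CSS; the second ships the whole framework down the wire:

<!-- Sass sits at the back of the front end: whatever tooling produced
     this stylesheet, the user only ever downloads the compiled CSS. -->
<link rel="stylesheet" href="/css/styles.css">

<!-- A CSS framework sits at the front of the front end: the user must
     download the framework itself for the developer to get the benefit. -->
<link rel="stylesheet" href="/css/bootstrap.min.css">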

So whereas Sass sits at the back of the front end—where I don’t care what you use—Bootstrap sits at the front of the front end. For tools like that, I don’t think saying “use whatever works for you” is good enough. It’s got to be weighed against the cost to the user.

Historically, it’s been a similar story with JavaScript libraries. They’re written in JavaScript, and so they’re going to be executed in the browser. If a developer wanted to use jQuery to make their life easier, the user paid the price in downloading the jQuery library.

But I’ve noticed a welcome change with some of the bigger JavaScript frameworks. Whereas the initial messaging around frameworks like React touted the benefits of state management and the virtual DOM, I feel like that’s not as prevalent now. You’re much more likely to hear people—quite rightly—talk about the benefits of modularity and componentisation. If you combine that with the rise of Node—which means that JavaScript is no longer confined to the browser—then these frameworks can move from the front of the front end to the back of the front end.

We’ve certainly seen that at Clearleft. We’ve worked on multiple React projects, but in every case, the output was server-rendered. Developers get the benefit of working with a tool that helps them. Users don’t pay the price.

For me, this question of whether a framework will be used on the client side or the server side is crucial.

Let me tell you about a Clearleft project that sticks in my mind. We were working with a big international client on a product that was going to be rolled out to students and teachers in developing countries. This was right up my alley! We did plenty of research into network conditions and typical device usage. That then informed a tight performance budget. Every design decision—from web fonts to images—was informed by that performance budget. We were producing lean, mean markup, CSS, and JavaScript. But we weren’t the ones implementing the final site. That was being done by the client’s offshore software team, and they insisted on using React. “That’s okay”, I thought. “React can be used server-side so we can still output just what’s needed, right?” Alas, no. These developers did everything client side. When the final site launched, the log-in screen alone required megabytes of JavaScript just to render a form. It was, in my opinion, entirely unfit for purpose. It still pains me when I think about it.

That was a few years ago. I think that these days it has become a lot easier to make the decision to use a framework on the back of the front end. Like I said, that’s certainly been the case on recent Clearleft projects that involved React or Vue.

It surprises me, then, when I see the question of server rendering or client rendering treated almost like an implementation detail. It might be an implementation detail from a developer’s perspective, but it’s a key decision for the user experience. The performance cost of putting your entire tech stack into the browser can be enormous.

Alex Sanders from the development team at The Guardian published a post recently called Revisiting the rendering tier. In it, he describes how they’re moving to React. Now, if this were a move to client-rendered React, that would make a big impact on the user experience. The thing is, I couldn’t tell from the article whether React was going to be used in the browser or on the server. The article talks about “rendering”—which is something that browsers do—and “the DOM”—which is something that only exists in browsers.

So I asked. It turns out that this plan is very much about generating HTML and CSS on the server before sending it to the browser. Excellent!

With that question answered, I’m cool with whatever they choose to use. In this case, they’re choosing to use CSS-in-JS (although, to be pedantic, there’s no C anymore so technically it’s SS-in-JS). As long as the “JS” part is JavaScript on a server, then it makes no difference to the end user, and therefore no difference to me. Not my circus, not my monkeys. For users, the end result is the same whether styling is applied via a selector in an external stylesheet or, for example, via an inline style declaration (and in some situations, a server-rendered CSS-in-JS solution might be better for performance). And so, as a user-centred developer, this is something that I don’t need to care about.
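As a rough sketch of why it makes no difference to the user, both of these produce the same end result; the second is the kind of output a server-rendered CSS-in-JS solution might generate (the hashed class name is invented for illustration):

<!-- Styling applied via a selector in an external stylesheet: -->
<link rel="stylesheet" href="/css/styles.css">
<p class="warning">Mind the gap.</p>

<!-- The same styling applied via a generated class and an inlined style
     rule, as a server-rendered CSS-in-JS solution might output: -->
<style>.css-1x2y3z { color: red; }</style>
<p class="css-1x2y3z">Mind the gap.</p>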

Except…

I have misgivings. But just to be clear, these misgivings have nothing to do with users. My misgivings are entirely to do with another group of people: the people who make websites.

There’s a second-order effect. By making React—or even JavaScript in general—a requirement for styling something on a web page, the barrier to entry is raised.

At least, I think that the barrier to entry is raised. I completely acknowledge that this is a subjective judgement. In fact, the reason why a team might decide to make JavaScript a requirement for participation might well be because they believe it makes it easier for people to participate. Let me explain…

It wasn’t that long ago that devs coming from a Computer Science background were deriding CSS for its simplicity, complaining that “it’s broken” and turning their noses up at it. That rhetoric, thankfully, is waning. Nowadays they’re far more likely to acknowledge that CSS might be simple, but it isn’t easy. Concepts like the cascade and specificity are real head-scratchers, and any prior knowledge from imperative programming languages won’t help you in this declarative world—all your hard-won experience and know-how isn’t fungible. Instead, it seems as though all this cascading and specificity is butchering the modularity of your nicely isolated components.

It’s no surprise that programmers with this kind of background would treat CSS as damage and find ways to route around it. The many flavours of CSS-in-JS are testament to this. From a programmer’s point of view, this solution has made things easier. Best of all, as long as it’s being done on the server, there’s no penalty for end users. But now the price is paid in the diversity of your team. In order to participate, a Computer Science programming mindset is now pretty much a requirement. For someone coming from a more declarative background—with really good HTML and CSS skills—everything suddenly seems needlessly complex. And as Tantek observed:

Complexity reinforces privilege.

The result is a form of gatekeeping. I don’t think it’s intentional. I don’t think it’s malicious. It’s being done with the best of intentions, in pursuit of efficiency and productivity. But these code decisions are reflected in hiring practices that exclude people with different but equally valuable skills and perspectives.

Rachel describes HTML, CSS and our vanishing industry entry points:

If we make it so that you have to understand programming to even start, then we take something open and enabling, and place it back in the hands of those who are already privileged.

I think there’s a comparison here with toxic masculinity. Toxic masculinity is obviously terrible for women, but it’s also really shitty for men in the way it stigmatises any male behaviour that doesn’t fit its worldview. Likewise, if the only people your team is interested in hiring are traditional programmers, then those programmers are going to resent having to spend their time dealing with semantic markup, accessibility, styling, and other disciplines that they never trained in. Heydon correctly identifies this as reluctant gatekeeping:

By assuming the role of the Full Stack Developer (which is, in practice, a computer scientist who also writes HTML and CSS), one takes responsibility for all the code, in spite of its radical variance in syntax and purpose, and becomes the gatekeeper of at least some kinds of code one simply doesn’t care about writing well.

This hurts everyone. It’s bad for your team. It’s even worse for the wider development community.

Last year, I was asked “Is there a fear or professional challenge that keeps you up at night?” I responded:

My greatest fear for the web is that it becomes the domain of an elite priesthood of developers. I firmly believe that, as Tim Berners-Lee put it, “this is for everyone.” And I don’t just mean it’s for everyone to use—I believe it’s for everyone to make as well. That’s why I get very worried by anything that raises the barrier to entry to web design and web development.

I’ve described a number of dichotomies here:

  • Materials vs. tools,
  • Front of the front end vs. back of the front end,
  • User experience vs. developer experience,
  • Client-side rendering vs. server-side rendering,
  • Declarative languages vs. imperative languages.

But the split that worries me the most is this:

  • The people who make the web vs. the people who are excluded from making the web.

Writing for hiring

Cassie joined Clearleft as a junior front-end developer last year. It’s really wonderful having her around. It’s a win-win situation: she’s enthusiastic and eager to learn; I’m keen to help her skill up in any way I can. And it’s working out great for the company—she has already demonstrated that she can produce quality HTML and CSS.

I’m very happy about Cassie’s success, not just on a personal level, but also from a business perspective. Hiring people into junior roles—when you’ve got the time and ability to train them—is an excellent policy. Hiring Charlotte back in 2014 was Clearleft’s first foray into hiring for a junior front-end dev position and it was a huge success. Cassie is demonstrating that it wasn’t just a fluke.

Alas, we can’t only hire junior developers. We’ve got a lot of work in the pipeline right now and we’re going to need a full-time seasoned developer who can hit the ground running. That’s why Clearleft is recruiting for a senior front-end developer.

As lead developer, Danielle will make the hiring decision, but because she’s so busy on project work right now—hence the need to hire more people—I’m trying to help her out any way I can. I offered to write the job description.

Seeing as I couldn’t just write “A clone of Danielle, please”, I had to think about what makes for a great front-end developer who uses their experience wisely. But I didn’t want to create a list of requirements, and I certainly didn’t want to create a list of specific technologies.

My first instinct was to look at other job ads and take my cue from them. But, let’s face it, most job ads are badly written, and prone to turning into laundry lists. So I decided to just write like I normally would. You know, like a human.

Here’s what I wrote. I hope it’s okay. I don’t really have much to compare it to, other than what I don’t want it to be.

Have a read of it and see what you think. And if you’re an experienced front-end developer who’d like to work by the seaside, you should apply for the role.

Needs must

I got a follow-up comment to my follow-up post about the follow-up comment I got on my original post about Google Analytics. Keep up.

I made the point that, from a front-end performance perspective, server logs have no impact whereas a JavaScript-based analytics solution must have some impact on the end user. Paul Anthony says:

Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.

Good point. Dropping one snippet of JavaScript into your front-end codebase is certainly an easier solution …easier for you, that is. The cost is passed on to your users. This is a classic example of where user needs and developer needs are in opposition. I’ve said it before and I’ll say it again:

Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.

The example of Google Analytics is pretty extreme, I’ll grant you. The cost to the user of adding that snippet of JavaScript—if you’ve configured things reasonably well—is pretty small (again, just from a performance perspective; there’s still the cost of allowing Google to track them across domains), and the cost to you of setting up a comparable analytics system based on server logs can indeed be disproportionately high. But this tension between user needs and developer needs is something I see play out again and again.

I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.

In Resilient Web Design, I documented the three-step approach I take when I’m building anything on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”

For example, suppose I were going to build a news website. The core functionality is fairly easy to identify: providing the news. Next comes the step where I choose the simplest possible technology. Now, if I were a developer who had plenty of experience building JavaScript-driven single page apps, I might conclude that the simplest route for me would be to render the news via JavaScript. But that would be a fragile starting point if I’m trying to reach as many people as possible (I might well end up building a swishy JavaScript-driven single page app in step three, but step two should almost certainly be good ol’ HTML).
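Here’s a minimal sketch of what that second step might look like for the news website (the URLs are made up). The core functionality works with nothing but HTML; step three can then enhance it for browsers that are capable of more:

<h1>Today’s news</h1>
<ul>
  <li><a href="/stories/1">First story</a></li>
  <li><a href="/stories/2">Second story</a></li>
</ul>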

Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.

I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.

There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.

But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.

That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.

Singapore

I was in Singapore last week. It was most relaxing. Sure, it’s Disneyland With The Death Penalty but the food is wonderful.

Photos: chicken rice, fishball noodles, laksa, grilled pork.

But I wasn’t just there to sample the delights of the hawker centres. I had been invited by Mozilla to join them on the opening leg of their Developer Roadshow. We assembled in the PayPal offices one evening for a rapid-fire round of talks on emerging technologies.

We got an introduction to Quantum, the new rendering engine in Firefox. It’s looking good. And fast. Oh, and we finally get support for input type="date".

But this wasn’t a product pitch. Most of the talks were by non-Mozillians working on the cutting edge of technologies. I kicked things off with a slimmed-down version of my talk on evaluating technology. Then we heard from experts in everything from CSS to VR.

The highlight for me was meeting Hui Jing and watching her presentation on CSS layout. It was fantastic! Entertaining and informative, it was presented with gusto. I think it got everyone in the room very excited about CSS Grid.

The Singapore stop was the only one I was able to make, but Hui Jing has been chronicling the whole trip. Sounds like quite a whirlwind tour. I’m so glad I was able to join in even for a portion. Thanks to Sandra and Ali for inviting me along—much appreciated.

I’ll also be speaking at Mozilla’s View Source in London in a few weeks, where I’ll be talking about building blocks of the Indie Web:

In these times of centralised services like Facebook, Twitter, and Medium, having your own website is downright disruptive. If you care about the longevity of your online presence, independent publishing is the way to go. But how can you get all the benefits of those third-party services while still owning your own data? By using the building blocks of the Indie Web, that’s how!

‘Twould be lovely to see you there.

Be progressive

Aaron wrote a great post a little while back called A Fundamental Disconnect. In it, he points to a worldview amongst many modern web developers, who see JavaScript as a universally-available technology in web browsers. They are, in effect, viewing a browser’s JavaScript engine as a runtime environment, and treating web development as no different to any other kind of software development.

The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.

Treating JavaScript support in “the browser” as a known quantity is as much of a consensual hallucination as deciding that all viewports are 960 pixels wide. Even that phrasing—“the browser”—shows a framing that’s at odds with the reality of developing for the web; we don’t have to think about “the browser”, we have to think about browsers:

Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.

While we might like to think that browsers have all reached a certain level of equilibrium, as Aaron puts it “the Web is messy”:

And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.

Please don’t think that either Aaron or I are saying that you shouldn’t use JavaScript. Far from it! It’s simply a matter of how you wield the power of JavaScript. If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold. But if you start by building on a classic server/client model, and then enhance with JavaScript, you can have your cake and eat it too. Modern browsers get a smooth, rich experience. Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.

Aaron makes the case that, while we cannot control which browsers people will use, we can control the server environment.

Stuart takes issue with that assertion in a post called Fundamentally Disconnected. In it, he points out that the server isn’t quite the controlled environment that Aaron claims:

Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue.

It’s true enough that the server isn’t some rock-solid never-changing environment. Anyone who’s ever had to install patches or update programming languages knows this. But at least it’s one single environment …whereas the web has an overwhelming multitude of environments; one for every browser/OS/device combination.

Stuart finishes on a stirring note:

The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed.

However he wraps up by saying that…

…the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

In a post called Missed Connections, Aaron pushes back against that last point:

The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.

While JavaScript may technically be available and consistently-implemented across most devices used to access our sites nowadays, we do not control how, when, or even if that JavaScript is ultimately executed.

Stuart responds in a post called Reconnecting (and, by the way, how great is it to see this kind of thoughtful blog-to-blog discussion going on?).

I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work.

But here’s the problem with progressively enhancing from server functionality to a rich client:

A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment.

Good point.

Now, at this juncture, I could point out that—by using progressive enhancement—you can still have the best of both worlds. Stuart has anticipated that:

It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

Ah, there’s the rub!

When I’ve extolled the virtues of progressive enhancement in the past, the pushback I most often receive is on this point. Surely it’s wasteful to build something that works on the server and then reimplement much of it on the client?

Personally, I try not to completely reinvent all the business logic that I’ve already figured out on the server, and then rewrite it all in JavaScript. I prefer to use JavaScript—and specifically Ajax—as a dumb waiter, shuffling data back and forth between the client and server, where the real complexity lies.
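Here’s a minimal sketch of that dumb-waiter pattern (the URL and field names are made up, and I’m using the modern fetch API for brevity). The form works everywhere with a full page refresh; the script merely shuttles the same data to the same server-side logic:

<form action="/comments" method="post">
  <label for="comment">Add a comment</label>
  <input id="comment" name="comment" type="text">
  <button type="submit">Post</button>
</form>
<script>
// Enhancement: intercept the submission and send the same data to the
// same endpoint, leaving the business logic on the server.
document.querySelector('form').addEventListener('submit', function (event) {
  event.preventDefault();
  fetch(this.action, { method: 'POST', body: new FormData(this) })
    .then(function (response) { return response.text(); })
    .then(function (html) { /* update part of the page with the response */ });
});
</script>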

I also think that building in this way will take longer …at first. But then on the next project, it takes less time. And on the project after that, it takes less time again. From that perspective, it’s similar to switching from tables for layout to using CSS, or switching from building fixed-width sites to responsive design: the initial learning curve is steep, but then it gets easier over time, until it simply becomes normal.

But fundamentally, Stuart is right. Developers don’t like to violate the DRY principle: Don’t Repeat Yourself. Writing code for the server environment, and then writing very similar code for the browser—I mean browsers—is a bad code smell.

Here’s the harsh truth: building websites with progressive enhancement is not convenient.

Building a client-side web thang that requires JavaScript to work is convenient, especially if you’re using a framework like Angular or Ember. In fact, that’s the main selling point of those frameworks: developer convenience.

The trade-off is that to get that level of developer convenience, you have to sacrifice the universal reach that the web provides, and limit your audience to the browsers that can run a pre-determined level of JavaScript. Many developers are quite willing to make that trade-off.

Developer convenience is a very powerful and important force. I wish that progressive enhancement could provide the same level of developer convenience offered by Angular and Ember, but right now, it doesn’t. Instead, its benefits are focused on the end user, often at the expense of the developer.

Personally, I’m willing to take that hit. I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time. But I absolutely understand the mindset of developers who choose otherwise.

But perhaps there’s a way to cut this Gordian knot. What if you didn’t need to write your code twice? What if you could write code for the server and then run the very same code on the client?

This is the promise of isomorphic JavaScript. It’s a terrible name for a great idea.

For me, this is the most exciting aspect of Node.js:

With Node.js, a fast, stable server-side JavaScript runtime, we can now make this dream a reality. By creating the appropriate abstractions, we can write our application logic such that it runs on both the server and the client — the definition of isomorphic JavaScript.

Some big players are looking into this idea. It’s the thinking behind AirBnB’s Rendr.

Interestingly, the reason why many large sites are investigating this approach isn’t about universal access—quite often they have separate siloed sites for different device classes. Instead it’s about performance. The problem with having all of your functionality wrapped up in JavaScript on the client is that, until all of that JavaScript has loaded, the user gets absolutely nothing. Compare that to rendering an HTML document sent from the server, and the perceived performance difference is very noticeable.

Here’s the ideal situation:

  1. A browser requests a URL.
  2. The server sends HTML, which renders quickly, along with some mustard-cutting JavaScript.
  3. If the browser doesn’t cut the mustard, or JavaScript fails, fall back to full page refreshes.
  4. If the browser does cut the mustard, keep all the interaction in the client, just like a single page app.

With Node.js on the server, and JavaScript in the client, steps 3 and 4 could theoretically use the same code.
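As an illustration, here’s roughly what that mustard-cutting test might look like (the feature checks echo the BBC’s well-known version; the filename is hypothetical):

<script>
// Cut the mustard: only load the client-side app in browsers that pass
// some basic feature tests. Everyone else keeps the full page refreshes
// from step 3, which already work.
if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  var script = document.createElement('script');
  script.src = '/js/enhancements.js';
  document.head.appendChild(script);
}
</script>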

So why aren’t we seeing more of these holy-grail apps that achieve progressive enhancement without code duplication?

Well, partly it’s back to that question of controlling the server environment.

This is something that Nicholas Zakas tackled a year ago when he wrote about Node.js and the new web front-end. He proposes a third layer that sits between the business logic and the rendered output. By applying the idea of isomorphic JavaScript, this interface layer could be run on the server (as Node.js) or on the client (as JavaScript), while still allowing you to have the rest of your server environment running whatever programming language works for you.
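A sketch of the idea (all names invented): the interface layer is just JavaScript with no dependencies on the DOM or on the server environment, so the very same function can run in Node.js or in browsers:

<script>
// Pure data in, markup out: nothing here assumes a browser or a server.
function renderStory(story) {
  return '<article><h2>' + story.title + '</h2>' +
         '<p>' + story.summary + '</p></article>';
}
// On the server (Node.js): response.end(renderStory(story));
// In browsers: element.innerHTML = renderStory(story);
</script>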

It’s still early days for this kind of thinking, and there are lots of stumbling blocks—trying to write JavaScript that can be executed on both the server and the client isn’t so easy. But I’m pretty excited about where this could lead. I love the idea of building in a way that provides the performance and universal access of progressive enhancement, while also providing the developer convenience of JavaScript frameworks.

In the meantime, building with progressive enhancement may have to involve a certain level of inconvenience and duplication of effort. It’s a price I’m willing to pay, but I wish I didn’t have to. And I totally understand that others aren’t willing to pay that price.

But while the mood might currently seem to be in favour of using monolithic JavaScript frameworks to build client-side apps that rely on JavaScript in browsers, I think that the tide might change if we started to see poster children for progressive enhancement.

Three years ago, when I was trying to convince clients and fellow developers that responsive design was the way to go, it was a hard sell. It reminded me of trying to sell the benefits of using web standards instead of using tables for layout. Then, just as Doug’s redesign of Wired and Mike’s redesign of ESPN helped sell the idea of CSS for layout, the Filament Group’s work on the Boston Globe made it a lot easier to sell the idea of responsive design. Then Paravel designed a responsive Microsoft homepage and the floodgates opened.

Now …who wants to do the same thing for progressive enhancement?

Web Components

The Extensible Web Summit is taking place in Berlin today because Web Components are that important. I wish I could be there, but I’ll make do with the live notes, the IRC channel, and the octothorpe tag.

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous. That’s probably a good sign.

Here’s what I wrote after the last TAG meetup in London:

This really is a radically new and different way of adding features to browsers. In theory, it shifts the balance of power much more to developers (who currently have to hack together everything using JavaScript). If it works, it will be A Good Thing and result in expanding HTML’s vocabulary with genuinely useful features. I fear there may be a rocky transition to this new way of thinking, and I worry about backwards compatibility, but I can’t help but admire the audacity of the plan.

And here’s what I wrote after the Edge conference:

If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once.

To explain…

The exciting thing about Web Components is that they give developers as much power as browser makers.

The frightening thing about Web Components is that they give developers as much power as browser makers.

When browser makers—and other contributors to web standards—team up to hammer out new features in HTML, they have design principles to guide them …at least in theory. First and foremost—because this is the web, not some fly-by-night “platform”—is the issue of compatibility:

Support existing content

Degrade gracefully

You can see those principles at work with newly-minted elements like canvas, audio, and video, where fallback content can be placed between the opening and closing tags so that older user agents aren’t left high and dry (which, in turn, encourages developers to start using these features long before they’re universally supported).

You can see those principles at work in the design of datalist.

You can see those principles at work in the design of new form features which make use of the fact that browsers treat unknown input types as type="text" (again, encouraging developers to start using the new input types long before they’re supported in every browser).
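Both principles are visible in a snippet like this:

<!-- Older user agents that don't understand video display the fallback
     content between the tags instead. -->
<video src="/movies/teaser.mp4" controls>
  <a href="/movies/teaser.mp4">Download the teaser</a>
</video>

<!-- Browsers that don't recognise type="date" treat this as
     type="text", so the form still works. -->
<input type="date" name="dob">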

When developers are creating new Web Components, they could apply that same amount of thought and care; Chris Scott has demonstrated just such a pattern. Switching to Web Components does not mean abandoning progressive enhancement. If anything they provide the opportunity to create whole new levels of experience.
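For example (a sketch using the custom elements API as it eventually standardised; the element name is invented), you can start with markup that works on its own and let the script upgrade it where it can:

<user-avatar>
  <img src="/images/jeremy.png" alt="Jeremy Keith">
</user-avatar>
<script>
// The image works everywhere; the custom element is purely additive.
if ('customElements' in window) {
  customElements.define('user-avatar', class extends HTMLElement {
    connectedCallback() {
      // Enhance the existing content rather than replacing it.
      this.querySelector('img').classList.add('enhanced');
    }
  });
}
</script>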

Web developers could ensure that their Web Components degrade gracefully in older browsers that don’t support Web Components (and no, “just polyfill it” is not a sustainable solution) or, for that matter, situations where JavaScript—for whatever reason—is not available.

Web developers could ensure that their Web Components are accessible, using appropriate ARIA properties.

But I fear that Sturgeon’s Law is going to dominate Web Components. The comparison that’s often cited for Web Components is the creation of jQuery plug-ins. And let’s face it, 90% of jQuery plug-ins are crap.

This wouldn’t matter so much if developers were only shooting themselves in the foot, but because of the wonderful spirit of sharing on the web, we might well end up shooting others in the foot too:

  1. I make something (to solve a problem).
  2. I’m excited about it.
  3. I share it.
  4. Others copy and paste what I’ve made.

Most of the time, that’s absolutely fantastic. But if the copying and pasting happens without critical appraisal, a lot of questionable decisions can get propagated very quickly.

To give you an example…

When Apple introduced the iPhone, it provided a mechanism to specify that a web page shouldn’t be displayed in a zoomed-out view. That mechanism, which Apple pulled out of their ass without going through any kind of standardisation process, was to use the meta element with a name of “viewport”:

<meta name="viewport" content="...">

The content attribute of a meta element takes a comma-separated list of values (think of name="keywords": you provide a comma-separated list of keywords). But in an early tutorial about the viewport value, code was provided which showed values separated with semicolons (like CSS declarations). People copied and pasted that code (which actually did work in Mobile Safari) and so every browser must support that usage:

Many other mobile browsers now support this tag, although it is not part of any web standard. Apple’s documentation does a good job explaining how web developers can use this tag, but we had to do some detective work to figure out exactly how to implement it in Fennec. For example, Safari’s documentation says the content is a “comma-delimited list,” but existing browsers and web pages use any mix of commas, semicolons, and spaces as separators.
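For the record, here’s the difference (the values are for illustration). The first form is how the attribute is designed to work; the second is the one from that early tutorial, wrong but now so widespread that browsers have to accept it:

<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="viewport" content="width=device-width; initial-scale=1">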

Anyway, that’s just one illustration of how code gets shared, copied and pasted. It’s especially crucial during the introduction of a new technology to try to make sure that the code getting passed around is of a high quality.

I feel kind of bad saying this because the introductory phase of any new technology should be a time to say “Hey, go crazy! Try stuff out! See what works and what doesn’t!” but because Web Components are so powerful I think that mindset could end up doing a lot of damage.

Web developers have been given powerful features in the past. Vendor prefixes in CSS were a powerful feature that allowed browsers to push the boundaries of CSS without creating a Tower of Babel of proprietary properties. But because developers just copied and pasted code, browser makers are now having to support prefixes that were originally scoped to different rendering engines. That’s not the fault of the browser makers. That’s the fault of web developers.

With Web Components, we are being given a lot of rope. We can either hang ourselves with it, or we can make awesome …rope …structures …out of rope this analogy really isn’t working.

I’m not suggesting we have some kind of central authority that gets to sit in judgement on which Web Components pass muster (although Addy’s FIRST principles are a great starting point). Instead I think a web of trust will emerge.

If I see a Web Component published by somebody at Paciello Group, I can be pretty sure that it will be accessible. Likewise, if Christian publishes a Web Component, it’s a good bet that it will use progressive enhancement. And if any of the superhumans at Filament Group share a Web Component, it’s bound to be accessible, performant, and well thought-out.

Because—as is so often the case on the web—it’s not really about technologies at all. It’s about people.

And it’s precisely because it’s about people that I’m so excited about Web Components …and simultaneously so nervous about Web Components.

Relations

When I was writing about browser-developer relations yesterday, I took this little dig at Safari:

Apple, of course, dodges the issue entirely by having absolutely zero developer relations when it comes to their browser.

A friend of mine who works at Apple took me to task about this on Twitter (not in the public timeline, of course, but by direct message). I was told I was being unfair. After all, wasn’t I aware of Vicki Murley, Safari Technologies Evangelist? I had to admit that I wasn’t.

“What’s her URL?” I asked.

“URL?”

“Of her blog.”

“She doesn’t have one.”

That might explain why I hadn’t heard of her. Nor have I seen her at any conferences; not at the Browser Wars panels at South by Southwest, nor at the browser panels at Mobilism.

The Safari Technologies Evangelist actually does speak at one conference: WWDC. And the videos from that conference are available online …if you sign on the dotted line.

Now, I’m not saying that being in developer relations for a browser vendor means that you must blog or must go to conferences. But some kind of public visibility is surely desirable, right? Not at Apple.

I remember a couple of years back, meeting the Safari evangelist for the UK. He came down to Brighton to have lunch with me and some of the other Clearlefties. I remember telling him that I could put him in touch with the organisers of some mobile-focused conferences because he’d be the perfect speaker.

“Yeah,” he said, “I’m not actually allowed to speak at conferences.”

An evangelist who isn’t allowed to evangelise. That seems kind of crazy to me …and I can only assume that it’s immensely frustrating for them. But in the case of Apple, we tend to just shrug our shoulders and say, “Oh, well. That’s Apple. That’s just the way it is.”

Back when I was soliciting questions for this year’s browser panel at Mobilism, Remy left a little rant that began:

When are we, as a web development community, going to stop giving Apple a free fucking pass? They’re consistently lacking in the open discussion into improving the gateway to the web: the browser.

And he ended:

Even the mighty PPK who tells entire browser vendors “fuck you”, doesn’t call Apple out, allowing them to slither on. Why is it we continue to allow Apple to get away with it? And can this ever change?

When I next saw Remy, I chuckled and said something along the usual lines of “Hey, isn’t that just the way it is at Apple?” And then Remy told me something that made me rethink my defeatist accepting attitude.

He reminded me about the post on Daring Fireball where John describes the sneak peek he was given of Mountain Lion:

But this, I say, waving around at the room, this feels a little odd. I’m getting the presentation from an Apple announcement event without the event. I’ve already been told that I’ll be going home with an early developer preview release of Mountain Lion. I’ve never been at a meeting like this, and I’ve never heard of Apple seeding writers with an as-yet-unannounced major update to an operating system. Apple is not exactly known for sharing details of as-yet-unannounced products, even if only just one week in advance. Why not hold an event to announce Mountain Lion — or make the announcement on apple.com before talking to us?

That’s when Schiller tells me they’re doing some things differently now.

And that, said Remy, is exactly why now is the time to start pushing back against Apple’s opaque developer relations strategy when it comes to Safari: they’re doing some things differently now.

He’s right.

Apple’s culture of secrecy has served them very, very well for some things—like hardware—but it’s completely at odds with the spirit of the web. That culture clash is most evident with Safari: not just a web browser, but a web browser built on the open-source WebKit platform.

I’m sure that Vicki Murley is great at her job. But her job will remain limited as long as she is hampered by the legacy of Apple’s culture.

That culture of secrecy is not written in stone. It can change. It should change. And the time for that change is now.

Darwinian webolution

Odeo have released an embedded recorder that you can add to your own webpages.

Del.icio.us now offers private bookmarks.

Flickr now marks up profiles using the hCard microformat.

viewing source on my Flickr profile
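For anyone unfamiliar with hCard, it reuses existing HTML with agreed class names; a profile marked up that way looks something like this simplified sketch (the details are invented):

<div class="vcard">
  <img class="photo" src="/buddyicons/adactio.jpg" alt="">
  <a class="url fn" href="https://www.flickr.com/people/adactio/">Jeremy Keith</a>
  <span class="adr"><span class="locality">Brighton</span></span>
</div>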

Something that became very clear—both at the Carson Workshops Summit and at the many web app panels at South by Southwest—is that websites like these are never finished. Instead, the site evolves, growing (and occasionally dropping) features over time.

Traditionally, the mental model for websites has been architectural. Even the term itself, website, invites a construction site comparison. Plans are drawn up and approved, then the thing gets built, then it’s done.

That approach doesn’t apply to the newer, smarter websites that are dominating the scene today. Heck, it doesn’t even apply to older websites like Amazon and Google who have always been smart about constantly iterating changes.

Steve Ballmer was onto something when he said “developers, developers, developers, ad nauseam”. Websites, like Soylent Green, are people. Without the people improving and tweaking things, the edifice of the site structure will crack.

I’m going to make a conscious effort to stop thinking about the work I do on the Web in terms of building and construction: I need to find new analogies from the world of biology.

Update: Paul Hammond told me via IM about a book called “How Buildings Learn: What happens after they’re built”. Maybe I don’t need to abandon the architectural analogies completely.