Journal tags: dom


event.target.closest

Eric mentioned the JavaScript closest method. I use it all the time.

When I wrote the book DOM Scripting back in 2005, I’d estimate that 90% of the JavaScript I was writing boiled down to:

  1. Find these particular elements in the DOM and
  2. When the user clicks on one of them, do something.

It wasn’t just me either. I reckon that was 90% of most JavaScript on the web: progressive disclosure widgets, accordions, carousels, and so on.

That’s one of the reasons why jQuery became so popular. That first step (“find these particular elements in the DOM”) used to be a finicky affair involving getElementsByTagName, getElementById, and other long-winded DOM methods. jQuery came along and allowed us to use CSS selectors.

These days, we don’t need jQuery for that because we’ve got querySelector and querySelectorAll (and we can thank jQuery for their existence).

Let’s say you want to add some behaviour to every button element with a class of special. Or maybe you use a data- attribute instead of the class attribute; the same principle applies. You want something special to happen when the user clicks on one of those buttons.

  1. Use querySelectorAll('button.special') to get a list of all the right elements,
  2. Loop through the list, and
  3. Attach addEventListener('click') to each element.
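Spelled out in code, that approach looks something like this (a sketch; doSomethingSpecial is just a stand-in for whatever behaviour you want to attach):

// Find every special button, then attach a listener to each one
var specialButtons = document.querySelectorAll('button.special');
specialButtons.forEach(function (button) {
  button.addEventListener('click', doSomethingSpecial, false);
});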

That’s fine for a while. But if you’ve got a lot of special buttons, you’ve now got a lot of event listeners. You might be asking the browser to do a lot of work.

There’s another complication. The code you’ve written runs once, when the page loads. Suppose the contents of the page have changed in the meantime. Maybe elements are swapped in and out using Ajax. If a new special button shows up on the page, it won’t have an event handler attached to it.

You can switch things around. Instead of adding lots of event handlers to lots of elements, you can add one event handler to the root element. Then figure out whether the element that just got clicked is special or not.

That’s where closest comes in. It makes this kind of event handling pretty straightforward.

To start with, attach the event listener to the document:

document.addEventListener('click', doSomethingSpecial, false);

That function doSomethingSpecial will be executed whenever the user clicks on anything. Meanwhile, if the contents of the document are updated via Ajax, no problem!

Use the closest method in combination with the target property of the event to figure out whether that click was something you’re interested in:

function doSomethingSpecial(event) {
  if (event.target.closest('button.special')) {
    // do something
  }
}

There you go. Like querySelectorAll, the closest method takes a CSS selector—thanks again, jQuery!

Oh, and if you want to reduce the nesting inside that function, you can reverse the logic and return early like this:

function doSomethingSpecial(event) {
  if (!event.target.closest('button.special')) return;
  // do something
}

There’s a similar method to closest called matches. But that will only work if the user clicks directly on the element you’re interested in. If the actual click target is nested inside the element you’re looking for, matches won’t find it, but closest will.
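To make the difference concrete, imagine the special button contains an icon (an img element, say) and the user clicks the icon. Here’s a quick sketch, purely to illustrate the two methods:

document.addEventListener('click', function (event) {
  // If the click lands on the img, event.target is the img, not the button
  console.log(event.target.matches('button.special')); // false: the img itself doesn't match
  console.log(event.target.closest('button.special')); // the button element, or null if there's no match
}, false);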

Like Eric said:

Very nice.

Submitting a form with datalist

I’m a big fan of HTML5’s datalist element and its elegant design. It’s a way to progressively enhance any input element into a combobox.

You use the list attribute on the input element to point to the ID of the associated datalist element.

<label for="homeworld">Your home planet</label>
<input type="text" name="homeworld" id="homeworld" list="planets">
<datalist id="planets">
 <option value="Mercury">
 <option value="Venus">
 <option value="Earth">
 <option value="Mars">
 <option value="Jupiter">
 <option value="Saturn">
 <option value="Uranus">
 <option value="Neptune">
</datalist>

It even works on input type="color", which is pretty cool!

The most common use case is as an autocomplete widget. That’s how I’m using it over on The Session, where the datalist is updated via Ajax every time the input is updated.

But let’s stick with a simple example, like the list of planets above. Suppose the user types “jup” …the datalist will show “Jupiter” as an option. The user can click on that option to automatically complete their input.

It would be handy if you could automatically submit the form when the user chooses a datalist option like this.

Well, tough luck.

The datalist element emits no events. There’s no way of telling if it has been clicked. This is something I’ve been trying to find a workaround for.

I got my hopes up when I read Amber’s excellent article about document.activeElement. But no, the focus stays on the input when the user clicks on an option in a datalist.

So if I can’t detect whether a datalist has been used, the best I can do is try to infer it. I know it’s not exactly the same thing, and it won’t be as reliable as true detection, but here’s my logic:

  • Keep track of the character count in the input element.
  • Every time the input is updated in any way, check the current character count against the last character count.
  • If the difference is greater than one, something interesting happened! Maybe the user pasted a value in …or maybe they used the datalist.
  • Loop through each of the options in the datalist.
  • If there’s an exact match with the current value of the input element, chances are the user chose that option from the datalist.
  • So submit the form!

Here’s how that translates into DOM scripting code:

document.querySelectorAll('input[list]').forEach( function (formfield) {
  var datalist = document.getElementById(formfield.getAttribute('list'));
  var lastlength = formfield.value.length;
  var checkInputValue = function (inputValue) {
    if (inputValue.length - lastlength > 1) {
      datalist.querySelectorAll('option').forEach( function (item) {
        if (item.value === inputValue) {
          formfield.form.submit();
        }
      });
    }
    lastlength = inputValue.length;
  };
  formfield.addEventListener('input', function () {
    checkInputValue(this.value);
  }, false);
});

I’ve made a gist with some added feature detection and mustard-cutting at the start. You should be able to drop it into just about any page that’s using datalist. It works even if the options in the datalist are dynamically updated, like the example on The Session.
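The gist has the specifics, but a typical mustard-cutting check for datalist support looks something like this (a sketch of the general idea, not necessarily what the gist does):

// Only run the enhancement if the browser actually understands datalist
if ('HTMLDataListElement' in window && 'list' in document.createElement('input')) {
  // …attach the input listeners from the script above
}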

It’s not foolproof. The inference relies on the difference between what was previously typed and what’s autocompleted being more than one character. So in the planets example, if someone has typed “Jupite” and then they choose “Jupiter” from the datalist, the form won’t automatically submit.

But still, I reckon it covers most common use cases. And like the datalist element itself, you can consider this functionality a progressive enhancement.

Custom properties

I made the website for the Clearleft podcast last week. The design is mostly lifted straight from the rest of the Clearleft website. The main difference is the masthead. If the browser window is wide enough, there’s a background image on the right hand side.

I mostly added that because I felt like the design was a bit imbalanced without something there. On the home page, it’s a picture of me. Kind of cheesy. But the image can be swapped out. On other pages, there are different photos. All it takes is a different class name on that masthead.

I thought about having the image be completely random (and I still might end up doing this). I’d need to use a bit of JavaScript to choose a class name at random from a list of possible values. Something like this:

var names = ['jeremy','katie','rich','helen','trys','chris'];
var name = names[Math.floor(Math.random() * names.length)];
document.querySelector('.masthead').classList.add(name);

(You could paste that into the dev tools console to see it in action on the podcast site.)

Then I read something completely unrelated. Cassie wrote a fantastic article on her site called Making lil’ me - part 1. In it, she describes how she made the mouse-triggered animation of her avatar in the footer of her home page.

It’s such a well-written technical article. She explains the logic of what she’s doing, and translates that logic into code. Then, after walking you through the native code, she shows how you could use the Greensock library to achieve the same effect. That’s the way to do it! Instead of saying, “Here’s a library that will save you time—don’t worry about how it works!”, she’s saying “Here’s how it works without a library; here’s how it works with a library; now you can make an informed choice about what to use.” It’s a very empowering approach.

Anyway, in the article, Cassie demonstrates how you can use custom properties as a bridge between JavaScript and CSS. JavaScript reads the mouse position and updates some custom properties accordingly. Those same custom properties are used in CSS for positioning. Voila! Now you’ve got the position of an element responding to mouse movements.
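Here’s a rough sketch of that pattern (the property names --mouse-x and --mouse-y are mine, not necessarily the ones Cassie uses):

// JavaScript reads the mouse position and hands it to CSS as custom properties
document.addEventListener('mousemove', function (event) {
  document.documentElement.style.setProperty('--mouse-x', event.clientX + 'px');
  document.documentElement.style.setProperty('--mouse-y', event.clientY + 'px');
}, false);

Then the CSS can do something like transform: translate(var(--mouse-x), var(--mouse-y)) to move an element around in response.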

That’s what made me think of the code snippet I wrote above to update a class name from JavaScript. I automatically thought of updating a class name because, frankly, that’s how I’ve always done it. I’d say about 90% of the DOM scripting I’ve ever done involves toggling the presence of class values: accordions, fly-out menus, tool-tips, and other progressive disclosure patterns.

That’s fine. But really, I should try to avoid touching the DOM at all. It can have performance implications, possibly triggering unnecessary repaints and reflows.

Now with custom properties, there’s a direct line of communication between JavaScript and CSS. No need to use the HTML as a courier.
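So the random-image snippet above could just as easily hand its value straight to CSS (a sketch; the property name --masthead-image and the image path are made up for the sake of the example):

// Instead of toggling a class, set a custom property and let CSS pick it up
var names = ['jeremy','katie','rich','helen','trys','chris'];
var name = names[Math.floor(Math.random() * names.length)];
document.querySelector('.masthead').style.setProperty('--masthead-image', 'url(/images/' + name + '.jpg)');

The corresponding CSS would then be a single rule like background-image: var(--masthead-image).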

This made me realise that I need to be aware of automatically reaching for a solution just because that’s the way I’ve done something in the past. I should step back and think about the more efficient solutions that are possible now.

It also made me realise that “CSS variables” is a very limiting way of thinking about custom properties. The fact that they can be updated in real time—in CSS or JavaScript—makes them much more powerful than, say, Sass variables (which are more like constants).

But I too have been guilty of underselling them. I almost always refer to them as “CSS custom properties” …but a lot of their potential comes from the fact that they’re not confined to CSS. From now on, I’m going to try calling them custom properties, without any qualification.

A tiny lesson in query selection

We have a saying at Clearleft:

Everything is a tiny lesson.

I bet you learn something new every day, even if it’s something small. These small tips and techniques can easily get lost. They seem almost not worth sharing. But it’s the small stuff that takes the least effort to share, and often provides the most reward for someone else out there. Take for example, this great tip for getting assets out of Sketch that Cassie shared with me.

Cassie was working on a piece of JavaScript yesterday when we spotted a tiny lesson that tripped up both of us. The script was a fairly straightforward piece of DOM scripting. As a general rule, we do a sort of feature detection near the start of the script. Let’s say you’re using querySelector to get a reference to an element in the DOM:

var someElement = document.querySelector('.someClass');

Before going any further, check to make sure that the reference isn’t falsey (in other words, make sure that DOM node actually exists):

if (!someElement) return;

That will exit the script if there’s no element with a class of someClass on the page.

The situation that tripped us up was like this:

var myLinks = document.querySelectorAll('a.someClass');

if (!myLinks) return;

That should exit the script if there are no A elements with a class of someClass, right?

As it turns out, querySelectorAll is subtly different to querySelector. If the selector you pass to querySelector doesn’t match any element, it will return a value of null. But querySelectorAll always returns a list (well, technically it’s a NodeList, but same difference mostly). So if the selector you pass to querySelectorAll doesn’t match anything, it still returns a NodeList, but an empty one. That means instead of just testing for its existence, you need to test that it’s not empty by checking its length property:

if (!myLinks.length) return;
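To see why the original check never fires, try something like this in the console (a quick sketch using a selector that matches nothing):

var nothing = document.querySelectorAll('.thisClassDoesNotExist');
console.log(nothing);          // an empty NodeList, not null
console.log(Boolean(nothing)); // true, so !nothing is false and the early return never happens
console.log(nothing.length);   // 0, which is why checking length does the job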

That’s a tiny lesson.

Teaching in Porto, day three

Day two ended with a bit of a cliffhanger as I had the students mark up a document, but not yet style it. In the morning of day three, the styling began.

Rather than just treat “styling” as one big monolithic task, I broke it down into typography, colour, negative space, and so on. We time-boxed each one of those parts of the visual design. So everyone got, say, fifteen minutes to write styles relating to font families and sizes, then another fifteen minutes to write styles for colours and background colours. Bit by bit, the styles were layered on.

When it came to layout, we closed the laptops and returned to paper. Everyone did a quick round of 6-up sketching so that there was plenty of fast iteration on layout ideas. That was followed by some critique and dot-voting of the sketches.

Rather than diving into the CSS for layout—which can get quite complex—I instead walked through the approach for layout; namely putting all your layout styles inside media queries. To explain media queries, I first explained media types and then introduced the query part.

I felt pretty confident that I could skip over the nitty-gritty of media queries and cross-device layout because the next masterclass that will be taught at the New Digital School will be a week of responsive design, taught by Vitaly. I just gave them a taster—Vitaly can dive deeper.

By lunch time, I felt that we had covered CSS pretty well. After lunch it was time for the really challenging part: JavaScript.

The reason why I think JavaScript is challenging is that it’s inherently more complex than HTML or CSS. Those are declarative languages with fairly basic concepts at heart (elements, attributes, selectors, etc.), whereas an imperative language like JavaScript means entering the territory of logic, loops, variables, arrays, objects, and so on. I really didn’t want to get stuck in the weeds with that stuff.

I focused on the combination of JavaScript and the Document Object Model as a way of manipulating the HTML and CSS that’s already inside a browser. A lot of that boils down to this pattern:

When (some event happens), then (take this action).

We brainstormed some examples of this, e.g. “When the user submits a form, then show a modal dialogue with an acknowledgement.” I then encouraged them to write a script …but I don’t mean a script in the JavaScript sense; I mean a script in the screenwriting or theatre sense. Line by line, write out each step that you want to accomplish. Once you’ve done that, translate each line of your English (or Portuguese) script into JavaScript.
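For the form example, that translation might look something like this (a sketch; the selector is generic and an alert stands in for a proper modal dialogue):

// 1. Find the form.
// 2. When the user submits the form,
// 3. stop it from submitting normally,
// 4. and show an acknowledgement instead.
var form = document.querySelector('form');
if (form) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    window.alert('Thanks! We got your message.');
  }, false);
}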

I did a quick demo as a proof of concept (which, much to my surprise, actually worked first time), but I was at pains to point out that they didn’t need to remember the syntax or vocabulary of the script; it was much more important to have a clear understanding of the thinking behind it.

With the remaining time left in the day, we ran through the many browser APIs available to JavaScript, from the relatively simple—like querySelector and Ajax—right up to the latest device APIs. I think I got the message across that, using JavaScript, there’s practically no limit to what you can do on the web these days …but the trick is to use that power responsibly.

At this point, we’ve had three days and we’ve covered three layers of web technologies: HTML, CSS, and JavaScript. Tomorrow we’ll take a step back from the nitty-gritty of the code. It’s going to be all about how to plan and think about building for the web before a single line of code gets written.

This is for everyone with a certificate

Mozilla—like Google before them—have announced their plans for deprecating HTTP in favour of HTTPS. I’m all in favour of moving to HTTPS. I’ve done it myself here on adactio.com, on thesession.org, and on huffduffer.com. I have some concerns about the potential linkrot involved in the move to TLS everywhere—as outlined by Tim Berners-Lee—but still, anything that makes the work of GCHQ and the NSA more difficult is alright by me.

But I have a big, big problem with Mozilla’s plan to “encourage” the move to HTTPS:

Gradually phasing out access to browser features.

Requiring HTTPS for certain browser features makes total sense, given the security implications. Service Workers, for example, are quite correctly only available over HTTPS. Any API that has access to a device sensor—or that could be used for fingerprinting in any way—should only be available over HTTPS. In retrospect, Geolocation should have been HTTPS-only from the beginning.

But to deny access to APIs where there are no security concerns, where it is merely a stick to beat people with …that’s just wrong.

This is for everyone. Not just those smart enough to figure out how to add HTTPS to their site. And yes, I know, the theory is that it’s going to get easier and easier, but so far the steps towards making HTTPS easier are just vapourware. That makes Mozilla’s plan look like something drafted by underwear gnomes.

The issue here is timing. Let’s make HTTPS easy first. Then we can start to talk about ways of encouraging adoption. Hopefully we can figure out a way that doesn’t require Mozilla or Google as gatekeepers.

Sven Slootweg outlines the problems with Mozilla’s forced SSL. I highly recommend reading Yoav’s post on deprecating HTTP too. Ben Klemens has written about HTTPS: the end of an era …that era being the one in which anyone could make a website without having to ask permission from an app store, a certificate authority, or a browser manufacturer.

On the other hand, Eric Mill wrote We’re Deprecating HTTP And It’s Going To Be Okay. It makes for an extremely infuriating read because it outlines all the ways in which HTTPS is a good thing (all of which I agree with) without once addressing the issue at hand—a browser that deliberately cripples its feature set for political reasons.

DOM Scripting, second edition

You may have noticed that there’s a second edition of DOM Scripting out. I can’t take any credit for it; I had very little to do with it. But I’m happy to report that the additions meet with my approval.

I’ve written about it on the DOM Scripting blog if you want all the details on what’s new. In short, all the updates are good ones for a book that’s now five years old.

If you’ve already got the first edition, you probably don’t need this update, but if you’re looking for a good introduction to JavaScript with a good smattering of Ajax and HTML5, and an additional dollop of jQuery, the second edition of DOM Scripting will serve you well.

The cover of the second edition, alas, is considerably shittier than the first edition. So don’t judge the second edition of a book by its cover.

Linkrot

The geeks of the UK have been enjoying a prime-time television show dedicated to all things webby. The Virtual Revolution is a rare thing: a television programme about the web made by someone who actually understands the web (Aleks, to be precise).

Still, the four-part series does rely on the usual television documentary trope of presenting its subject matter as a series of yin and yang possibilities. The web: blessing or curse? The web: force for democracy or tool of oppression? Rhetorical questions: a necessary evil or an evil necessity?

The third episode tackles one of the most serious of society’s concerns about our brave new online world, namely the increasing amount of information available to commercial interests and the associated fear that technology is having a negative effect on privacy. Personally, I’m with Matt when he says:

If the end of privacy comes about, it’s because we misunderstand the current changes as the end of privacy, and make the mistake of encoding this misunderstanding into technology. It’s not the end of privacy because of these new visibilities, but it may be the end of privacy because it looks like the end of privacy because of these new visibilities.

Inevitably, whenever there’s a moral panic about the web, a truism that raises its head is the assertion that The Internets Never Forget:

On the one hand, the Internet can freeze youthful folly and a small transgression can stick with you for life. So that picture of you drunk and passed out in a skip, or that heated argument you had on a mailing list when you were twenty can come back and haunt you.

Citation needed.

We seem to have a collective fundamental attribution error when it comes to the longevity of data on the web. While we are very quick to recall the instances when a resource remains addressable for a long enough time period to cause embarrassment or shame later on, we completely ignore all the link rot and 404s that are the fate of most data on the web.

There is an inverse relationship between the age of a resource and its longevity. You are one hundred times more likely to find an embarrassing picture of you on the web uploaded in the last year than to find an embarrassing picture of you uploaded ten years ago.

If a potential boss finds a ten-year old picture of you drunk and passed out at a party, that’s certainly a cause for concern. But such an event would be extraordinary rather than commonplace. If that situation ever happened to me, I would probably feel outrage and indignation like anybody else, but I bet that I would also wonder Hmmm, where’s that picture being hosted? Sounds like a good place for off-site backups.

The majority of data uploaded to the web will disappear. But we don’t pay attention to the disappearances. We pay attention to the minority of instances when data survives.

This isn’t anything specific to the web; this is just the way we human beings operate. It doesn’t matter if the national statistics show a decrease in crime; if someone is mugged on your street, you’ll probably be worried about increased crime. It doesn’t matter how many airplanes successfully take off and land; one airplane crash in ten thousand is enough to make us very worried about dying on a plane trip. It makes sense that we’ve taken this cognitive bias with us onto the web.

As for why resources on the web tend to disappear over time, there are two possible reasons:

  1. The resource is being hosted on a third-party site or
  2. The resource is being hosted on an independent site.

The problem with the first instance is obvious. A commercial third-party responsible for hosting someone else’s hopes and dreams will pull the plug as soon as the finances stop adding up.

I’m sure you’ve seen the famous chart of Web 2.0 logos, but have you seen Meg Pickard’s updated version, adjusted for dead companies?

You cannot rely on a third-party service for data longevity, whether it’s Geocities, Magnolia, Pownce, or anything else.

That leaves you with The Pemberton Option: host your own data.

This is where the web excels: distributed and decentralised data linked together with hypertext. You can still ping third-party sites and allow them access to your data, but crucially, you are in control of the canonical copy (Tantek is currently doing just that, microblogging on his own site and sending copies to Twitter).

Distributed HTML, addressable by URL and available through HTTP: it’s a beautiful ballet that creates the network effects that make the web such a wonderful creation. There’s just one problem and it lies with the URL portion of the equation.

Domain names aren’t bought, they are rented. Nobody owns domain names, except ICANN. While you get to decide the relative structure of URLs on your site, everything between the colon slash slash and the subsequent slash belongs to ICANN. Centralised. Not distributed.

Cool URIs don’t change but even with the best will in the world, there’s only so much we can do when we are tenants rather than owners of our domains.

In his book Weaving The Web, Sir Tim Berners-Lee mentions that exposing URLs in the browser interface was a throwaway decision, a feature that would probably only be of interest to power users. It’s strange to imagine what the web would be like if we used IP numbers rather than domain names—more like a phone system than a postal system.

But in the age of Google, perhaps domain names aren’t quite as important as they once were. In Japanese advertising, URLs are totally out. Instead they show search boxes with recommended search terms.

I’m not saying that we should ditch domain names. But there’s something fundamentally flawed about a system that thinks about domain names in time periods as short as a year or two. It doesn’t bode well for the long-term stability of our data on the web.

On the plus side, that embarrassing picture of you passed out at a party will inevitably disappear …along with almost everything else on the web.

Righting copywrongs

This week, when I’m not battling the zombies of the linkrot apocalypse with a squirrel, I’m preparing my presentation for Bamboo Juice. I wasted far too much time this morning watching the ancillary material from the BBC’s The Speaker in the vain hope that it might help my upcoming public speaking engagement.

My talk is going to be a long zoom presentation along the lines of Open Data and The Long Web. I should concentrate on technologies, standards and file formats but I find myself inevitably being drawn into the issue of copyright and the current ludicrous state of things.

If you feel like getting as riled up as I am, be sure to listen to James Boyle as he speaks at the RSA or is interviewed on CBC. Or you could just cut to the chase and read his book, The Public Domain. If you want to try before you buy, you can read the entire book online in PDF or HTML format—I recommend reading that version with the help of the fantastic .

As if any proof were needed that this is an important, current, relevant issue, Tom reminds me that the future of our culture is under threat again tomorrow. I have duly written to some of my MEPs. Fortunately, I have a most excellent representative:

We’re talking about a gigantic windfall for a few multinational companies, taking millions of pounds from the pockets of consumers and giving it to the record labels. Also, the artistic cost of making songs from the last 50 years public property, thus allowing endless sampling by DJs and other artists, must be taken into consideration.

The UK Greens are committed to a system known as Creative Commons, which offers a flexible range of protections and freedoms for authors and artists. We want to encourage innovation and prevent large corporations from controlling and benefitting from our cultural legacy.

Ajax workshop in NYC

On July 6th I’ll be presenting an all day Ajax and DOM Scripting workshop in New York with Carson Workshops.

A few days later, on the 10th and 11th of July, An Event Apart NYC comes to town. Why not make a week of it? If you’re coming along to AEA, you might want to arrive a few days early for the workshop.

The Ajax workshop costs $495 and will be held at the Digital Sandbox. Registration for An Event Apart costs $1095. It will be held at Scandinavia House.

My previous workshops in London and Manchester were a lot of fun and garnered plenty of praise so I’m really excited about taking the show to New York. If you live in or near New York city, come along for a day of Ajaxy goodness and come away with a Neo-like “I know Kung-Fu” awareness of DOM Scripting.

Oh, and if you sign up now, you’ll also get a copy of my book.

Random

The party shuffle feature in iTunes is supposed to create a random playlist of songs. Oh yeah? Then how come, out of 6,435 songs, it manages to choose the exact same song performed by two different bands one after the other?

Sick Of Goodbyes in the iTunes Party Shuffle list

Update: The question is rhetorical. The fact that coincidences like this occur is in fact proof that the shuffling is truly random. If there were no coincidences, that would be suspicious. The Cederholm-Fugazi effect is another example. It’s just that, as Daniel Gilbert says, we notice things that are memorable and filter out the vast majority.