Journal tags: linking (14 posts)

Directory enquiries

I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.

A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.

The most obvious solution is to start up a brand new resource for front-end developers. But there are two problems with that:

  1. It’s really, really, really hard work, and
  2. It feels a bit 927.

I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.

Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.

All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.

I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.

I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.

Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.

Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.

I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?

And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.

There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.

A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”

What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?

I’m just throwing this out there. I’d love it if someone were to run with it.

Linking

One of the first ever personal websites—long before the word “blog” was a mischievous gleam in Peter’s eye—was Justin Hall’s links.net. Linking was right there in the domain name.

I really enjoy sharing links on my website. It feels good to point to something and say, “Hey, check this out!”

Other people are doing it too.

Then there are some relatively new additions to the linking gang:

There are more out there for you to discover and add to your feed reader of choice. Good link hunting!

Stuck in the dock

I was impressed with how Safari now allows you to add websites to the dock:

It feels great to have websites that act just like other apps. I like that some of the icons in my dock are native, some are web apps, and I mostly don’t notice the difference.

Trys liked it too:

For all intents and purposes, this is a desktop application created without a single line of Swift or Objective-C, or any heavy Electron wrappers.

Oh, and the application can work offline! Service workers and browser storage are more than stable enough to handle a variety of offline loading patterns. These are truly exciting times to be building for the web!

There was one aspect that I was particularly pleased with. External links:

Links within a Safari-installed web app respect your default browser choice.

Excellent! Except it’s no longer true. At least not in some cases. The behaviour is inconsistent, but I’m running the latest version of Safari on the latest version of Sonoma, and now external links in a Safari-installed web app are broken. They just stay in the same application.

I thought maybe it was related to whether the website’s manifest file has the display value set to “standalone” rather than, say, “minimal-ui”. Maybe the “standalone” instruction is being taken literally? But even when I change the value I’m still getting the broken behaviour.
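For context, the display member lives in the website’s web app manifest, something like this (a minimal sketch; the name and start_url values are placeholders):

{
    "name": "Example App",
    "start_url": "/",
    "display": "standalone"
}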

This may sound like a small thing, but it completely changes the feel of using the web app. Instead of feeling like “I’m using an app that just happens to be on the web”, it now feels like “I’m using a web browser but with fewer features.”

I’ve been loving having Mastodon as a standalone app in my dock. It used to be that if I clicked on a link in a Mastodon post, it would open in my browser of choice (Firefox) where I could then bookmark it, or do any other tasks that my browser offers me. Now if I click on a link in Mastodon, I’m stuck in the same “app”. It feels horribly stifling.

I can right-click on a link and get options that still keep me in the same app, like “Open link” or “Open Link in New Window.” To actually open the link in my web browser, I have to select “Copy Link”, then go to my web browser, open a new tab, and paste the link in there.

This is broken. I hope it isn’t intentional. Maybe I’m just at the receiving end of some weird glitch. If this stays this way, I’ll probably just remove the Safari-installed web apps from my dock. They feel pointless if they’re just roach motels.

I’d love to file a bug for this, but this isn’t a WebKit bug, it’s a Safari bug (and the WebKit bug tracker is at pains to point out that WebKit and Safari are not the same thing). But have you ever tried to file a bug with Apple? Good luck!

Anyway, I sincerely hope that this change will be walked back. Otherwise websites in the dock are dead in the water.

Hope

My last long-distance trip before we were all grounded by The Situation was to San Francisco at the end of 2019. I attended Indie Web Camp while I was there, which gave me the opportunity to add a little something to my website: an “on this day” page.

I’m glad I did. While it’s probably of little interest to anyone else, I enjoy scrolling back to see how the same date unfolded over the years.

’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.

But when I scroll down through my “on this day” page, it also feels like descending deeper into the dark waters of linkrot. For each year back in time, the probability of a link still working decreases until there’s nothing but decay.

Sadly this is nothing new. I’ve been lamenting the state of digital preservation for years now. More recently Jonathan Zittrain penned an article in The Atlantic on the topic:

Too much has been lost already. The glue that holds humanity’s knowledge together is coming undone.

In one sense, linkrot is the price we pay for the web’s particular system of hypertext. We don’t have two-way linking, which would require a centralised repository of links, and that would be prohibitively complex to maintain. So when you want to link to something on the web, you just do it. An a element with an href attribute. That’s it. You don’t need to check with the owner of the resource you’re linking to. You don’t need to check with anyone. You have complete freedom to link to any URL you want to.
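Written out in full, that’s all a link is (the URL here is a placeholder):

<a href="https://example.com/">Check this out!</a>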

But it’s that same simple system that makes the act of linking a gamble. If the URL you’ve linked to goes away, you’ll have no way of knowing.

As I scroll down my “on this day” page, I come across more and more dead links that have been snapped off from the fabric of the web.

If I stop and think about it, it can get quite dispiriting. Why bother making hyperlinks at all? It’s only a matter of time until those links break.

And yet I still keep linking. I still keep pointing to things and saying “Check this out!” even though I know that over a long enough timescale, there’s little chance that the link will hold.

In a sense, every hyperlink on the World Wide Web is a little act of hope. Even though I know that when I link to something, it probably won’t last, I still harbour that hope.

If hyperlinks are built on hope, and the web is made of hyperlinks, then in a way, the World Wide Web is quite literally made out of hope.

I like that.

Associative trails

Matt wrote recently about how different writers keep notes:

I’m also reminded of how writers I love and respect maintain their own reservoirs of knowledge, complete with migratory paths down from the mountains.

I have a section of my site called “notes” but the truth is that every single thing I post on here—whether it’s a link, a blog post, or anything else—is really a “note to self.”

When it comes to retrieving information from this online memex of mine, I use tags. I’ve got search forms on my site, but usually I’ll go to the address bar in my browser instead and think “now, what would past me have tagged that with…” as I type adactio.com/tags/... (or, if I want to be more specific, adactio.com/links/tags/... or adactio.com/journal/tags/...).

It’s very satisfying to use my website as a back-up brain like this. I can get stuff out of my head and squirreled away, but still have it available for quick recall when I want it. It’s especially satisfying when I’m talking to someone else and something they say reminds me of something relevant, and I can go “Oh, let me send you this link…” as I retrieve the tagged item in question.

But I don’t think about other people when I’m adding something to my website. My audience is myself.

I know there’s lots of advice out there about considering your audience when you write, but when it comes to my personal site, I’d find that crippling. It would be one more admonishment from the inner critic whispering “no one’s interested in that”, “you have nothing new to add to this topic”, and “you’re not qualified to write about this.” If I’m writing for myself, then it’s easier to have fewer inhibitions. By treating everything as a scrappy note-to-self, I can avoid agonising about quality control …although I still spend far too long trying to come up with titles for posts.

I’ve noticed—and other bloggers have corroborated this—there’s no correlation whatsoever between the amount of time you put into something and how much it’s going to resonate with people. You might spend days putting together a thoroughly-researched article only to have it met with tumbleweeds when you finally publish it. Or you might bash something out late at night after a few beers only to find it on the front page of various aggregators the next morning.

If someone else gets some value from a quick blog post that I dash off here, that’s always a pleasant surprise. It’s a bonus. But it’s not my reason for writing. My website is primarily a tool and a library for myself. It just happens to also be public.

I’m pretty sure that nobody but me uses the tags I add to my links and blog posts, and that’s fine with me. It’s very much a folksonomy.

Likewise, there’s a feature I added to my blog posts recently that is probably only of interest to me. Under each blog post, there’s a heading saying “Previously on this day” followed by links to any blog posts published on the same date in previous years. I find it absolutely fascinating to spelunk down those hyperlink potholes, but I’m sure for anyone else it’s about as interesting as a slideshow of holiday photos.

Matt took this further by adding an “on this day” URL to his site. What a great idea! I’ve now done the same here:

adactio.com/archive/onthisday

That URL is almost certainly only of interest to me. And that’s fine.

Cascading Style Sheets

There are three ways—that I know of—to associate styles with markup.

External CSS

This is probably the most common. Using a link element with a rel value of “stylesheet”, you point to a URL using the href attribute. That URL is a style sheet that is applied to the current document (the relationship of the linked resource is that it is a “stylesheet” for the current document).

<link rel="stylesheet" href="/path/to/styles.css">

In theory you could associate a style sheet with a document using an HTTP header, but I don’t think many browsers support this in practice.
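For the record, that header is the HTTP Link header, and the syntax looks like this:

Link: </path/to/styles.css>; rel="stylesheet"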

You can also pull in external style sheets using the @import declaration in CSS itself, as long as the @import rule is declared at the start, before any other styles.

@import url('/path/to/more-styles.css');

When you use link rel="stylesheet" to apply styles, it’s a blocking request: the browser will fetch the style sheet before rendering the HTML. It needs to know how the HTML elements will be painted to the screen so there’s no point rendering the HTML until the CSS is parsed.

Embedded CSS

You can also place CSS rules inside a style element directly in the document. This is usually in the head of the document.

<style>
element {
    property: value;
}
</style>

When you embed CSS in the head of a document like this, there is no network request like there would be with external style sheets so there’s no render-blocking behaviour.

You can put any CSS inside the style element, which means that you could use embedded CSS to load external CSS using an @import statement (as long as that @import statement appears right at the start).

<style>
@import url('/path/to/more-styles.css');
element {
    property: value;
}
</style>

But then you’re back to having a network request.

Inline CSS

Using the style attribute you can apply CSS rules directly to an element. This is a universal attribute. It can be used on any HTML element. That doesn’t necessarily mean that the styles will work, but your markup is never invalidated by the presence of the style attribute.

<element style="property: value">
</element>

Whereas external CSS and embedded CSS don’t have any effect on specificity, inline styles are positively radioactive with specificity. Any styles applied this way are almost certain to over-ride any external or embedded styles.
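A contrived illustration:

<style>
p.intro {
    color: blue;
}
</style>
<p class="intro" style="color: red">This text ends up red, not blue.</p>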

You can also apply styles using JavaScript and the Document Object Model.

element.style.property = 'value';

Using the DOM style object this way is equivalent to inline styles. The radioactive specificity applies here too.

Style declarations specified in external style sheets or embedded CSS follow the rules of the cascade. Values can be over-ridden depending on the order they appear in. Combined with the separate-but-related rules for specificity, this can be very powerful. But if you don’t understand how the cascade and specificity work then the results can be unexpected, leading to frustration. In that situation, inline styles look very appealing—there’s no cascade and everything has equal specificity. But using inline styles means forgoing a lot of power—you’d be ditching the C in CSS.

A common technique for web performance is to favour embedded CSS over external CSS in order to avoid the extra network request (at least for the first visit—there are clever techniques for caching an external style sheet once the HTML has already loaded). This is commonly referred to as inlining your CSS. But really it should be called embedding your CSS.
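Here’s a rough sketch of that pattern: embed the critical styles, then load the full style sheet without blocking rendering (the media-switching trick shown here is one common approach to the deferred loading, not the only one):

<style>
/* Critical styles needed for the first render, embedded in the document */
element {
    property: value;
}
</style>
<!-- Fetched as a print style sheet, so it doesn't block rendering,
     then switched over once it arrives -->
<link rel="stylesheet" href="/path/to/styles.css" media="print" onload="this.media='all'">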

This language mix-up is not a hill I’m going to die on (that hill would be referring to blog posts as blogs) but I thought it was worth pointing out.

Bookshop

Back at the start of the (first) lockdown, I wrote about using my website as an outlet:

While you’re stuck inside, your website is not just a place you can go to, it’s a place you can control, a place you can maintain, a place you can tidy up, a place you can expand. Most of all, it’s a place you can lose yourself in, even if it’s just for a little while.

Last week was eventful and stressful. For everyone. I found myself once again taking refuge in my website, tinkering with its inner workings in the way that someone else would potter about in their shed or take to their garage to strip down the engine of some automotive device.

Colly drew my attention to Bookshop.org, newly launched in the UK. It’s an umbrella website for independent bookshops to sell through. It’s also got an affiliate scheme, much like Amazon. I set up a Bookshop page for myself.

I’ve been tracking the books I’m reading for the past three years here on my own website. I set about reproducing that list on Bookshop.

It was exactly the kind of not-exactly-mindless but definitely-not-challenging task that was perfect for the state of my brain last week. Search for a book; find the ISBN number; paste that number into a form. It’s the kind of task that a real programmer would immediately set about automating but one that I embraced as a kind of menial task to keep me occupied.

I wasn’t able to get a one-to-one match between the list on my site and my reading list on Bookshop. Some titles aren’t available in the online catalogue. For example, the book I’m reading right now—A Paradise Built in Hell by Rebecca Solnit—is nowhere to be found, which seems like an odd omission.

But most of the books I’ve read are there on Bookshop.org, complete with pretty book covers. Then I decided to reverse the process of my menial task. I took all of the ISBN numbers from Bookshop and added them as machine tags to my reading notes here on my own website. Book cover images on Bookshop have predictable URLs that use the ISBN number (well, technically the EAN number, or ISBN-13, but let’s not go down a 927 rabbit hole here). So now I’m using that metadata to pull in images from Bookshop.org to illustrate my reading notes here on adactio.com.

I’m linking to the corresponding book on Bookshop.org using this URL structure:

https://uk.bookshop.org/a/{{ affiliate code }}/{{ ISBN number }}

I realised that I could also link to the corresponding entry on Open Library using this URL structure:

https://openlibrary.org/isbn/{{ ISBN number }}

Here, for example, is my note for The Raven Tower by Ann Leckie. That entry has a tag:

book:ean=9780356506999

With that information I can illustrate my note with this image:

https://images-eu.bookshop.org/product-images/images/9780356506999.jpg

I’m linking off to this URL on Bookshop.org:

https://uk.bookshop.org/a/980/9780356506999

And this URL on Open Library:

https://openlibrary.org/isbn/9780356506999
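Putting that all together is simple enough to sketch in a few lines of JavaScript (using the same EAN and affiliate code as in the URLs above):

// derive the cover image, Bookshop, and Open Library URLs from a machine tag
const tag = 'book:ean=9780356506999';
const ean = tag.split('=')[1];
const coverImage = 'https://images-eu.bookshop.org/product-images/images/' + ean + '.jpg';
const bookshopLink = 'https://uk.bookshop.org/a/980/' + ean;
const openLibraryLink = 'https://openlibrary.org/isbn/' + ean;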

The end result is that my reading list now has more links and pretty pictures.

Oh, I also set up a couple of shorter lists on Bookshop.org:

The books listed in those are drawn from my end of the year round-ups when I try to pick one favourite non-fiction book and one favourite work of fiction (almost always speculative fiction). The books in those two lists are the ones that get two hearty thumbs up from me. If you click through to buy one of them, the price might not be as cheap as on Amazon, but you’ll be supporting an independent bookshop.

On this day

I’m in San Francisco to speak at An Event Apart, which kicks off tomorrow. But I arrived a few days early so that I could attend Indie Web Camp SF.

Yesterday was the discussion day. Most of the attendees were seasoned indie web campers, so quite a few of the discussions went deep on some of the building blocks. It was a good opportunity to step back and reappraise technology decisions.

Today is the day for making, tinkering, fiddling, and hacking. I had a few different ideas of what to do, mostly around showing additional context on my blog posts. I could, for instance, show related posts—other blog posts (or links) that have similar tags attached to them.

But I decided that a nice straightforward addition would be to show a kind of “on this day” context. After all, I’ve been writing blog posts here for eighteen years now; chances are that if I write a blog post on any given day, there will be something in the archives from that same day in previous years.

So that’s what I’ve done. I’ll be demoing it shortly here at Indie Web Camp, but you can see it in action now. If you look at the page for this blog post, you should see a section at the end with the heading “Previously on this day”. There you’ll see links to other posts I’ve written on December 8th in years gone by.

It’s quite a mixed bag. There’s a post about when I used to have a webcam from sixteen years ago. There’s a report from the Flash On The Beach conference from thirteen years ago (I wrote that post while I was in Berlin). And five years ago, I was writing about markup patterns for web components.

I don’t know if anyone other than me will find this feature interesting (but as it’s my website, I don’t really care). Personally, I find it fascinating to see how my writing has changed, both in terms of subject matter and tone.

Needless to say, the further back in time you go, the more chance there is that the links in my blog posts will no longer work. That’s a real shame. But then it’s a pleasant surprise when I find something that I linked to that is still online after all this time. And I can take comfort from the fact that if anyone has ever linked to anything I’ve written on my website, then those links still work.

Links, tags, and feeds

A little while back, I switched from using Chrome as my day-to-day browser to using Firefox. I could feel myself getting a bit too comfortable with one particular browser, and that’s not good. I reckon it’s good to shake things up a little every now and then. Besides, there really isn’t that much difference once you’ve transferred over bookmarks and cookies.

Unfortunately I’m being bitten by this little bug in Firefox. It causes some of my bookmarklets to fail on certain sites with strict Content Security Policies (and CSPs shouldn’t affect bookmarklets). I might have to switch back to Chrome because of this.

I use bookmarklets throughout the day. There’s the Huffduffer bookmarklet, of course, for whenever I come across a podcast episode or other piece of audio that I want to listen to later. But there’s also my own home-rolled bookmarklet for posting links to my site. It doesn’t do anything clever—it grabs the title and URL of the currently open page and pre-populates a form in a new window, leaving me to add a short description and some tags.
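The whole thing amounts to little more than this (a sketch; the form’s URL here is a stand-in, not my actual endpoint):

javascript:(function(){
    var url = encodeURIComponent(location.href);
    var title = encodeURIComponent(document.title);
    window.open('https://example.com/links/new?url=' + url + '&title=' + title);
})();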

If you’re reading this, then you’re familiar with the “journal” section of adactio.com, but the “links” section is where I post the most. Here, for example, are all the links I posted yesterday. It varies from day to day, but there’s generally a handful.

Should you wish to keep track of everything I’m linking to, there’s a twitterbot you can follow called @adactioLinks. It uses a simple IFTTT recipe to poll my RSS feed of links and send out a tweet whenever there’s a new entry.

Or you can drink straight from the source and subscribe to the RSS feed itself, if you’re still rocking it old-school. But if RSS is your bag, then you might appreciate a way to filter those links…

All my links are tagged. Heavily. This is because all my links are “notes to future self”, and all my future self has to do is ask “what would past me have tagged that link with?” when I’m trying to find something I previously linked to. I end up using my site’s URLs as an interface:

At the front-end gatherings at Clearleft, I usually wrap up with a quick tour of whatever I’ve added that week to:

Well, each one of those tags also has a corresponding RSS feed:

…and so on.

That means you can subscribe to just the links tagged with something you’re interested in. Here’s the full list of tags if you’re interested in seeing the inside of my head.

This also works for my journal entries. If you’re only interested in my blog posts about frontend development, you might want to subscribe to:

Here are all the tags from my journal.

You can even mix them up. For everything I’ve tagged with “typography”—whether it’s links, journal entries, or articles—the URL is:

The corresponding RSS feed is:

You get the idea. Basically, if something on my site is a list of items, chances are there’s a corresponding RSS feed. Sometimes there might even be a JSON feed. Hack some URLs to see.

Meanwhile, I’ll be linking, linking, linking…

An associative trail

Every now and then, I like to revisit Vannevar Bush’s classic article from the July 1945 edition of the Atlantic Monthly called As We May Think in which he describes a theoretical machine called the memex.

A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

1945! Apart from its analogue rather than digital nature, it’s a remarkably prescient vision. In particular, there’s the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Many decades later, Anne Washington ponders what a legal memex might look like:

My legal Memex builds a network of the people and laws available in the public records of politicians and organizations. The infrastructure for this vision relies on open data, free access to law, and instantaneous availability.

As John Sheridan from the UK’s National Archives points out, hypertext is the perfect medium for laws:

Despite the drafter’s best efforts to create a narrative structure that tells a story through the flow of provisions, legislation is intrinsically non-linear content. It positively lends itself to a hypertext based approach. The need for legislation to escape the confines of the printed form predates all the major innovators and innovations in hypertext, from Vannevar Bush’s vision in “As We May Think”, to Ted Nelson’s coining of the term “hypertext”, through to Berners-Lee’s breakthrough world wide web. I like to think that Nelson’s concept of transclusion was foreshadowed several decades earlier by the textual amendment (where one Act explicitly alters – inserts, omits or amends – the text of another Act, an approach introduced to UK legislation at the beginning of the 20th century).

That’s from a piece called Deeply Intertwingled Laws. The verb “to intertwingle” was another one of Ted Nelson’s neologisms.

There’s an associative trail from Vannevar Bush to Ted Nelson that takes some other interesting turns…

Picture a new American naval recruit in 1945, getting ready to ship out to the Pacific to fight against the Japanese. Just as the ship is leaving the harbour, word comes through that the war is over. And so instead of fighting across the islands of the Pacific, this young man finds himself in a hut in the Philippines, reading whatever is to hand. There’s a copy of The Atlantic Monthly, the one with an article called As We May Think. The sailor was Douglas Engelbart, and a few years later, when he was deciding how he wanted to spend the rest of his life, that article led him to pursue the goal of augmenting human intellect. He gave the mother of all demos, featuring NLS, a working hypermedia system.

Later, thanks to Bill Atkinson, we’d get another system called HyperCard. It was advertised with the motto Freedom to Associate, in an advertising campaign that directly referenced Vannevar Bush.

And now I’m using the World Wide Web, a hypermedia system that takes in the whole planet, to create an associative trail. In this post, I’m linking (without asking anyone for permission) to six different sources, and in doing so, I’m creating a unique associative trail. And because this post has a URL (that won’t change), you are free to take it and make it part of your own associative trail on your digital memex.

Deep linking with fragmentions

When I was marking up Resilient Web Design I wanted to make sure that people could link to individual sections within a chapter. So I added IDs to all the headings. There’s no UI to expose that though—like the hover pattern that some sites use to show that something is linkable—so unless you know the IDs are there, there’s no way of getting at them other than “view source.”

But if you’re reading a passage in Resilient Web Design and you highlight some text, you’ll notice that the URL updates to include that text after a hash symbol. If that updated URL gets shared, then anyone following it should be sent straight to that string of text within the page. That’s fragmentions in action:

Fragmentions find the first matching word or phrase in a document and focus its closest surrounding element. The match is determined by the case-sensitive string following the # (or ## double-hash)

It’s a similar idea to Eric and Simon’s proposal to use CSS selectors as fragment identifiers, but using plain text instead. You can find out more about the genesis of fragmentions from Kevin. I’m using Jonathan Neal’s script with some handy updates from Matthew.

I’m using the fragmention support to power the index of the book. It relies on JavaScript to work though, so Matthew has come to the rescue again and created a version of the site with IDs for each item linked from the index (I must get around to merging that).

The fragmention functionality is ticking along nicely with one problem…

I’ve tweaked the typography of Resilient Web Design to within an inch of its life, including a crude but effective technique to avoid widowed words at the end of a paragraph. The last two words of every paragraph are separated by a no-break space character (U+00A0) instead of a regular space.

That solves the widowed words problem, but it confuses the fragmention script. Any selected text that includes the last two words of a paragraph fails to match. I’ve tried tweaking the script, but I’m stumped. If you fancy having a go, please have at it.
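One plausible approach would be to treat no-break spaces as ordinary spaces on both sides of the comparison, something like this (a sketch, not necessarily the fix that landed):

function normalise(text) {
    return text.replace(/\u00a0/g, ' ');
}
// compare normalise(textInThePage) against normalise(textInTheFragment)
// instead of comparing the raw strings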

Update: And fixed! Thanks to Lee.

Relinkification

On Jessica’s recommendation, I read a piece on the Guardian website called The eeriness of the English countryside:

Writers and artists have long been fascinated by the idea of an English eerie – ‘the skull beneath the skin of the countryside’. But for a new generation this has nothing to do with hokey supernaturalism – it’s a cultural and political response to contemporary crises and fears

I liked it a lot. One of the reasons I liked it was not just for the text of the writing, but the hypertext of the writing. Throughout the piece there are links off to other articles, books, and blogs. For me, this enriches the piece and it set me off down some rabbit holes of hyperlinks with fascinating follow-ups waiting at the other end.

Back in 2010, Scott Rosenberg wrote a series of three articles over the course of two months called In Defense of Hyperlinks:

  1. Nick Carr, hypertext and delinkification,
  2. Money changes everything, and
  3. In links we trust.

They’re all well worth reading. The whole thing was kicked off with a well-rounded debunking of Nicholas Carr’s claim that hyperlinks harm text. Instead, Rosenberg finds that hyperlinks within a text embiggen the writing …providing they’re done well:

I see links as primarily additive and creative. Even if it took me a little longer to read the text-with-links, even if I had to work a bit harder to get through it, I’d come out the other side with more meat and more juice.

Links, you see, do so much more than just whisk us from one Web page to another. They are not just textual tunnel-hops or narrative chutes-and-ladders. Links, properly used, don’t just pile one “And now this!” upon another. They tell us, “This relates to this, which relates to that.”

The difference between a piece of writing being part of the web and a piece of writing being merely on the web is something I talked about a few years back in a presentation called Paranormal Interactivity at ‘round about the 15 minute mark:

Imagine if you were to take away all the regular text and leave only the hyperlinks on Wikipedia, you could still get the gist, right? Every single link there is like a wormhole to another part of this “choose your own adventure” game that we’re playing every day on the web. I love that. I love the way that Wikipedia uses links.

That ability of the humble hyperlink to join concepts together lies at the heart of Tim Berners-Lee’s World Wide Web …and Ted Nelson’s Project Xanadu, and Douglas Engelbart’s Dynamic Knowledge Environments, and Vannevar Bush’s idea of the Memex. All of those previous visions of a hyperlinked world were—in many ways—superior to the web. But the web shipped. It shipped with brittle, one-way linking, but it shipped. And now today anyone can create a connection between two ideas by linking to resources that represent those ideas. All you need is an HTML document that contains some A elements with href attributes, and a URL to act as that document’s address.

Like the one you’re accessing now.

Not only can I link to that article on the Guardian’s website, I can also pair it up with other related links, like Warren Ellis’s talk from dConstruct 2014:

Inventing the next twenty years, strategic foresight, fictional futurism and English rural magic: Warren Ellis attempts to convince you that they are all pretty much the same thing, and why it was very important that some people used to stalk around village hedgerows at night wearing iron goggles.

There is definitely the same feeling of “the eeriness of the English countryside” in Warren’s talk. If you haven’t listened to it yet, set aside some time. It is enticing and disquieting in equal measure …like many of the works linked to from the piece on the Guardian.

There’s another link I’d like to make, and it happens to be to another dConstruct speaker.

From that Guardian piece:

Yet state surveillance is no longer testified to in the landscape by giant edifices. Instead it is mostly carried out by software programs running on computers housed in ordinary-looking government buildings, its sources and effects – like all eerie phenomena – glimpsed but never confronted.

James Bridle has been confronting just that. His recent series The Nor took him on a tour of a parallel, obfuscated English countryside. He returned with three pieces of hypertext:

  1. All Cameras Are Police Cameras,
  2. Living in the Electromagnetic Spectrum, and
  3. Low Latency.

I love being able to do this. I love being able to add strands to this world-wide web of ours. Not only can I say “this idea reminds me of another idea”, but I can point to both ideas. It’s up to you whether you follow those links.

Fragmentions

Cennydd’s latest piece in A List Apart is the beautifully written Letter to a Junior Designer.

I really like the way that Cennydd emphasises the importance of being able to explain the reasoning behind your design decisions:

If you haven’t already, sometime in your career you’ll meet an awkward sonofabitch who wants to know why every pixel is where you put it. You should be able to articulate an answer for that person—yes, for every pixel.

That reminds me of something I read fourteen(!) years ago that’s always stayed with me. In an interview in Digital Web magazine, Joshua Davis was asked “What would you say is beauty in design?” His answer:

Being able to justify every pixel.

Here’s a link to the direct quote …except that link probably won’t work for you. Not unless you’ve installed this Chrome extension.

What the hell am I talking about? Well, this is something that Kevin Marks has been working on, following on from the recent W3C annotation workshop.

It’s called fragmentions and it builds on the work done by Eric and Simon. They proposed using CSS selectors as fragment identifiers. Kevin’s idea is to use the words within the text as anchor points (like an automatic Command+F):

To tell these apart from an id link, I suggest using a double hash - ## for the fragment, and then words that identify the text. For example:

http://epeus.blogspot.com/2003_02_01_archive.html##annotate+the+web

That link will work in your browser because of this script, which Kevin has added to his site. I may well add that script to this site too.
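To give a flavour of how a fragmention script might work (a simplified sketch, not Kevin’s actual implementation):

// on page load, look for a double-hash fragment and scroll to the
// first text node containing that phrase
window.addEventListener('DOMContentLoaded', function () {
    var match = location.hash.match(/^##(.+)$/);
    if (!match) return;
    var phrase = decodeURIComponent(match[1]).replace(/\+/g, ' ');
    var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
    while (walker.nextNode()) {
        if (walker.currentNode.textContent.indexOf(phrase) !== -1) {
            walker.currentNode.parentElement.scrollIntoView();
            break;
        }
    }
});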

Fragmentions are a nice idea and—to bring it back to Cennydd’s point—nicely explained.

A matter of protocol

The web is made of sugar, spice and all things nice. On closer inspection, this is what most URLs on the web are made of:

protocol://domain/path
  1. The protocol—e.g. http—followed by a colon and two slashes (for which Sir Tim apologises).
  2. The domain—e.g. adactio.com or huffduffer.com.
  3. The path—e.g. /journal/tags/nerdiness or /js/global.js.

(I’m leaving out the whole messy business of port numbers—which can be appended to the domain with a colon—because just about everything on the web is served over the default port 80.)

Most URLs on the web are either written in full as absolute URLs:

a href="http://adactio.com/journal/tags/nerdiness"
script src="https://huffduffer.com/js/global.js"

Or else they’re written out relative to the domain, like this:

a href="/journal/tags/nerdiness"
script src="/js/global.js"

It turns out that URLs can not only be written relative to the linking document’s domain, but they can also be written relative to the linking document’s protocol:

a href="//adactio.com/journal/tags/nerdiness"
script src="//huffduffer.com/js/global.js"

If the linking document is being served over HTTP, then those URLs will point to http://adactio.com/journal/tags/nerdiness and https://huffduffer.com/js/global.js but if the linking document is being served over HTTP Secure, the URLs resolve to https://adactio.com/journal/tags/nerdiness and https://huffduffer.com/js/global.js.
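A quick way to see those resolution rules in action (using the JavaScript URL API, which postdates this post but demonstrates the same behaviour):

new URL('//huffduffer.com/js/global.js', 'http://adactio.com/').href;
// "http://huffduffer.com/js/global.js"
new URL('//huffduffer.com/js/global.js', 'https://adactio.com/').href;
// "https://huffduffer.com/js/global.js"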

Writing the src attribute relative to the linking document’s protocol is something that Remy is already doing with his HTML5 shim:

<!--[if lt IE 9]>
<script src="//html5shim.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->

If you have a site that is served over both HTTP and HTTPS, and you’re linking to a Google-hosted JavaScript library—something I highly recommend—then you should probably get in the habit of writing protocol-relative URLs:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js">
</script>

This is something that HTML5 Boilerplate does by default. HTML5 Boilerplate really is a great collection of fantastically useful tips and tricks …all wrapped in a terrible, terrible name.