Journal tags: edge

Who knows?

I love it when I come across some bit of CSS I’ve never heard of before.

Take this article on the text-emphasis property.

“The what property?”, I hear you ask. That was my reaction too. But look, it’s totally a thing.
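
In case it’s new to you too, here’s a minimal example of what the property does (the values come from the CSS Text Decoration spec):

/* Put a small red sesame mark over each character of emphasised text */
em {
  text-emphasis: filled sesame #c00;
  text-emphasis-position: over right;
}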

Or take this article by David Bushell called CSS Button Styles You Might Not Know.

Sure enough, halfway through the article David starts talking about styling the button in an input type="file" using the ::file-selector-button pseudo-element:

All modern browsers support it. I had no idea myself until recently.

He’s right!
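
If you want to try it yourself, usage looks something like this (a rough sketch; the declarations themselves are whatever your design calls for):

input[type="file"]::file-selector-button {
  border: none;
  border-radius: 0.5em;
  padding: 0.5em 1em;
}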

Then I remembered that I’ve got a file upload input in the form I use for posting my notes here on adactio.com (in case I want to add a photo). I immediately opened up my style sheet, eager to use this new-to-me bit of CSS.

I found the bit where I style buttons and this is the selector I saw:

button,
input[type="submit"],
::file-selector-button

Huh. I guess I did know about that pseudo-element after all. Clearly the knowledge exited my brain shortly afterwards.

There’s that tautological cryptic saying, “You don’t know what you don’t know.” But I don’t even know what I do know!

Knowing

There’s a repeated catchphrase used throughout Christopher Nolan’s film Tenet: ignorance is our ammunition.

There are certainly situations where knowledge is regrettable. The somewhat-silly thought experiment of Roko’s basilisk is one example. Once you have knowledge of it, you can’t un-know it, and so you become complicit.

Or, to use another example, I think it was Jason who told me that if you want to make someone’s life miserable, just teach them about typography. Then they’ll see all the terrible kerning out there in the world and they won’t be able to un-see it.

I sometimes wish I could un-learn all I’ve learned about cryptobollocks (I realise that the term “cryptocurrency” is the more widely-used phrase, but it’s so inaccurate I’d rather use a clearer term).

I sometimes wish I could go back to having the same understanding of cryptobollocks as most people: some weird new-fangled technology thing that has something to do with “the blockchain.”

But I delved too deep. I wanted to figure out why seemingly-smart people were getting breathlessly excited about something that sounds fairly ludicrous. Yet the more I learned, the more ludicrous it became. Bitcoin and its ilk are even worse than the occasional headlines and horror stories would have you believe.

As Jules says:

The reason I have such a visceral reaction to crypto projects isn’t just that they’re irresponsibly designed and usually don’t achieve what they promise. It’s also that the thing they promise sounds like a fucking nightmare.

Or, as Simon responded to someone wondering why there was so much crypto hate:

We hate it because we understand it.

I have yet to encounter a crypto project that isn’t a Ponzi scheme. I don’t mean like a Ponzi scheme. I mean they’re literally Ponzi schemes: zero-sum racing to the bottom built entirely on the greater fool theory. The only difference between traditional Ponzi schemes and those built on crypto is that crypto isn’t regulated. Yet.

I recently read The Glass Hotel by Emily St. John Mandel, a novel with the collapse of a Ponzi scheme at its heart. In the aftermath of the scheme’s collapse, there are inevitable questions like “How could you not know?” The narrator answers that question:

It’s possible to both know and not know something.

I’ve been thinking about that a lot.

Clearleft recently took on a project that involves cryptobollocks. Just to be clear, the client is not a fly-by-night crypto startup. This is an established financial institution. It’s not like Mike’s shocking decision to join Kraken of all places.

But in some ways, the fact that this is a respected company almost makes it worse. It legitimises cryptobollocks. It makes it more likely for “regular” folk to get involved (and scammed).

Every Thursday we have an end-of-week meeting and get a summary of how various projects are going. Every time there’s an update about the cryptobollocks project, my heart sinks. By all accounts, the project is going well. That means smart and talented people are using the power of design to make the world a little bit worse.

What will the metrics of success be for this project? Will success be measured by an increase in the amount of Bitcoin trading? I find it hard to see how that can possibly be called successful.

And I haven’t even mentioned the environmental impact of proof-of-work.

Right now, Clearleft is in the process of trying to become a B corp. It’s a long process that involves a lot of box-ticking to demonstrate a genuine care for the environment. There’s no checkbox about cryptobollocks. And yet the fact that we might enable even a few transactions on a proof-of-work blockchain makes a complete mockery of all of our sustainability initiatives.

This is why I wish I could un-know what I know. I wish I could just hear the project updates and say, “Crypto? Don’t know much about it.” But I can’t.

For seventeen years, I’ve felt nothing but pride in the work that Clearleft has done. I’d happily talk about any one of the case studies we’ve worked on. Even on projects that didn’t pan out as expected, or that had all sorts of tricky complications, the work has always been second-to-none. To quote the Agile prime directive:

Everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.

Now, for the first time, I can’t get past that phrase “what they knew at the time.” On the one hand, I’m sure that when they started this project, none of my colleagues knew quite how damaging cryptobollocks is. On the other hand, the longer the project goes on, the harder it is to maintain that position.

It’s possible to both know and not know something.

This is a no-win situation. If the project goes badly, that’s not good for Clearleft or the client. But if the project goes well, that’s not good for the world.

There’s probably not much I can do about this particular project at this point. But I can at least try to make sure that Clearleft doesn’t take on work like this again.

2.5.6

The Competition and Markets Authority (CMA) recently published an interim report on their mobile ecosystems market study. It’s well worth reading, especially the section on competition in the supply of mobile browsers:

On iOS devices, Apple bans the use of alternative browser engines – this means that Apple has a monopoly over the supply of browser engines on iOS. It also chooses not to implement – or substantially delays – a wide range of features in its browser engine. This restriction has 2 main effects:

  • limiting rival browsers’ ability to differentiate themselves from Safari on factors such as speed and functionality, meaning that Safari faces less competition from other browsers than it otherwise could do; and
  • limiting the functionality of web apps – which could be an alternative to native apps as a means for mobile device users to access online content – and thereby limits the constraint from web apps on native apps. We have not seen compelling evidence that suggests Apple’s ban on alternative browser engines is justified on security grounds.

That last sentence is a wonderful example of British understatement. Far from protecting end users from security exploits, Apple have exposed everyone on iOS to all of the security issues of Apple’s Safari browser (regardless of what browser the user thinks they are using).

The CMA are soliciting responses to their interim report:

To respond to this consultation, please email or post your submission to:

Email: mobileecosystems@cma.gov.uk

Post:

Mobile Ecosystems Market Study
Competition and Markets Authority
25 Cabot Square
London
E14 4QZ

Please respond by no later than 5pm GMT on 7 February 2022.

I encourage you to send a response before this coming Monday. This is the email I’ve sent.

Hello,

This response is regarding competition in the supply of mobile browsers and contains no confidential information.

I read your interim report with great interest.

As a web developer and the co-founder of a digital design agency, I could cite many reasons why Apple’s moratorium on rival browser engines is bad for business. But the main reason I am writing to you is as a consumer and a user of Apple’s products.

I own two Apple computing devices: a laptop and a phone. On both devices, I can install apps from Apple’s App Store. But on my laptop I also have the option to download and install an application from elsewhere. I can’t do this on my phone. That would be fine if my needs were met by what’s available in the app store. But clause 2.5.6 of Apple’s app store policy restricts what is available to me as a consumer.

On my laptop I can download and install Mozilla’s Firefox or Google’s Chrome browsers. On my phone, I can install something called Firefox and something called Chrome. But under the hood, they are little more than skinned versions of Safari. I’m only aware of this because I’m au fait with the situation. Most of my fellow consumers have no idea that when they install the app called Firefox or the app called Chrome from the app store on their phone, they are being deceived.

It is this deception that bothers me most.

Kind regards,

Jeremy Keith

To be fair to Apple, this deception requires collusion from Mozilla, Google, Microsoft, and other browser makers. Nobody’s putting a gun to their heads and forcing them to ship skinned versions of Safari that bear only cosmetic resemblance to their actual products.

But of course it would be commercially unwise to forego the app store as a distribution channel, even if the only features they can ship are superficial ones like bookmark syncing.

Still, imagine what would happen if Mozilla, Google, and Microsoft put their monies where their mouths are. Instead of just complaining about the unjust situation, what if they actually took the financial hit and pulled their faux-browsers from the iOS app store?

If this injustice is as important as representatives from Google, Microsoft, and Mozilla claim it is, then righteous indignation isn’t enough. Principles without sacrifice are easy.

If nothing else, it would throw the real situation into light and clear up the misconception that there is any browser choice on iOS.

I know it’s not going to happen. I also know I’m being a hypocrite by continuing to use Apple products in spite of the blatant misuse of monopoly power on display. But still, I wanted to plant that seed. What if Microsoft, Google, and Mozilla were the ones who walk away from Omelas?

Browsers

I mentioned recently that there might be quite a difference in tone between my links and my journal here on my website:

’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.

My journal entries have been even more specifically negative of late. I’ve been bitchin’ and moanin’ about web browsers. But at least I’m an equal-opportunities bitcher and moaner.

I wish my journal weren’t so negative, but my mithering behaviour has been encouraged. On more than one occasion, someone I know at a browser company has taken me aside to let me know that I should blog about any complaints I might have with their browser. It sounds counterintuitive, I know. But these blog posts can give engineers some ammunition to get those issues prioritised and fixed.

So my message to you is this: if there’s something about a web browser that you’re not happy with (or, indeed, if there’s something you’re really happy with), take the time to write it down and publish it.

Publish it on your website. You could post your gripes on Twitter but whinging on Jack’s website is just pissing in the wind. And I suspect you also might put a bit more thought into a blog post on your own site.

I know it’s a cliché to say that browser makers want to hear from developers—and I’m often cynical about it myself—but they really do want to know what we think. Share your thoughts. I’ll probably end up linking to what you write.

Upgrades and polyfills

I started getting some emails recently from people having issues using The Session. The issues sounded similar—an interactive component that wasn’t, well …interacting.

When I asked what device or browser they were using, the answer came back the same: Safari on iPad. But not a new iPad. These were older iPads running older operating systems.

Now, remember, even if I wanted to recommend that they use a different browser, that’s not an option:

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to.

It gets worse. Not only is there no choice when it comes to rendering engines on iOS, but the rendering engine is also tied to the operating system.

If you’re on an old Apple laptop, you can at least install an up-to-date version of Firefox or Chrome. But you can’t install an up-to-date version of Safari. An up-to-date version of Safari requires an up-to-date version of the operating system.

It’s the same on iOS devices—you can’t install a newer version of Safari without installing a newer version of iOS. But unlike the laptop scenario, you can’t install any version of Firefox or Chrome.

It’s disgraceful.

It’s particularly frustrating when an older device can’t upgrade its operating system. Operating system upgrades generally have hardware requirements. If your device doesn’t meet those requirements, you can’t upgrade your operating system. That wouldn’t matter so much except for the Safari issue. Without an upgraded operating system, your web browsing experience stagnates unnecessarily.

For want of a nail

  • A website feature isn’t working so
  • you need to upgrade your browser which means
  • you need to upgrade your operating system but
  • you can’t upgrade your operating system so
  • you need to buy a new device.

Apple doesn’t allow other browsers to be installed on iOS devices so people have to buy new devices if they want to use the web. Handy for Apple. Bad for users. Really bad for the planet.

It’s particularly galling when it comes to iPads. Those are exactly the kind of casual-use devices that shouldn’t need to be caught in the wasteful cycle of being used for a while before getting thrown away. I mean, I get why you might want to have a relatively modern phone—a device that’s constantly with you that you use all the time—but an iPad is the perfect device to just have lying around. You shouldn’t feel pressured to have the latest model if the older version still does the job:

An older tablet makes a great tableside companion in your living room, an effective e-book reader, or a light-duty device for reading mail or checking your favorite websites.

Hang on, though. There’s another angle to this. Why should a website demand an up-to-date browser? If the website has been built using the tried and tested approach of progressive enhancement, then everyone should be able to achieve their goals regardless of what browser or device or operating system they’re using.

On The Session, I’m using progressive enhancement and feature detection everywhere I can. If, for example, I’ve got some JavaScript that’s going to use querySelectorAll and addEventListener, I’ll first test that those methods are available.

if (!document.querySelectorAll || !window.addEventListener) {
  // doesn't cut the mustard.
  return;
}

I try not to assume that anything is supported. So why was I getting emails from people with older iPads describing an interaction that wasn’t working? A JavaScript error was being thrown somewhere and—because of JavaScript’s brittle error-handling—that was causing all the subsequent JavaScript to fail.

I tracked the problem down to a function that was using some DOM methods—matches and closest—as well as the relatively recent JavaScript forEach method. But I had polyfills in place for all of those. Here’s the polyfill I’m using for matches and closest. And here’s the polyfill I’m using for forEach.
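
The first of those amounts to just a few lines, something along the lines of the widely-circulated MDN version:

if (!Element.prototype.matches) {
  Element.prototype.matches = Element.prototype.msMatchesSelector ||
    Element.prototype.webkitMatchesSelector;
}
if (!Element.prototype.closest) {
  Element.prototype.closest = function (selector) {
    // Walk up the tree until an ancestor matches the selector.
    var element = this;
    while (element && element.nodeType === 1) {
      if (element.matches(selector)) return element;
      element = element.parentElement || element.parentNode;
    }
    return null;
  };
}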

Then I spotted the problem. I was using forEach to loop through the results of querySelectorAll. But the polyfill works on arrays. Technically, the output of querySelectorAll isn’t an array. It looks like an array, it quacks like an array, but it’s actually a node list.

So I added this polyfill from Chris Ferdinandi.
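
The gist of it is to give node lists a forEach method of their own, borrowing the one that arrays already have:

// If NodeList exists but lacks forEach, the array version works as-is.
if (window.NodeList && !NodeList.prototype.forEach) {
  NodeList.prototype.forEach = Array.prototype.forEach;
}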

That did the trick. I checked with the people with those older iPads and everything is now working just fine.

For the record, here’s the small collection of polyfills I’m using. Polyfills are supposed to be temporary. At some stage, as everyone upgrades their browsers, I should be able to remove them. But as long as some people are stuck with using an older browser, I have to keep those polyfills around.

I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.

Apple may argue that their browser rendering engine and their operating system are deeply intertwingled. That line of defence worked out great for Microsoft in the ‘90s.

Web browsers on iOS

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to. The app store doesn’t allow other browsers to be listed. The apps called Chrome and Firefox are little more than skinned versions of Safari.

If you’re a web developer, there are two possible reactions to hearing this. One is “Duh! Everyone knows that!”. The other is “What‽ I never knew that!”

If you fall into the first category, I’m guessing you’ve been a web developer for a while. The fact that Safari is the only browser on iOS devices is something you’ve known for years, and something you assume everyone else knows. It’s common knowledge, right?

But if you’re relatively new to web development—heck, if you’ve been doing web development for half a decade—you might fall into the second category. After all, why would anyone tell you that Safari is the only browser on iOS? It’s common knowledge, right?

So that’s the situation. Safari is the only browser that can run on iOS. The obvious follow-on question is: why?

Apple at this point will respond with something about safety and security, which are certainly important priorities. So let me rephrase the question: why on iOS?

Why can I install Chrome or Firefox or Edge on my Macbook running macOS? If there are safety or security reasons for preventing me from installing those browsers on my iOS device, why don’t those same concerns apply to my macOS device?

At one time, the mobile operating system—iOS—was quite different to the desktop operating system—OS X. Over time the gap has narrowed. At this point, the operating systems are converging. That makes sense. An iPhone, an iPad, and a Macbook aren’t all that different apart from the form factor. It makes sense that computing devices from the same company would share an underlying operating system.

As this convergence continues, the browser question is going to have to be decided in one direction or the other. As it is, Apple’s laptops and desktops strongly encourage you to install software from their app store, though it is still possible to install software by other means. Perhaps they’ll decide that their laptops and desktops should only be able to install software from their app store—a decision they could justify with safety and security concerns.

Imagine that situation. You buy a computer. It comes with one web browser pre-installed. You can’t install a different web browser on your computer.

You wouldn’t stand for it! I mean, Microsoft got fined for anti-competitive behaviour when they pre-bundled their web browser with Windows back in the 90s. You could still install other browsers, but just the act of pre-bundling was seen as an abuse of power. Imagine if Windows never allowed you to install Netscape Navigator?

And yet that’s exactly the situation in 2020.

You buy a computing device from Apple. It might be a Macbook. It might be an iPad. It might be an iPhone. But you can only install your choice of web browser on one of those devices. For now.

It is contradictory. It is hypocritical. It is indefensible.

Unity

It’s official. Microsoft’s Edge browser is running on the Blink rendering engine and it’s available now.

Just over a year ago, I wrote about my feelings on this decision:

I’m sure the decision makes sound business sense for Microsoft, but it’s not good for the health of the web.

The importance of browser engine diversity is beautifully illustrated (literally) in Rachel’s The Ecological Impact of Browser Diversity.

But I was chatting to Amber the other day, and I mentioned how I can see the theoretical justification for Microsoft’s decision …even if I don’t quite buy it myself.

Picture, if you will, something I’ll call the bar of unity. It’s a measurement of how much collaboration is happening between browser makers.

In the early days of the web, the bar of unity was very low indeed. The two main browser vendors—Microsoft and Netscape—not only weren’t collaborating, they were actively splintering the languages of the web. One of them would invent a new HTML element, and the other would invent a completely different element to do the same thing (remember abbr and acronym?). One of them would come up with one model for interacting with a document through JavaScript, and the other would come up with a completely different model to do the same thing (remember document.all and document.layers?).

There wasn’t enough collaboration. Our collective anger at this situation led directly to the creation of The Web Standards Project.

Eventually, those companies did start collaborating on standards at the W3C. The bar of unity was raised.

This has been the situation for most of the web’s history. Different browser makers agreed on standards, but went their own separate ways on implementation. That’s where they drew the line.

Now that line is being redrawn. The bar of unity is being raised. Now, a number of separate browser makers—Google, Samsung, Microsoft—not only collaborate on standards but also on implementation, sharing a codebase.

The bar of unity isn’t right at the top. Browsers can still differentiate in their user interfaces. Edge, for example, can—and does—offer very sensible defaults for blocking trackers. That’s much harder for Chrome to do, given that Google are amongst the worst offenders.

So these browsers are still competing, but the competition is no longer happening at the level of the rendering engine.

I can see how this looks like a positive development. In fact, from this point of view, Mozilla are getting in the way of progress by having a separate codebase (yes, this is a genuinely-held opinion by some people).

On the face of it, more unity sounds good. It sounds like more collaboration. More cooperation.

But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!

There’s a sweet spot somewhere in between where there’s a base level of agreement and cooperation, but there’s also plenty of room for disagreement and opposition. Right now, the browser landscape is just about still in that sweet spot. It’s like a two-party system where one party has a crushing majority. Checks and balances exist, but they’re in peril.

Firefox is one of the last remaining representatives offering an alternative. The least we can do is support it.

Browsers

Microsoft’s Edge browser is going to switch its rendering engine over to Chromium.

I am deflated and disappointed.

There’s just no sugar-coating this. I’m sure the decision makes sound business sense for Microsoft, but it’s not good for the health of the web.

Very soon, the vast majority of browsers will have an engine that’s either Blink or its cousin, WebKit. That may seem like good news for developers when it comes to testing, but trust me, it’s a sucky situation for innovation. Instead of a diverse browser ecosystem, we’re going to end up with incest and inbreeding.

There’s one shining exception though. Firefox. That browser was originally created to combat the seemingly unstoppable monopolistic power of Internet Explorer. Now that Microsoft are no longer in the rendering engine game, Firefox is once again the only thing standing in the way of a complete monopoly.

I’ve been using Firefox as my main browser for a while now, and I can heartily recommend it. You should try it (and maybe talk to your relatives about it at Christmas). At this point, which browser you use no longer feels like it’s just about personal choice—it feels part of something bigger; it’s about the shape of the web we want.

Jeffrey wrote that browser diversity starts with us:

The health of Firefox is critical now that Chromium will be the web’s de facto rendering engine.

Even if you love Chrome, adore Gmail, and live in Google Docs or Analytics, no single company, let alone a user-tracking advertising giant, should control the internet.

Andy Bell also writes about browser diversity:

I’ll say it bluntly: we must support Firefox. We can’t, as a community, allow this browser engine monopoly. We must use Firefox as our main dev browsers; we must encourage our friends and families to use it, too.

Yes, it’s not perfect, nor are Mozilla, but we can help them to develop and grow by using Firefox and reporting issues that we find. If we just use and build for Chromium, which is looking likely (cough Internet Explorer monopoly cough), then Firefox will fall away and we will then have just one major engine left. I don’t ever want to see that.

Uncle Dave says:

If the idea of a Google-driven Web is of concern to you, then I’d encourage you to use Firefox. And don’t be a passive consumer; blog, tweet, and speak about its killer features. I’ll start: Firefox’s CSS Grid, Flexbox, and Variable Font tools are the best in the business.

Mozilla themselves came out all guns blazing when they said Goodbye, EdgeHTML:

Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.

Tim describes the situation as risking a homogeneous web:

I don’t think Microsoft using Chromium is the end of the world, but it is another step down a slippery slope. It’s one more way of bolstering the influence Google currently has on the web.

We need Google to keep pushing the web forward. But it’s critical that we have other voices, with different viewpoints, to maintain some sense of balance. Monocultures don’t benefit anyone.

Andre Alves Garzia writes that while we Blink, we lose the web:

Losing engines is like losing languages. People may wish that everyone spoke the same language, they may claim it leads to easier understanding, but what people fail to consider is that this leads to losing all the culture and way of thought that that language produced. If you are a Web developer smiling and happy that Microsoft might be adopting Chrome, and this will make your work easier because it will be one less browser to test, don’t be! You’re trading convenience for diversity.

I like that analogy with language death. If you prefer biological analogies, it’s worth revisiting this fantastic post by Rachel back in August—before any of us knew about Microsoft’s decision—all about the ecological impact of browser diversity:

Let me be clear: an Internet that runs only on Chrome’s engine, Blink, and its offspring, is not the paradise we like to imagine it to be.

That post is a great history lesson, documenting how things can change, and how decisions can have far-reaching unintended consequences.

So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it’s not likely to be replaced.

Acknowledgements

It feels a little strange to refer to Going Offline as “my” book. I may have written most of the words in it, but it was only thanks to the work of others that they ended up being the right words in the right order in the right format.

I’ve included acknowledgements in the book, but I thought it would be good to reproduce them here in the form of hypertext…

Everyone should experience the joy of working with Katel LeDû and Lisa Maria Martin. From the first discussions right up until the final last-minute tweaks, they were unflaggingly fun to collaborate with. Thank you, Katel, for turning my idea into reality. Thank you, Lisa Maria, for turning my initial mush of words into a far more coherent mush of words.

Jake Archibald and Amber Wilson were the best of technical editors. Jake literally wrote the spec on service workers so I knew I could rely on him to let me know whenever I made any factual missteps. Meanwhile Amber kept me on the straight and narrow, letting me know whenever the writing was becoming unclear. Thank you both for being so generous with your time.

Thanks to my fellow Clearlefty Danielle Huntrods for giving me feedback as the book developed.

Finally, I want to express my heartfelt thanks to everyone who has ever taken the time to write on their website about their experiences with service workers. Lyza Gardner, Ire Aderinokun, Una Kravets, Mariko Kosaka, Jason Grigsby, Ethan Marcotte, Mike Riethmuller, and others inspired me with their generosity. Thank you to everyone who’s making the web better through such kind acts of openness. To quote the original motto of the World Wide Web project, let’s share what we know.

Edge words

I really enjoyed last year’s Edge conference so I made sure not to miss this year’s event, which took place last weekend.

The format was a little different this time ’round. Last year the whole day was taken up with panels. Now, panels are often rambling, cringeworthy affairs, but Edge Conf is one of the few events that does panels well: they’re run on a tight schedule and put together with lots of work in advance. At this year’s Edge, the morning was taken up with these tightly-run panels as usual, but the afternoon consisted of more Barcamp-like breakout sessions.

I’ve got to be honest: I don’t think the new format worked that well. The breakout sessions didn’t have the true flexibility that you get with an unconference schedule, so there was no opportunity to merge similarly-themed sessions. There was, for example, a session on components at the same time as a session on accessibility in web components.

That highlights the other issue: FOMO. I’m really not a fan of multi-track events; there were so many sessions that sounded really interesting, but I couldn’t clone myself and go to all of them at once.

But, like I said, the first half of the day was taken up with four sequential (rather than parallel) panels and they were all excellent. All of the moderators did a fantastic job, and I was fortunate enough to sit in on the progressive enhancement panel expertly moderated by Lyza.

The event is called Edge for a reason. There is a rarefied atmosphere—and not just because of the broken-down air conditioning. This is a room full of developers on the cutting edge of web development technologies. Being at Edge Conf means being in a bubble. And being in a bubble is absolutely fine as long as you’re aware you’re in a bubble. It would be problematic if anyone were to mistake the audience and the discussions at Edge as being in any way representative of typical working web devs.

One of the most insightful comments of the day came from Christian who said, “Yes, but this is Edge Conf.” You’re going to need some context for that quote, so here it is…

On the web components panel that Christian was moderating, Alex was making a point about the ubiquity of tools—“Tooling will save you”, he said—and he asked for a show of hands from the audience on who was not using some particular tooling technology: transpilers, package managers, build tools, I can’t remember the specific question. Nobody put their hand up. “See?” asked Alex. “Yes”, said Christian, “but this is Edge Conf.”

Now, while I wasn’t keen on the format of the afternoon with its multiple simultaneous breakout sessions, that doesn’t mean I didn’t enjoy the ones I plumped for. Quite the opposite. The last breakout session of the day, again expertly moderated by Lyza, was particularly great.

The discussion was all about progressive enhancement. There seemed to be a general consensus that we’re all 100% committed to the results of progressive enhancement—greater availability, wider reach, and better performance—but that the term itself is widely misunderstood as “making all of your functionality work even with JavaScript switched off”. This misunderstanding couldn’t be further from the truth:

  1. It’s not about making all of your functionality available; it’s about making your core functionality available: everything else can be considered an enhancement and it’s perfectly fine if not everyone gets that enhancement.
  2. This isn’t about switching JavaScript off; it’s about any particular technology not being available for reasons we can’t foresee (network issues, browser issues, whatever it may be). There’s a sketch of this below.
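
To make that distinction concrete, here’s a minimal sketch (openLightbox is a hypothetical enhancement; the link keeps working without it):

// Core functionality: a plain HTML link to the photo always works.
var link = document.querySelector('a.photo');
// Enhancement: if the browser has what we need, upgrade the experience.
if (link && window.addEventListener) {
  link.addEventListener('click', function (event) {
    event.preventDefault();
    openLightbox(link.href); // hypothetical fancy overlay
  });
}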

And yet the misunderstanding persists. For that reason, most of the people in the discussion at Edge Conf were in favour of simply dropping the term progressive enhancement and instead focusing on terms like availability and access. Tim writes:

I’m not sure what we call it now. Maybe we do need another term to get people to move away from the “progressive enhancement = working without JS” baggage that distracts from the real goal.

And Stuart writes:

So I’m not going to be talking about progressive enhancement any more. I’m going to be talking about availability. About reach. About my web apps being for everyone even when the universe tries to get in the way.

But Jason writes:

I completely disagree that we should change nomenclature because there exists some small segment of Web designers unwilling to expand their development toolbox. I think progressive enhancement—the term—remains useful, descriptive, and appropriate.

I’m torn. On the one hand, I agree with Jason. The term “progressive enhancement” is a great descriptor. But on the other hand, I don’t want to end up like that guy who’s made it his life’s work to change every instance of the phrase “comprises of” to “comprises” (or “consists of”) on Wikipedia. Technically, he’s correct. But it doesn’t sound like a fun way to spend your days.

I guess my worry is, if I write an article or give a presentation, and I title it something to do with progressive enhancement, am I going to alienate and put off the very audience I’m trying to reach? But if I title it something else, am I tricking people?

Words are hard.

100 words 097

It’s the weekend …and I got up at the crack of dawn to head to London. Yes, on this beautiful sunny day, I elected to take the commuter train up to the big city to spend the day trapped inside a building where the air conditioning crapped out. Sweaty!

But it was worth it. I was at the Edge conference, which is always an intense dose of condensed nerdery. This year I participated in one of the panels: a discussion on progressive enhancement expertly moderated by Lyza. She also led a break-out session on the same topic later on.

100 words 007

I’m staying at the Edgewater hotel in Seattle, an unusual structure that is literally on the water, giving it a nautical atmosphere. The views out on the Puget Sound are quite lovely.

Inside, the hotel has more of a Twin Peaks vibe. It feels less like a hotel and more like a lodge.

The hotel is clearly proud of the many rock stars it has hosted over the years. As you settle into your cosy room, you can imagine what it was like when the Beatles were fishing from their balcony, or Led Zeppelin were doing unspeakable things with mudsharks.

Notes from the edge

I went up to London for the Edge Conference on Friday. It’s not your typical conference. Instead of talks, there are panels, but not the crap kind, where nobody says anything of interest: these panels are ruthlessly curated and prepared. There’s lots of audience interaction too, but again, not the crap kind, where one or two people dominate the discussion with their own pet topics: questions are submitted ahead of time, and then you are called upon to ask it at the right moment. It’s like Question Time for the web.

Components

The first panel was on that hottest of topics: Web Components. Peter Gasston kicked it off with a superb introduction to the subject. Have a read of his equally-excellent article in Smashing Magazine to get the gist.

Needless to say, this panel covered similar ground to the TAG meetup I attended a little while back, and left me with similar feelings: I’m equal parts excited and nervous; optimistic and worried. If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once. And I think that’ll be (mostly) okay.

I butted into the discussion when the topic of accessibility came up. I was a little worried about what I was hearing, which was mainly, “Oh, ARIA takes care of the accessibility.” I felt like Web Components were passing the buck to ARIA, which would be fine if it weren’t for the fact that ARIA can’t cover all the possible use-cases of Web Components.

I chatted about this with Derek and Nicole during the break, but I’m not sure if I was articulating my thoughts very well, so I’ll have another stab at it here:

Let me set the scene for Web Components…

Historically, HTML has had a limited vocabulary for expressing interface widgets—mostly a bunch of specialised form fields like, say, the select element. The plus side is that there’s a consensus of understanding among the browsers, so you don’t have to explain what a select element does; the browsers already know. The downside is that whenever we want to add a new interface element like input type="range", it takes time to get into browsers and through the standards process. Web Components allow you to conjure up interface elements, and you don’t have to lobby browser makers or standards groups in order to make browsers understand your newly-minted element: you provide all the behavioural and styling instructions in one bundle.

So Web Components make use of HTML, JavaScript, and (scoped) CSS. The possibility space for the HTML is infinite: if you need an element that doesn’t exist, you just invent it. The possibility space for the JavaScript is pretty close to infinite: it’s a Turing-complete language that can be wrangled to do just about anything. The possibility space for CSS isn’t infinite, but it’s pretty darn big: there’s not much you can’t do with it at this point.

What’s missing from that bundle of HTML, JavaScript, and CSS are hooks for assistive technology. Up until now, this is something we’ve mostly left to the browser. We don’t have to include any hooks for assistive technology when we use a select element because the browser knows what it is and can expose that knowledge to the assistive technology. If we’re going to start making up our own interface elements, we now have to take on the responsibility of providing that information to assistive technology.

How do we do that? Well, right now, our only option is to use ARIA …but the possibility space defined by ARIA is much, much smaller than HTML, JavaScript, or CSS.
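
Here’s a rough sketch of the kind of thing I mean, using a made-up fancy-tabs element (written with the current customElements API):

// The markup, styling, and behaviour all travel with the component,
// but assistive technology only learns what the element is through
// the ARIA roles we bolt on by hand.
class FancyTabs extends HTMLElement {
  connectedCallback() {
    this.setAttribute('role', 'tablist');
    Array.prototype.forEach.call(this.children, function (child) {
      child.setAttribute('role', 'tab');
    });
  }
}
customElements.define('fancy-tabs', FancyTabs);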

That’s not a criticism of ARIA: that’s the way it was designed. It’s a reactionary technology, designed to plug the gaps where the native semantics of HTML just don’t cut it. The vocabulary of ARIA was created by looking at the kinds of interface elements people are making—tabs, sliders, and so on. That’s fine, but it can’t scale to keep pace with Web Components.

The problem that Web Components solve—the fact that it currently takes too long to get a new interface element into browsers—doesn’t have a corresponding solution when it comes to accessibility hooks. Just adding more and more predefined ARIA roles won’t cut it—we need some kind of extensible accessibility that matches the expressive power of Web Components. We don’t need a bigger vocabulary in ARIA, we need a way to define our own vocabulary—an extensible ARIA, if you will.

Hmmm… I’m still not sure I’m explaining myself very well.

Anyway, I just want to make sure that accessibility doesn’t get left behind (again!) in our rush to create a new solution to our current problems. With Web Components still in their infancy, this feels like the right time to raise these concerns.

That highlights another issue, one that Nicole picked up on. It’s really important that the extensible web community and the accessibility community talk to each other.

Frankly, the accessibility community can be its own worst enemy sometimes. So don’t get me wrong: I’m not bringing up my concerns about the accessibility of Web Components in order to cry “fail!”—I just want to make sure that it’s on the table (and I’m glad that Alex is one of the people driving Web Components—his history with Dojo reassures me that we can push the boundaries of interface widgets on the web without leaving accessibility behind).

Anyway …that’s enough about that. I haven’t mentioned all the other great discussions that took place at Edge Conference.

Developer Tooling

The Web Components panel was followed by a panel on developer tools. This was dominated by representatives from different browsers, each touting their own set of in-browser tools. But the person who I really wanted to rally behind was Kenneth Auchenberg. He quite rightly asks why our developer tools and our text editors are two different apps. And rather than try to put text editors into developer tools, what we really want is to pull developer tools into our text editors …all the developer tools from all the browsers, not just one set of developer tools from one specific browser.

If you haven’t seen Kenneth’s presentation from Full Frontal, I urge you to watch it or listen to it.

I had my hand up to jump into the discussion towards the end, but time ran out so I didn’t get a chance. Paul came over afterwards and asked what I was going to say. Here’s what I told him…

I’m fascinated by the social dynamics around how browsers get made. This is an area where different companies are simultaneously collaborating and competing.

Broadly speaking, the feature set of a web browser can be divided into two buckets:

In one bucket, you’ve got the support for standards like HTML, CSS, JavaScript. Now, individual browsers might compete on how quickly or how thoroughly they get those standards implemented, but at this point, there’s no disagreement about the fact that proprietary crap is bad, standards are good, and that no matter how painful the process can be, browser makers all need to get together and work on standards together. Heck, even Apple can’t avoid collaborating on this stuff.

In the other bucket, you’ve got all the stuff that browsers compete against each other with: speed, security, the user interface, etc. A lot of this takes place behind closed doors, and that’s fine. There’s no real need for browser makers to collaborate on this stuff, and it could even hurt their competitive advantage if they did collaborate.

But here’s the problem: developer tools seem to be coming out of that second bucket instead of the first. There doesn’t seem to be much communication between the browser makers on developer tools. That’s fine if you see developer tools as an opportunity for competition, but it’s lousy if you see developer tools as an opportunity for interoperability.

This is why Kenneth’s work is so important. He’s crying out for more interoperability between browsers when it comes to developer tools. Why can’t they all use the same low-level APIs under the hood? Then they can still compete on how pretty their dev tools look, without making life miserable for developers who want to move quickly between browsers.

As painful as it might be, I think that browser makers should get together in some semi-formalised way to standardise this stuff. I don’t think that the W3C or the WHATWG are necessarily the right places for this kind of standardisation, but any kind of official cooperation would be good.

Build Process

The panel on build processes for front-end development kicked off with Gareth saying a few words. Some of those words included the sentence:

Make is probably older than you.

Cue glares from me and Scott.

Gareth also said that making websites means making software. We’re all making software—live with it.

This made me nervous. I’ve always felt that one of the great strengths of the web has been its low barrier to entry. The idea of a web that can only be made by qualified software developers doesn’t sound like a good thing to me.

Fortunately, things got cleared up later on. Somebody else asked a question about whether the barrier to entry was being raised by the complexity of tools like preprocessors, compilers, and transpilers. The consensus of the panel was that these are power tools for power users. So if someone were learning to make a website from scratch, you wouldn’t start them off with, say, Sass, without first learning CSS.

It was a fun panel, made particularly enjoyable by the presence of Kyle Simpson. I like the cut of his jib. Alas, I didn’t get the chance to tell him that in person. I had to duck out of the afternoon’s panels to get back to Brighton due to unforeseen family circumstances. But I did manage to catch some of the later panels on the live stream.

Closing thoughts

A common thread I noticed amongst many of the panels was a strong bias for decentralisation, rather than collaboration. That was most evident with Web Components—the whole point is that you can make up your own particular solution rather than waiting for a standards body. But it was also evident in the Developer Tools line-up, where each browser maker is reinventing the same wheels. And when it came to Build Process, it struck me that everyone is scratching their own itch instead of getting together to work on a shared solution.

There’s nothing wrong with that kind of Darwinian approach to solving our problems, but it does seem a bit wasteful. Mairead Buchan was at Edge Conference too and she noticed the same trend. Sounds like she’s going to do something about it too.

Facing the future

There is much hand-wringing in the media about the impending death of journalism, usually blamed on the rise of the web or more specifically bloggers. I’m sympathetic to their plight, but sometimes journalists are their own worst enemy, especially when they publish badly-researched articles that fuel moral panic with little regard for facts (if you’ve ever been in a newspaper article yourself, you’ll know that you’re lucky if they manage to spell your name right).

Exhibit A: an article published in The Guardian called How I became a Foursquare cyberstalker. Actually, the article isn’t nearly as bad as the comments, which take ignorance and narrow-mindedness to a new level.

Fortunately Ben is on hand to set the record straight. He wrote Concerning Foursquare and communicating privacy. Far from being a lesser form of writing, this blog post is more accurate than the article it is referencing, helping to balance the situation with a different perspective …and a nice big dollop of facts and research. Ben is actually quite kind to The Guardian article but, in my opinion, his own piece is more interesting and thoughtful.

Exhibit B: an article by Jeffrey Rosen in The New York Times called The Web Means the End of Forgetting. That’s a bold title. It’s also completely unsupported by the contents of the article. The article contains anecdotes about people getting into trouble about something they put on the web, and—even though the consequences for that action played out in the present—he talks about the permanent memory bank of the Web and writes:

The fact that the Internet never seems to forget is threatening, at an almost existential level, our ability to control our identities.

Bollocks. Or, to use the terminology of Wikipedia, citation needed.

Scott Rosenberg provides the necessary slapdown, asking Does the Web remember too much — or too little?:

Rosen presents his premise — that information once posted to the Web is permanent and indelible — as a given. But it’s highly debatable. In the near future, we are, I’d argue, far more likely to find ourselves trying to cope with the opposite problem: the Web “forgets” far too easily.

Exactly! I get irate whenever I hear the truism that the web never forgets presented without any supporting data. It’s right up there with eskimos have fifty words for snow and people in the middle ages thought that the world was flat. These falsehoods are irritating at best. At worst, as is the case with the myth of the never-forgetting web, the lie is downright dangerous. As Rosenberg puts it:

I’m a lot less worried about the Web that never forgets than I am about the Web that can’t remember.

That’s a real problem. And yet there’s no moral panic about the very real threat that, once digitised, our culture could be in more danger of being destroyed. I guess that story doesn’t sell papers.

This problem has a number of thorns. At the most basic level, there’s the issue of linkrot. I love the fact that the web makes it so easy for people to publish anything they want. I love that anybody else can easily link to what has been published. I hope that the people doing the publishing consider the commitment they are making by putting a linkable resource on the web.

As I’ve said before, a big part of this problem lies with the DNS system:

Domain names aren’t bought, they are rented. Nobody owns domain names, except ICANN.

I’m not saying that we should ditch domain names. But there’s something fundamentally flawed about a system that thinks about domain names in time periods as short as a year or two.

Then there’s the fact that so much of our data is entrusted to third-party sites. There’s no guarantee that those third-party sites give a rat’s ass about the long-term future of our data. Quite the opposite. The callous destruction of Geocities by Yahoo is a testament to how little our hopes and dreams mean to a company concerned with the bottom line.

We can host our own data but that isn’t quite as easy as it should be. And even with the best of intentions, it’s possible to have the canonical copies wiped from the web by accident. I’m very happy to see services like Vaultpress come on the scene:

Your WordPress site or blog is your connection to the world. But hosting issues, server errors, and hackers can wipe out in seconds what took years to build. VaultPress is here to protect what’s most important to you.

The Internet Archive is also doing a great job but Brewster Kahle shouldn’t have to shoulder the entire burden. Dave Winer has written about the idea of future-safe archives:

We need one or more institutions that can manage electronic trusts over very long periods of time.

The institutions need to be long-lived and have the technical know-how to manage static archives. The organizations should need the service themselves, so they would be likely to advance the art over time. And the cost should be minimized, so that the most people could do it.

The Library of Congress has its Digital Preservation effort. Dan Gillmor reports on the recent three-day gathering of the institution’s partners:

It’s what my technology friends call a non-trivial task, for all kinds of technical, social and legal reasons. But it’s about as important for our future as anything I can imagine. We are creating vast amounts of information, and a lot of it is not just worth preserving but downright essential to save.

There’s an even longer-term problem with digital preservation. The very formats that we use to store our most treasured memories can become obsolete over time. This goes to the very heart of why standards such as HTML—the format I’m betting on—are so important.

Mark Pilgrim wrote about the problem of format obsolescence back in 2006. I found his experiences echoed more recently by Paul Gilster, author of the superb Centauri Dreams, one of my favourite websites. He usually concerns himself with challenges on an even longer timescale, like the construction of a feasible means of interstellar travel, but he gives a welcome long zoom perspective on digital preservation in Burying the Digital Genome, pointing to a project called PLANETS: Preservation and Long-term Access Through Networked Services.

Their plan involves the storage, not just of data, but of data formats such as JPEG and PDF: the equivalent of a Rosetta stone for our current age. A box containing format-decoding documentation has been buried in a bunker under the Swiss Alps. That’s a good start.

David Eagleman recently gave a talk for The Long Now Foundation entitled Six Easy Steps to Avert the Collapse of Civilization. Step two is Don’t lose things:

As proved by the destruction of the Alexandria Library and of the literature of Mayans and Minoans, “knowledge is hard won but easily lost.”

Long Now: Six Easy Steps to Avert the Collapse of Civilization on Huffduffer

I’m worried that we’re spending less and less time thinking about the long-term future of our data, our culture, and ultimately, our civilisation. Currently we are preoccupied with the real-time web: Twitter, Foursquare, Facebook …all services concerned with what’s happening right here, right now. The Long Now Foundation and Tau Zero Foundation offer a much-needed sense of perspective.

As with that other great challenge of our time—the alteration of our biosphere through climate change—the first step to confronting the destruction of our collective digital knowledge must be to think in terms greater than the local and the present.