Journal tags: mozilla


Ad tech

Back when South by Southwest wasn’t terrible, there used to be an annual panel called Browser Wars populated with representatives from the main browser vendors (except for Apple, obviously, who would never venture onto a stage outside of their own events).

I remember getting into a heated debate with the panelists during the 2010 edition. I was mad about web fonts.

Just to set the scene, web fonts didn’t exist back in 2010. That’s what I was mad about.

There was no technical reason why we couldn’t have web fonts. The reason we went without web fonts for years and years was that browser makers were concerned about font piracy on behalf of type foundries.

That’s nice and all, but as I said during that panel, I don’t recall any such concerns being raised for photographers when the img element was shipped. Neither was the original text-only web held back by writers’ legitimate fear of plagiarism.

My point was not that these concerns weren’t important, but that it wasn’t the job of web browsers to shore up existing business models. To use standards-speak, these concerns are orthogonal.

I’m reminded of this when I see browser makers shoring up the business of behavioural advertising.

I subscribe to the RSS feed of updates to Chrome. Not all of it is necessarily interesting to me, but all of it is supposedly aimed at developers. And yet, in amongst the posts about APIs and features, there’ll be something about the Orwellian-titled “privacy sandbox”.

This is only of interest to one specific industry: behavioural online advertising driven by surveillance and tracking. I don’t see any similar efforts being made for teachers, cooks, architects, doctors or lawyers.

It’s a ludicrous situation that I put down to the fact that Google, the company that makes Chrome, is also the company that makes its money from targeted advertising.

But then Mozilla started with the same shit.

Now, it’s one thing to roll out a new so-called “feature” to benefit behavioural advertising. It’s quite another to make it enabled by default. That’s a piece of deceptive design that has no place in Firefox. Defaults matter. Browser makers know this. It’s no accident that this “feature” was enabled by default.

This disgusts me.

It disgusts me all the more that it’s all for nothing. Notice that I’ve repeatedly referred to behavioural advertising. That’s the kind that relies on tracking and surveillance to work.

There is another kind of advertising. Contextual advertising is when you show an advertisement related to the content of the page the user is currently on. The advertiser doesn’t need to know anything about the user, just the topic of the page.

Conventional wisdom has it that behavioural advertising is much more effective than contextual advertising. After all, why would there be such a huge industry built on tracking and surveillance if it didn’t work? See, for example, this footnote by John Gruber:

So if contextual ads generate, say, one-tenth the revenue of targeted ads, Meta could show 10 times as many ads to users who opt out of targeting. I don’t think 10× is an outlandish multiplier there — given how remarkably profitable Meta’s advertising business is, it might even need to be higher than that.

Seems obvious, right?

But the idea that behavioural advertising works better than contextual advertising has no basis in reality.

If you think you know otherwise, Jon Bradshaw would like to hear from you:

Bradshaw challenges industry to provide proof that data-driven targeting actually makes advertising more effective – or in fact makes it worse. He’s spoiling for a debate – and has three deep, recent studies that show: broad reach beats targeting for incremental growth; that the cost of targeting outweighs the return; and that second and third party data does not outperform a random sample. First party data does beat the random sample – but contextual ads massively outperform even first party data. And they are much, much cheaper. Now, says Bradshaw, let’s see some counter-evidence from those making a killing.

If targeted advertising is going to get preferential treatment from browser makers, I too would like to see some evidence that it actually works.


Browsers

I mentioned recently that there might be quite a difference in tone between my links and my journal here on my website:

’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.

My journal entries have been even more specifically negative of late. I’ve been bitchin’ and moanin’ about web browsers. But at least I’m an equal-opportunities bitcher and moaner.

I wish my journal weren’t so negative, but my mithering behaviour has been encouraged. On more than one occasion, someone I know at a browser company has taken me aside to let me know that I should blog about any complaints I might have with their browser. It sounds counterintuitive, I know. But these blog posts can give engineers some ammunition to get those issues prioritised and fixed.

So my message to you is this: if there’s something about a web browser that you’re not happy with (or, indeed, if there’s something you’re really happy with), take the time to write it down and publish it.

Publish it on your website. You could post your gripes on Twitter but whinging on Jack’s website is just pissing in the wind. And I suspect you also might put a bit more thought into a blog post on your own site.

I know it’s a cliché to say that browser makers want to hear from developers—and I’m often cynical about it myself—but they really do want to know what we think. Share your thoughts. I’ll probably end up linking to what you write.

Facebook Container for Firefox

Firefox has a nifty extension—made by Mozilla—called Facebook Container. It does two things.

First of all, it sandboxes any of your activity while you’re on the facebook.com domain. The tab you’re in is isolated from all others.

Secondly, when you visit a site that loads a tracker from Facebook, the extension alerts you to its presence. For example, if a page has a share widget that would post to Facebook, a little fence icon appears over the widget warning you that Facebook will be able to track that activity.

It’s a nifty extension that I’ve been using for quite a while. Except now it’s gone completely haywire. That little fence icon is appearing all over the web wherever there’s a form with an email input. See, for example, the newsletter sign-up form in the footer of the Clearleft site. It’s happening on forms over on The Session too, despite the rigorous-bordering-on-paranoid security restrictions in place there.

Hovering over the fence icon displays this text:

If you use your real email address here, Facebook may be able to track you.

That is, of course, false. It’s also really damaging. One of the worst things that you can do in the security space is to cry wolf. If a concerned user is told that they can ignore that warning, you’re lessening the impact of all warnings, even serious legitimate ones.

Sometimes false positives are an acceptable price to pay for overall increased security, but in this case, the rate of false positives can only decrease trust.

I tried to find out how to submit a bug report about this but I couldn’t work it out (and I certainly don’t want to file a bug report in a review) so I’m writing this in the hopes that somebody at Mozilla sees it.

What’s really worrying is that this might not be considered a bug. The release notes for the version of the extension that came out last week say:

Email fields will now show a prompt, alerting users about how Facebook can track users by their email address.

Like …all email fields? That’s ridiculous!

I thought the issue might’ve been fixed in the latest release that came out yesterday. The release notes say:

This release addresses fixes a issue from our last release – the email field prompt now only displays on sites where Facebook resources have been blocked.

But the behaviour is unfortunately still there, even on sites like The Session or Clearleft that wouldn’t touch Facebook resources with a barge pole. The fence icon continues to pop up all over the web.

I hope this gets sorted soon. I like the Facebook Container extension and I’d like to be able to recommend it to other people. Right now I’d recommend the opposite—don’t install this extension while it’s behaving so overzealously. If the current behaviour continues, I’ll be uninstalling this extension myself.

Update: It looks like a fix is being rolled out. Fingers crossed!

Implementors

The latest newsletter from The History Of The Web is a good one: The Browser Engine That Could. It’s all about the history of browsers and, more specifically, rendering engines.

Jay quotes from a 1992 email by Tim Berners-Lee when there was real concern about having too many different browsers. But as history played out, the concern shifted to having too few different browsers.

I wrote about this—back when Edge switched to using Chromium—in a post called Unity where I compared it to political parties:

If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!

I talked about this some more with Brian and Stuart on the Igalia Chats podcast: Web Ecosystem Health (here’s the mp3 file).

In the discussion we dive deeper into the nuances of browser engine diversity: how it’s not the numbers that matter, but representation. The danger with one dominant rendering engine is that it would reflect one dominant set of priorities.

I think we’re starting to see this kind of battle between different sets of priorities playing out in the browser rendering engine landscape.

WebKit published a list of APIs they won’t be implementing in their current form because of privacy concerns around fingerprinting. Mozilla is taking the same stand. Google is much more gung-ho about implementing those APIs.

I think it’s safe to say that every implementor wants to ship powerful APIs and ensure security and privacy. The issue is with which gets priority. Using the language of principles and priorities, you could crudely encapsulate Apple and Mozilla’s position as:

Privacy, even over capability.

That design principle would pass the reversibility test. In fact, Google’s position might be represented as:

Capability, even over privacy.

I’m not saying Apple and Mozilla don’t value powerful APIs. I’m not saying Google doesn’t value privacy. I’m saying that Google’s priorities are different to Apple’s and Mozilla’s.

Alas, Alex is saying that Apple and Mozilla don’t value capability:

There is a contingent of browser vendors today who do not wish to expand the web platform to cover adjacent use-cases or meaningfully close the relevance gap that the shift to mobile has opened.

That’s very disappointing. It’s a cheap shot. As cheap as saying that, given Google’s business model, Chrome wouldn’t want to expand the web platform to provide better privacy and security.

CSS for all

There have been some great new CSS properties and values shipping in Firefox recently.

Miriam Suzanne explains the difference between the newer revert value and the older inherit, initial and unset values in a video on the Mozilla Developer channel:

display: revert;
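
To see how revert differs from the older keywords, here’s a little sketch of my own (the selectors and values are mine, not taken from Miriam’s video). A div normally gets display: block from the browser’s built-in stylesheet, and each keyword rolls the value back to a different place:

div { display: flex; }            /* author style */
div.revert { display: revert; }   /* back to the user-agent value: block */
div.initial { display: initial; } /* back to the spec’s initial value: inline, not block! */
div.unset { display: unset; }     /* display isn’t inherited, so this behaves like initial */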

In another video, Jen describes some new properties for styling underlines (on links, for example):

text-decoration-thickness:  0.1em;
text-decoration-color: red;
text-underline-offset: 0.2em;
text-decoration-skip-ink: auto;
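
Those declarations are shown bare; on a real site you’d hang them off a selector. Here’s a small sketch of my own (the selector, values, and hover effect are illustrative, not from Jen’s video):

a {
  text-underline-offset: 0.2em;
  text-decoration-skip-ink: auto;
}
a:hover {
  text-decoration-thickness: 0.15em;
  text-decoration-color: red;
}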

Great stuff!

As far as I can tell, all of these properties are available to you regardless of whether you are serving your website over HTTP or over HTTPS. That may seem like an odd observation to make, but I invite you to cast your mind back to January 2018. That’s when the Mozilla Security Blog posted about moving to secure contexts everywhere:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

(emphasis mine)

Buzz Lightyear says to Woody: Secure contexts …secure contexts everywhere!

Despite that “effective immediately” clause, I haven’t observed any of the new CSS properties added in the past two years to be restricted to HTTPS. I’m glad about that. I wrote about this announcement at the time:

I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement.

There’s no official word from the Mozilla Security Blog about any change to their two-year-old “effective immediately” policy, and the original blog post hasn’t been updated. Maybe we can all just pretend it never happened.

Browsers

Microsoft’s Edge browser is going to switch its rendering engine over to Chromium.

I am deflated and disappointed.

There’s just no sugar-coating this. I’m sure the decision makes sound business sense for Microsoft, but it’s not good for the health of the web.

Very soon, the vast majority of browsers will have an engine that’s either Blink or its cousin, WebKit. That may seem like good news for developers when it comes to testing, but trust me, it’s a sucky situation for innovation and agreement. Instead of a diverse browser ecosystem, we’re going to end up with incest and inbreeding.

There’s one shining exception though. Firefox. That browser was originally created to combat the seemingly unstoppable monopolistic power of Internet Explorer. Now that Microsoft are no longer in the rendering engine game, Firefox is once again the only thing standing in the way of a complete monopoly.

I’ve been using Firefox as my main browser for a while now, and I can heartily recommend it. You should try it (and maybe talk to your relatives about it at Christmas). At this point, which browser you use no longer feels like it’s just about personal choice—it feels part of something bigger; it’s about the shape of the web we want.

Jeffrey wrote that browser diversity starts with us:

The health of Firefox is critical now that Chromium will be the web’s de facto rendering engine.

Even if you love Chrome, adore Gmail, and live in Google Docs or Analytics, no single company, let alone a user-tracking advertising giant, should control the internet.

Andy Bell also writes about browser diversity:

I’ll say it bluntly: we must support Firefox. We can’t, as a community, allow this browser engine monopoly. We must use Firefox as our main dev browsers; we must encourage our friends and families to use it, too.

Yes, it’s not perfect, nor are Mozilla, but we can help them to develop and grow by using Firefox and reporting issues that we find. If we just use and build for Chromium, which is looking likely (cough Internet Explorer monopoly cough), then Firefox will fall away and we will then have just one major engine left. I don’t ever want to see that.

Uncle Dave says:

If the idea of a Google-driven Web is of concern to you, then I’d encourage you to use Firefox. And don’t be a passive consumer; blog, tweet, and speak about its killer features. I’ll start: Firefox’s CSS Grid, Flexbox, and Variable Font tools are the best in the business.

Mozilla themselves came out all guns blazing when they said Goodbye, EdgeHTML:

Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.

Tim describes the situation as risking a homogeneous web:

I don’t think Microsoft using Chromium is the end of the world, but it is another step down a slippery slope. It’s one more way of bolstering the influence Google currently has on the web.

We need Google to keep pushing the web forward. But it’s critical that we have other voices, with different viewpoints, to maintain some sense of balance. Monocultures don’t benefit anyone.

Andre Alves Garzia writes that while we Blink, we lose the web:

Losing engines is like losing languages. People may wish that everyone spoke the same language, they may claim it leads to easier understanding, but what people fail to consider is that this leads to losing all the culture and way of thought that that language produced. If you are a Web developer smiling and happy that Microsoft might be adopting Chrome, and this will make your work easier because it will be one less browser to test, don’t be! You’re trading convenience for diversity.

I like that analogy with language death. If you prefer biological analogies, it’s worth revisiting this fantastic post by Rachel back in August—before any of us knew about Microsoft’s decision—all about the ecological impact of browser diversity:

Let me be clear: an Internet that runs only on Chrome’s engine, Blink, and its offspring, is not the paradise we like to imagine it to be.

That post is a great history lesson, documenting how things can change, and how decisions can have far-reaching unintended consequences.

So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it’s not likely to be replaced.

Ends and means

The latest edition of the excellent History Of The Web newsletter is called The Day(s) The Web Fought Back. It recounts the first time that websites stood up against bad legislation in the form of the Communications Decency Act (CDA), and goes on to recount the even more effective use of blackout protests against SOPA and PIPA.

I remember feeling very heartened to see Wikipedia, Google and others take a stand on January 18th, 2012. But I also remember feeling uneasy. In this particular case, companies were lobbying for a cause I agreed with. But what if they were lobbying for a cause I didn’t agree with? Large corporations using their power to influence politics seems like a very bad idea. Isn’t it still a bad idea, even if I happen to agree with the cause?

Cloudflare quite rightly kicked The Daily Stormer off their roster of customers. Then the CEO of Cloudflare quite rightly wrote this in a company-wide memo:

Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

There’s an uncomfortable tension here. When do the ends justify the means? Isn’t the whole point of having principles that they hold true even in the direst circumstances? Why even claim that corporations shouldn’t influence politics if you’re going to make an exception for net neutrality? Why even claim that free speech is sacrosanct if you make an exception for nazi scum?

Those two examples are pretty extreme and I can easily justify the exceptions to myself. Net neutrality is too important. Stopping fascism is too important. But where do I draw the line? At what point does something become “too important?”

There are more subtle examples of corporations wielding their power. Google are constantly using their monopoly position in search and browser marketshare to exert influence over website-builders. In theory, that’s bad. But in practice, I find myself agreeing with specific instances. Prioritising mobile-friendly sites? Sounds good to me. Penalising intrusive ads? Again, that seems okey-dokey to me. But surely that’s not the point. So what if I happen to agree with the ends being pursued? The fact that a company the size and power of Google is using their monopoly for any influence is worrying, regardless of whether I agree with the specific instances. But I kept my mouth shut.

Now I see Google abusing their monopoly again, this time with AMP. They may call the preferential treatment of Google-hosted AMP-formatted pages a “carrot”, but let’s be honest, it’s an abuse of power, plain and simple.

By the way, I have no doubt that the engineers working on AMP have the best of intentions. We are all pursuing the same ends. We all want a faster web. But we disagree on the means. If Google search results gave preferential treatment to any fast web pages, that would be fine. But by only giving preferential treatment to pages written in a format that they created, and hosted on their own servers, they are effectively forcing everyone to use AMP. I know for a fact that there are plenty of publications who are producing AMP content, not because they are sold on the benefits of the technology, but because they feel strong-armed into doing it in order to compete.

If the ends justify the means, then it’s easy to write off Google’s abuse of power. Those well-intentioned AMP engineers honestly think that they have the best interests of the web at heart:

We were worried about the web not existing anymore due to native apps and walled gardens killing it off. We wanted to make the web competitive. We saw a sense of urgency and thus we decided to build on the extensible web to build AMP instead of waiting for standards and browsers and websites to catch up. I stand behind this process. I’m a practical guy.

There’s real hubris and audacity in thinking that one company should be able to tackle fixing the whole web. I think the AMP team are genuinely upset and hurt that people aren’t cheering them on. Perhaps they will dismiss the criticisms as outpourings of “Why wasn’t I consulted?” But that would be a mistake. The many thoughtful people who are extremely critical of AMP are on the same side as the AMP team when it comes to the end-goal of better, faster websites. But burning the web to save it? No thanks.

Ben Thompson goes into more detail on the tension between the ends and the means in The Aggregator Paradox:

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sounds pretty appealing!

From that perspective, there’s a moral argument to be made for wielding monopoly power like Google is doing. No doubt the AMP team feel it would be morally wrong for Google not to use its influence in search to give preferential treatment to AMP pages.

Going back to the opening examples of online blackouts, was it morally wrong for companies to use their power to influence politics? Or would it have been morally wrong for them not to have used their influence?

When do the ends justify the means?

Here’s a more subtle example than Google AMP, but one which has me just as worried for the future of the web. Mozilla announced that any new web features they add to their browser will require HTTPS.

The end-goal here is one I agree with: HTTPS everywhere. On the face of it, the means of reaching that goal seem reasonable. After all, we already require HTTPS for sensitive JavaScript APIs like geolocation or service workers. But the devil is in the details:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

Emphasis mine.

This is a step too far. Again, I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement. In some ways, I think it might actually be worse.

There’s an assumption in this decision that websites are being made by professionals who will know how to switch to HTTPS. But the web is for everyone. Not just for everyone to use. It’s for everyone to build.

One of my greatest fears for the web is that building it becomes the domain of a professional priesthood. Anything that raises the bar to writing some HTML or CSS makes me very worried. Usually it’s toolchains that make things more complex, but in this case the barrier to entry is being brought right into the browser itself.

I’m trying to imagine future Codebar evenings, helping people to make their first websites, but now having to tell them that some CSS will be off-limits until they meet the entry requirements of HTTPS …even though CSS and HTTPS have literally nothing to do with one another. (And yes, there will be an exception for localhost and I really hope there’ll be an exception for file: as well, but that’s simply postponing the disappointment.)

No doubt Mozilla (and the W3C Technical Architecture Group) believe that they are doing the right thing. Perhaps they think it would be morally wrong if browsers didn’t enforce HTTPS even for unrelated features like new CSS properties. They believe that, in this particular case, the ends justify the means.

I strongly disagree. If you also disagree, I encourage you to make your voice heard. Remember, this isn’t about whether you think that we should all switch to HTTPS—we’re all in agreement on that. This is about whether it’s okay to create collateral damage by deliberately denying people access to web features in order to further a completely separate agenda.

This isn’t about you or me. This is about all those people who could potentially become makers of the web. We should be welcoming them, not creating barriers for them to overcome.

Singapore

I was in Singapore last week. It was most relaxing. Sure, it’s Disneyland With The Death Penalty, but the food is wonderful.

Photos: chicken rice, fishball noodles, laksa, grilled pork.

But I wasn’t just there to sample the delights of the hawker centres. I had been invited by Mozilla to join them on the opening leg of their Developer Roadshow. We assembled in the PayPal offices one evening for a rapid-fire round of talks on emerging technologies.

We got an introduction to Quantum, the new rendering engine in Firefox. It’s looking good. And fast. Oh, and we finally get support for input type="date".

But this wasn’t a product pitch. Most of the talks were by non-Mozillians working on the cutting edge of technologies. I kicked things off with a slimmed-down version of my talk on evaluating technology. Then we heard from experts in everything from CSS to VR.

The highlight for me was meeting Hui Jing and watching her presentation on CSS layout. It was fantastic! Entertaining and informative, it was presented with gusto. I think it got everyone in the room very excited about CSS Grid.

The Singapore stop was the only one I was able to make, but Hui Jing has been chronicling the whole trip. Sounds like quite a whirlwind tour. I’m so glad I was able to join in even for a portion. Thanks to Sandra and Ali for inviting me along—much appreciated.

I’ll also be speaking at Mozilla’s View Source in London in a few weeks, where I’ll be talking about building blocks of the Indie Web:

In these times of centralised services like Facebook, Twitter, and Medium, having your own website is downright disruptive. If you care about the longevity of your online presence, independent publishing is the way to go. But how can you get all the benefits of those third-party services while still owning your own data? By using the building blocks of the Indie Web, that’s how!

‘Twould be lovely to see you there.

The Progressive Web App Dev Summit

I was in Amsterdam again at the start of last week for the Progressive Web App Dev Summit, organised by Google. Most of the talks were given by Google employees, but not all—this wasn’t just a European version of Google I/O. Representatives from Opera, Mozilla, Samsung, and Microsoft were also there, and there were quite a few case studies from independent companies. That was very gratifying to see.

Almost all the talks were related to progressive web apps. I say, “almost all” because there were occasional outliers. There was a talk on web components, which don’t have anything directly to do with progressive web apps (and I hope there won’t be any attempts to suggest otherwise), and another on rendering performance that had good advice for anyone building any kind of website. Most of the talks were about the building blocks of progressive web apps: HTTPS, Service Workers, push notifications, and all that jazz.

I was very pleased to see a move away from suggesting that single-page apps with the app-shell architecture were the only way of building progressive web apps.

There were lots of great examples of progressively enhancing existing sites into progressive web apps. Jeff Posnick’s talk was a step-by-step walkthrough of doing exactly that. Reading through the agenda, I was really happy to see this message repeated again and again:

In this session we’ll take an online-only site and turn it into a fully network-resilient, offline-first installable progressive web app. We’ll also break out of the app shell and look at approaches that better-suit traditional server-driven sites.

Progressive Web Apps should work everywhere for every user. But what happens when the technology and API’s are not available for in your users browser? In this talk we will show you how you can think about and build sites that work everywhere.

Progressive Web Apps should load fast, work great offline, and progressively enhance to a better experience in modern browsers.

How do you put the “progressive” into your current web app?

You can (and should!) build for the latest and greatest browsers, but through a collection of fallbacks and progressive enhancements you can bring a lot of tomorrow’s web to yesterday’s browsers.

I think this is a really smart move. It’s a lot easier to sell people on incremental changes than it is to convince them to rip everything out and start from scratch (another reason why I’m dubious about any association between web components and progressive web apps—but I’ll save that for another post).

The other angle that I really liked was the emphasis on emerging markets, not just wealthy westerners. Tal Oppenheimer’s talk Building for Billions was superb, and Alex kicked the whole thing off with some great facts and figures on mobile usage.

In my mind, these two threads are very much related. Progressive enhancement allows us to have our progressive web app cake and eat it too: we can make websites that can be accessed on devices with limited storage and slow networks, while at the same time ensuring those same sites take advantage of all the newest features in the latest and greatest browsers. I talked to a lot of Google devs about ways to measure the quality of a progressive web app, and I’m coming to the conclusion that a truly high-quality site is one that can still be accessed by a proxy browser like Opera Mini, while providing a turbo-charged experience in the latest version of Chrome. If you think that sounds naive or unrealistic, then I think you might want to dive deeper into all the technologies that make progressive web apps so powerful—responsive design, Service Workers, a manifest file, HTTPS, push notifications; all of those features can and should be used in a layered fashion.

Speaking of Opera, Andreas kind of stole the show, demoing the latest interface experiments in Opera Mobile.

That ambient badging that Alex was talking about? Opera is doing it. The importance of being able to access URLs that I’ve been ranting about? Opera is doing it.

Then we had the idea to somehow connect it to the “pull-to-refresh” spinner, as a secondary gesture to the left or right.

Nice! I’m looking forward to seeing what other browsers come up with. It’s genuinely exciting to see all these different browser makers in complete agreement on which standards they want to support, while at the same time differentiating their products by competing on user experience. Microsoft recently announced that progressive web apps will be indexed in their app store just like native apps—a really interesting move.

The Progressive Web App Dev Summit wrapped up with a closing panel that I had the honour of hosting. I thought it was very brave of Paul to ask me to host this, considering my strident criticism of Google’s missteps.

Initially there were going to be six people on the panel. Then it became eight. Then I blinked and it suddenly became twelve. Less of a panel, more of a jury. Half the panelists were from Google and the other half were from Opera, Microsoft, Mozilla, and Samsung. Some of those representatives were a bit too media-trained for my liking: Ali from Microsoft tried to just give a spiel, and Alex Komoroske from Google wouldn’t give me a straight answer about whether he wants Android Instant apps to succeed—Jake was a bit more honest. I should have channelled my inner Paxman a bit more.

Needless to say, nobody from Apple was at the event. No surprise there. They’ve already promised to come to the next event. There won’t be an Apple representative on stage, obviously—that would be asking too much, wouldn’t it? But at least it looks like they’re finally making an effort to engage with the wider developer community.

All in all, the Progressive Web App Dev Summit was good fun. I found the event quite inspiring, although the sausage festiness of the attendees was depressing. It would be good if the marketing for these events reached a wider audience—I met a lot of developers who only found out about it a week or two before the event.

I really hope that people will come away with the message that they can get started with progressive web apps right now without having to re-architect their whole site. Right now the barrier to entry is having your site running on HTTPS. Once you’ve got that up and running, it’s pretty much a no-brainer to add a manifest file and a basic Service Worker—to boost performance if nothing else. From there, you’re in a great position to incrementally add more and more features—an offline-first approach with your Service Worker, perhaps? Or maybe start dabbling in push notifications. The great thing about all of these technologies (with the glaring exception of web components in their current state) is that you don’t need to bet the farm on any of them. Try them out. Use them as enhancements. You’ve literally got nothing to lose …and your users have everything to gain.

This is for everyone with a certificate

Mozilla—like Google before them—have announced their plans for deprecating HTTP in favour of HTTPS. I’m all in favour of moving to HTTPS. I’ve done it myself here on adactio.com, on thesession.org, and on huffduffer.com. I have some concerns about the potential linkrot involved in the move to TLS everywhere—as outlined by Tim Berners-Lee—but still, anything that makes the work of GCHQ and the NSA more difficult is alright by me.

But I have a big, big problem with Mozilla’s plan to “encourage” the move to HTTPS:

Gradually phasing out access to browser features.

Requiring HTTPS for certain browser features makes total sense, given the security implications. Service Workers, for example, are quite correctly only available over HTTPS. Any API that has access to a device sensor—or that could be used for fingerprinting in any way—should only be available over HTTPS. In retrospect, Geolocation should have been HTTPS-only from the beginning.

But to deny access to APIs where there are no security concerns, where it is merely a stick to beat people with …that’s just wrong.

This is for everyone. Not just those smart enough to figure out how to add HTTPS to their site. And yes, I know, the theory is that it’s going to get easier and easier, but so far the steps towards making HTTPS easier are just vapourware. That makes Mozilla’s plan look like something drafted by underwear gnomes.

The issue here is timing. Let’s make HTTPS easy first. Then we can start to talk about ways of encouraging adoption. Hopefully we can figure out a way that doesn’t require Mozilla or Google as gatekeepers.

Sven Slootweg outlines the problems with Mozilla’s forced SSL. I highly recommend reading Yoav’s post on deprecating HTTP too. Ben Klemens has written about HTTPS: the end of an era …that era being the one in which anyone could make a website without having to ask permission from an app store, a certificate authority, or a browser manufacturer.

On the other hand, Eric Mill wrote We’re Deprecating HTTP And It’s Going To Be Okay. It makes for an extremely infuriating read because it outlines all the ways in which HTTPS is a good thing (all of which I agree with) without once addressing the issue at hand—a browser that deliberately cripples its feature set for political reasons.

South by CSS

South by Southwest has become a vast, sprawling festival with a preponderance of panels pitched at marketers, start-ups and people that use the words “social media” in their job title without irony. But there were also some great design and development talks if you looked for them.

Samantha gave a presentation on style tiles, which I unfortunately missed but I’ll be eagerly awaiting the release of the audio. I also missed some good meaty JavaScript talks but I did manage to make it along to Jen’s excellent introduction to HTML5 APIs.

Andy’s talk on CSS best practices was one of the best presentations I’ve ever seen. He did a fantastic job of tackling some really important topics. It’s a presentation (and a presenter) that deserves a wider audience, so if you’re involved in putting together the line-up for any front-end conferences, I highly recommend that you nab him.

Divya put together an absolutely killer panel called CSS.next, all about how CSS gets specced and shipped, and what’s coming down the line. All of the panelists were smart, articulate, and well-informed. The panel was very enlightening, as well as being thoroughly enjoyable.

And then there was the Browser Wars panel.

This is something of a SXSW tradition. Arun assembles a line-up of representatives from browser makers—Mozilla, Google, Microsoft, and Opera—and then peppers them with some hardball questions. Apple is invited to send a representative every year, and every year, Apple declines.

There was no shortage of contentious topics this year. The subject of Google Dart was raised (“Good luck with that,” said Brendan). There was also plenty of discussion about the recent DRM proposal submitted to the HTML working group. There was a disturbing level of agreement amongst all the panelists that some form of DRM for video was needed because, hey, that’s just the way things go…

As an aside, I must say I found the lack of imagination on display to be pretty disheartening. Two years ago, Chris was on the Browser Wars panel representing Microsoft, defending the EOT format because, hey, that’s just the way things go. Without some form of DRM, he argued, we couldn’t have fonts on the web. Well, the web found a way. Now Chris is representing Google but the argument remains the same. DRM, so the argument goes, is the only way we’ll get video on the web because that’s what the “rights holders” demand. And yet, if you are a photographer, no such special consideration is afforded to you. The img element has no DRM and people are managing just fine, thank you. Video, apparently, is a special case …just like fonts. ahem

Anyway…

The subject of vendor prefixes also came up. Specifically, the looming prospect of non-webkit browsers parsing -webkit prefixed properties was raised.

I saw a pattern amongst all three subjects: the DRM proposal, Dart, and browsers implementing another browser’s vendor prefix. All three proposals were made to address a genuine problem. The proposals all suffer from varying degrees of batshit craziness but they certainly galvanised a lot of discussion.

For example, Brendan said that while Google Dart may not stand a hope in hell of supplanting JavaScript, some of the ideas it contains may well end up influencing the development of ECMAScript.

Similarly, Mozilla’s plan for vendor-prefixing certainly caused all parties to admit the problem: the W3C was moving too slow; Apple should have submitted proprietary properties for standardisation sooner; Mozilla, Microsoft, and Opera should have been innovating faster; and web developers should have been treating vendor-prefixed properties as experimental features, not stable parts of a spec.

So the proposal to do something batshit crazy and implement -webkit-prefixed CSS properties has actually had some very positive effects …but there’s no reason to actually go ahead and do it!

I tried to make this point during the audience participation part of the panel, but it was like banging my head against a brick wall. Chaals kept repeating the problem case, but I wasn’t disputing the problem; I was trying to point out that the proposed solution wouldn’t fix the problem.

It was a classic case of the same kind of thinking we saw in the SOPA proposal:

  1. Something must be done!
  2. Implementing -webkit prefixes is something.
  3. Something has been done.

The problem is that it won’t work. Adding “like Webkit” to the user-agent string will probably have much more of an effect and frankly, I don’t care if any of the browsers do that. At this point, a little bit more pissing into the bloated cesspool of user-agent strings is hardly going to matter. A browser’s user-agent string isn’t an identifier, it’s a reverse-chronological history of the web. Why not update the history booklet to include the current predilection amongst developers for Webkit browsers on mobile?

But implementing -webkit vendor prefixes? Pointless! If a developer is only building and testing their sites for one class of device or browser, simply implementing that browser’s prefixed CSS is just putting a band-aid on a decapitation.

So I was kind of hoping that Mozilla would just come right out and say that maybe they wouldn’t actually go ahead and do this but hey, look at all the great discussion it generated (just like Dart, just like the DRM proposal). But sadly, no. Brendan categorically stated that the proposal was not presented in order to foment discussion. And in follow-up tweets, he wrote that he actually expected it to level the mobile browser playing field. That’s an admirably optimistic viewpoint but it’s sadly self-delusional.

And what will happen when implementing -webkit prefixes fails to level the playing field? We’ll be left with deliberately broken browsers.

Once something ships in a browser, it’s very, very hard to ever remove it. During the Dart discussion, Chris talked about the possibility of removing Dart from Chrome if developers don’t take to it. Turning to the Microsoft representative he asked rhetorically, “I mean, do you guys still ship VBScript?”

The answer?

“Yes.”

Prix Fixe

A year and a half ago, Eric wrote a great article in A List Apart called Prefix or Posthack. It’s a balanced look at vendor prefixes in CSS that concludes in their favour:

If the history of web standards has shown us anything, it’s that hacks will be necessary. By front-loading the hacks using vendor prefixes and enshrining them in the standards process, we can actually fix some of the potential problems with the process and possibly accelerate CSS development.

So the next time you find yourself grumbling about declaring the same thing four times, once for each browser, remember that the pain is temporary. It’s a little like a vaccine—the shot hurts now, true, but it’s really not that bad in comparison to the disease it prevents.
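
For anyone who wasn’t writing CSS back then, “declaring the same thing four times” looked something like this (a typical example of the pattern, not taken from Eric’s article):

.tilted {
  -webkit-transform: rotate(5deg); /* Safari, Chrome */
  -moz-transform: rotate(5deg);    /* Firefox */
  -ms-transform: rotate(5deg);     /* Internet Explorer 9 */
  -o-transform: rotate(5deg);      /* Opera */
  transform: rotate(5deg);         /* the standard, declared last so the cascade prefers it once it ships */
}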

Henri disagrees. He wrote a post called Vendor Prefixes Are Hurting the Web:

In practice, vendor prefixes lead to a situation where Web authors have to say the same thing in a different way to each browser. That’s the antithesis of having Web standards. Standards should enable authors to write to a standard and have it work in implementations from multiple vendors.

Daniel Glazman wrote a point-by-point rebuttal to Henri’s post called CSS vendor prefixes, an answer to Henri Sivonen that’s well worth a read. Alex also wrote a counter-argument to Henri’s post called Vendor Prefixes Are A Rousing Success that echoes some of the points Eric made in his ALA article:

The standards process needs to lag implementations, which means that we need spaces for implementations to lead in. CSS vendor prefixes are one of the few shining examples of this working in practice.

Alex’s co-worker Paul disagrees. He recently wrote Vendor Prefixes Are Not Developer-friendly:

  1. Prefixes are not developer-friendly.
  2. Recent features would have been in a much better state without prefixes.
  3. Implementor maneuverability is not hampered without prefixes.

All of this would have remained a fairly academic discussion but for a bombshell dropped by Tantek at a face-to-face meeting of the CSS Working Group in Paris:

At this point we’re trying to figure out which and how many -webkit- prefix properties to actually implement support for in Mozilla.

The superficial issue is that web developers have been implementing -webkit- properties without then adding the non-prefixed standardised version (and without adding the corresponding prefixes of other vendors). The more fundamental problem is that while vendor prefixes were intended to introduce experimental features until those features became standardised, the reality is that the prefixed version ends up being supported in perpetuity. Nobody is happy about this situation but that’s the unfortunate reality.
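
In code, the pattern that created this mess looks something like this (my own illustration, not from the CSS Working Group minutes):

/* What many mobile sites shipped: WebKit-only */
.slide {
  -webkit-transition: opacity 0.3s;
}

/* What they should have shipped: every vendor’s prefix plus the standard */
.slide {
  -webkit-transition: opacity 0.3s;
  -moz-transition: opacity 0.3s;
  -o-transition: opacity 0.3s;
  transition: opacity 0.3s;
}

The first version works in WebKit browsers and silently fails everywhere else, which is exactly the pressure that pushed Mozilla towards parsing -webkit- prefixes itself.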

Plenty of unhappy voices have weighed in.

Once again, Eric sought to bring clarity to the situation in the form of an article on A List Apart, this time publishing an interview with Tantek. Alex also popped up again, writing a post called Misdirection which addresses what he feels are some fundamental assumptions being made in the interview.

Finally, Mozilla engineer Robert O’Callahan—who I chatted with briefly at Kiwi Foo Camp about the vendor prefix situation—wrote about Alternatives To Supporting -webkit Prefixes In Other Engines in which he makes clear that evangelism efforts like Christian’s, while entirely laudable, aren’t a realistic solution to the problem.

It’s all a bit of a mess really, with lots of angry finger-pointing: at Apple, at Mozilla, at web developers, at the W3C…

My own feelings match those of Eric, who wrote:

I’d love to be proven wrong, but I have to assume the vendors will push ahead with this regardless. … I don’t mean to denigrate or undermine any of the efforts I mentioned before—they’re absolutely worth doing even if every non-WebKit browser starts supporting -webkit- properties next week. If nothing else, it will serve as evidence of your commitment to professional craftsmanship.