Journal tags: monopoly


Switching costs

Cory has published the transcript of his talk at the Transmediale festival in Berlin. It’s all about enshittification, and what we can collectively do to reverse it.

He succinctly describes the process of enshittification like this:

First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

More importantly, he describes the checks and balances that keep enshittification from happening, all of which have been dismantled over time: competition, regulation, self-help, and workers.

One of the factors that allows enshittification to proceed is a high switching cost:

Switching costs are everything you have to give up when you leave a product or service. In Facebook’s case, it was all the friends there that you followed and who followed you. In theory, you could have all just left for somewhere else; in practice, you were hamstrung by the collective action problem.

It’s hard to get lots of people to do the same thing at the same time.

We’ve seen this play out over at Twitter, where people I used to respect are still posting there as if it hasn’t become a cesspool of far-right racist misogyny reflecting its new owner’s values. But for a significant number of people—including myself and anyone with a modicum of decency—the switching cost wasn’t enough to stop us getting the hell out of there. Echoing Robin’s observation, Cory says:

…the difference between “I hate this service but I can’t bring myself to quit it,” and “Jesus Christ, why did I wait so long to quit? Get me the hell out of here!” is razor thin.

If users can’t leave because everyone else is staying, then when everyone starts to leave, there’s no reason not to go, too.

That’s terminal enshittification, the phase when a platform becomes a pile of shit. This phase is usually accompanied by panic, which tech bros euphemistically call ‘pivoting.’

Anyway, I bring this up because I recently read something else about switching costs, but in a very different context. Jake Lazaroff was talking about JavaScript frameworks:

I want to talk about one specific weakness of JavaScript frameworks: interoperability, or the lack thereof. Almost without exception, each framework can only render components written for that framework specifically.

As a result, the JavaScript community tends to fragment itself along framework lines. Switching frameworks has a high cost, especially when moving to a less popular one; it means leaving most of the third-party ecosystem behind.

That switching cost stunts framework innovation by heavily favoring incumbents with large ecosystems.

Sounds a lot like what Cory was describing with incumbents like Google, Facebook, Twitter, and Amazon.

And let’s not kid ourselves, when we’re talking about incumbent client-side JavaScript frameworks, we might mention Vue or some other contender, but really we’re talking about React.

React has massive switching costs. For over a decade now, companies have been hiring developers based on one criterion: do they know React?

“An expert in CSS you say? No thanks.”

“Proficient in vanilla JavaScript? Don’t call us, we’ll call you.”

Heck, if I were advising someone who was looking for a job in front-end development (as opposed to actually being good at front-end development; two different things), I’d tell them to learn React.

Just as everyone ended up on Facebook because everyone was on Facebook, everyone ended up using React because everyone was using React.

You can probably see where I’m going with this: the inevitable enshittification of React.

Just to be clear, I’m not talking about React getting shittier in terms of what it does. It’s always been a shitty technology for end users:

React is legacy tech from 2013 when browsers didn’t have template strings or a BFCache.
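And it’s true: template literals, which every browser now supports, cover a lot of what JSX gets used for. A minimal sketch of browser-native templating (the data and the selector here are invented for illustration):

    // No build step, no library: generating HTML with a template literal.
    const items = ['one', 'two', 'three'];
    const markup = `<ul>${items.map((item) => `<li>${item}</li>`).join('')}</ul>`;
    // Inject it into the page (assumes the page has a main element).
    document.querySelector('main').innerHTML = markup;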

No, I’m talking about the enshittification of the developer experience …the developer experience being the thing that React supposedly has going for it, though as Simon points out, the developer experience has always been pretty crap:

Whether on purpose or not, React took advantage of this situation by continuously delivering or promising to deliver changes to the library, with a brand new API being released every 12 to 18 months. Those new APIs and the breaking changes they introduce are the new shiny objects you can’t help but chase. You spend multiple cycles learning the new API and upgrading your application. It sure feels like you are doing something, but in reality, you are only treading water.

Well, it seems like the enshittification of the React ecosystem is well underway. Cassidy is kind of annoyed at React. Tom is increasingly miffed about the state of React releases, and Matteo asks, “React, where are you going?”

Personally, I would love it if more people were complaining about the dreadful user experience inflicted by client-side React. Instead the complaints are universally about the developer experience.

I guess doing the right thing for the wrong reasons is fine. It’s just a little dispiriting.

I sometimes feel like I’m living that old joke, where I’m the one in the restaurant saying “the food here is terrible!” and most of my peers are saying “I know! And such small portions!”

2.5.6

The Competition and Markets Authority (CMA) recently published an interim report on their mobile ecosystems market study. It’s well worth reading, especially the section on competition in the supply of mobile browsers:

On iOS devices, Apple bans the use of alternative browser engines – this means that Apple has a monopoly over the supply of browser engines on iOS. It also chooses not to implement – or substantially delays – a wide range of features in its browser engine. This restriction has 2 main effects:

  • limiting rival browsers’ ability to differentiate themselves from Safari on factors such as speed and functionality, meaning that Safari faces less competition from other browsers than it otherwise could do; and
  • limiting the functionality of web apps – which could be an alternative to native apps as a means for mobile device users to access online content – and thereby limits the constraint from web apps on native apps. We have not seen compelling evidence that suggests Apple’s ban on alternative browser engines is justified on security grounds.

That last sentence is a wonderful example of British understatement. Far from protecting end users from security exploits, Apple have exposed everyone on iOS to all of the security issues of Apple’s Safari browser (regardless of what browser the user thinks they are using).

The CMA are soliciting responses to their interim report:

To respond to this consultation, please email or post your submission to:

Email: mobileecosystems@cma.gov.uk

Post:

Mobile Ecosystems Market Study
Competition and Markets Authority
25 Cabot Square
London
E14 4QZ

Please respond by no later than 5pm GMT on 7 February 2022.

I encourage you to send a response before this coming Monday. This is the email I’ve sent.

Hello,

This response is regarding competition in the supply of mobile browsers and contains no confidential information.

I read your interim report with great interest.

As a web developer and the co-founder of a digital design agency, I could cite many reasons why Apple’s moratorium on rival browser engines is bad for business. But the main reason I am writing to you is as a consumer and a user of Apple’s products.

I own two Apple computing devices: a laptop and a phone. On both devices, I can install apps from Apple’s App Store. But on my laptop I also have the option to download and install an application from elsewhere. I can’t do this on my phone. That would be fine if my needs were met by what’s available in the app store. But clause 2.5.6 of Apple’s app store policy restricts what is available to me as a consumer.

On my laptop I can download and install Mozilla’s Firefox or Google’s Chrome browsers. On my phone, I can install something called Firefox and something called Chrome. But under the hood, they are little more than skinned versions of Safari. I’m only aware of this because I’m au fait with the situation. Most of my fellow consumers have no idea that when they install the app called Firefox or the app called Chrome from the app store on their phone, they are being deceived.

It is this deception that bothers me most.

Kind regards,

Jeremy Keith

To be fair to Apple, this deception requires collusion from Mozilla, Google, Microsoft, and other browser makers. Nobody’s putting a gun to their heads and forcing them to ship skinned versions of Safari that bear only cosmetic resemblance to their actual products.

But of course it would be commercially unwise to forgo the app store as a distribution channel, even if the only features they can ship are superficial ones like bookmark syncing.

Still, imagine what would happen if Mozilla, Google, and Microsoft put their monies where their mouths are. Instead of just complaining about the unjust situation, what if they actually took the financial hit and pulled their faux-browsers from the iOS app store?

If this injustice is as important as representatives from Google, Microsoft, and Mozilla claim it is, then righteous indignation isn’t enough. Principles without sacrifice are easy.

If nothing else, it would bring the real situation to light and clear up the misconception that there is any browser choice on iOS.

I know it’s not going to happen. I also know I’m being a hypocrite by continuing to use Apple products in spite of the blatant misuse of monopoly power on display. But still, I wanted to plant that seed. What if Microsoft, Google, and Mozilla were the ones who walk away from Omelas?

Writing on web.dev

Chrome Dev Summit kicked off yesterday. The opening keynote had its usual share of announcements.

There was quite a bit of talk about privacy, which sounds good in theory, but then we were told that Google would be partnering with “industry stakeholders.” That’s probably code for the kind of ad-tech sharks that have been making a concerted effort to infest W3C groups. Beware.

But once Una was on-screen, the topics shifted to the kind of design and development updates that don’t have sinister overtones.

My favourite moment was when Una said:

We’re also partnering with Jeremy Keith of Clearleft to launch Learn Responsive Design on web.dev. This is a free online course with everything you need to know about designing for the new responsive web of today.

This is what’s been keeping me busy for the past few months (and for the next month or so too). I’ve been writing fifteen pieces—or “modules”—on modern responsive web design. One third of them are available now at web.dev/learn/design:

  1. Introduction
  2. Media queries
  3. Internationalization
  4. Macro layouts
  5. Micro layouts

The rest are on their way: typography, responsive images, theming, UI patterns, and more.

I’ve been enjoying this process. It’s hard work that requires me to dive deep into the nitty-gritty details of lots of different techniques and technologies, but that can be quite rewarding. As is often said, if you truly want to understand something, teach it.

Oh, and I made one more appearance at the Chrome Dev Summit. During the “Ask Me Anything” section, quizmaster Una asked the panelists a question from me:

Given the court proceedings against AMP, why should anyone trust FLoC or any other Google initiatives ostensibly focused on privacy?

(Thanks to Jake for helping craft the question into a form that could make it past the legal department but still retain its spiciness.)

The question got a response. I wouldn’t say it got an answer. My verdict remains:

I’m not sure that Google Chrome can be considered a user agent.

The fundamental issue is that you’ve got a single company that’s the market leader in web search, the market leader in web advertising, and the market leader in web browsers. I honestly believe all three would function better—and more honestly—if they were separate entities.

Monopolies aren’t just damaging for customers. They’re damaging for the monopoly too. I’d love to see Google Chrome compete on being a great web browser without having to also balance the needs of surveillance-based advertising.

Resigning from the AMP advisory committee

Inspired by Terence Eden’s example, I applied for membership of the AMP advisory committee last year. To my surprise, my application was successful.

I’ve spent the time since then participating in good faith, but I can’t do that any longer. Here’s what I wrote in my resignation email:

Hi all,

As mentioned at the end of the last call, I’m stepping down from the AMP advisory committee.

I can’t in good faith continue to advise on the AMP project for the OpenJS Foundation when it has become clear to me that AMP remains a Google product, with only a subset of pieces that could even be considered open source.

If I were to remain on the advisory committee, my feelings of resentment about this situation would inevitably affect my behaviour. So it’s best for everyone if I step away now instead of descending into outright sabotage. It’s not you, it’s me.

I’d like to thank the OpenJS Foundation for allowing me to participate. It’s been an honour to watch Tobie and Jory in action.

I wish everyone well and I hope that the advisory committee can successfully guide the AMP project towards a happy place where it can live out its final days in peace.

I don’t have a replacement candidate to nominate but I’ll ask around amongst other independent sceptical folks to see if there’s any interest.

All the best,

Jeremy

I wrote about the fundamental problem with Google AMP when I joined the advisory committee:

This is an interesting time for AMP …whatever AMP is.

See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is.

There’s the collection of web components. If that were all AMP is, it would be a very straightforward project, similar to other collections of web components (like Polymer). But then there’s the concept of validation. The validation comes from a set of rules, defined by Google. And there’s the AMP cache, or more accurately, Google hosting.

Only one piece of that trinity—the collection of web components—is eligible for the label of being open source, and even that’s a stretch considering that most of the contributions come from full-time Google employees. The other two parts are firmly under Google’s control.

I was hoping it was a marketing problem. We spent a lot of time on the advisory committee trying to figure out ways of making it clearer what AMP actually is. But it was a losing battle. The phrase “the AMP project” is used to cover up the deeply intertwingled nature of its constituent parts. Bits of it are open source, but most of it is proprietary. The OpenJS Foundation doesn’t seem like a good home for a mostly-proprietary project.

Whenever a representative from Google showed up at an advisory committee meeting, it was clear that they viewed AMP as a Google product. I never got the impression that they planned to hand over control of the project to the OpenJS Foundation. Instead, they wanted to hear what people thought of their project. I’m not comfortable doing that kind of unpaid labour for a large profitable organisation.

Even worse, Google representatives reminded us that AMP was being used as a foundational technology for other Google products: stories, email, ads, and even some weird payment thing in native Android apps. That’s extremely worrying.

While I was serving on the AMP advisory committee, a coalition of attorneys general filed a suit against Google for anti-competitive conduct:

Google designed AMP so that users loading AMP pages would make direct communication with Google servers, rather than publishers’ servers. This enabled Google’s access to publishers’ inside and non-public user data.

We were immediately told that we could not discuss an ongoing court case in the AMP advisory committee. That’s fair enough. But will it go both ways? Or will lawyers acting on Google’s behalf be allowed to point to the AMP advisory committee and say, “But AMP is an open source project! Look, it even resides under the banner of the OpenJS Foundation.”

If there’s even a chance of the AMP advisory committee being used as a Potemkin village, I want no part of it.

But even as I’m noping out of any involvement with Google AMP, my parting words have to be about how impressed I am with the OpenJS Foundation. Jory and Tobie have been nothing less than magnificent in their diplomacy, cat-herding, schedule-wrangling, timekeeping, and other organisational superpowers that I’m crap at.

I sincerely hope that Google isn’t taking advantage of the OpenJS Foundation’s kind-hearted trust.

Ampvisory

I was very inspired by something Terence Eden wrote on his blog last year. A report from the AMP Advisory Committee Meeting:

I don’t like AMP. I think that Google’s Accelerated Mobile Pages are a bad idea, poorly executed, and almost-certainly anti-competitive.

So, I decided to join the AC (Advisory Committee) for AMP.

Like Terence, I’m not a fan of Google AMP—my initially positive reaction to it soured over time as it became clear that Google were blackmailing publishers by privileging AMP pages in Google Search. But all I ever did was bitch and moan about it on my website. Terence actually did something.

So this year I put myself forward as a candidate for the AMP advisory committee. I have no idea how the election process works (or who does the voting) but thanks to whoever voted for me. I’m now a member of the AMP advisory committee. If you look at that blog post announcing the election results, you’ll see the brief blurb from everyone who was voted in. Most of them are positively bullish on AMP. Mine is not:

Jeremy Keith is a writer and web developer dedicated to an open web. He is concerned that AMP is being unfairly privileged by Google’s search engine instead of competing on its own merits.

The good news is that my main beef with AMP is already being dealt with. I wanted exactly what Terence said:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

That’s happening as of May of this year. Just as well—the AMP advisory committee have absolutely zero influence on Google search. I’m not sure how much influence we have at all really.

This is an interesting time for AMP …whatever AMP is.

See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is. At the outset, it seemed pretty straightforward. AMP is a format. It has a doctype and rules that you have to meet in order to be “valid” AMP. Part of that ruleset involved eschewing HTML elements like img and video in favour of web components like amp-img and amp-video.

That messaging changed over time. We were told that AMP is the collection of web components. If that’s the case, then I have no problem at all with AMP. People are free to use the components or not. And if the project produces performant accessible web components, then that’s great!

But right now it’s not at all clear which AMP people are talking about, even in the advisory committee. When we discuss improving AMP, do we mean the individual components or the set of rules that qualify an AMP page as “valid”?

The use-case for AMP-the-format (as opposed to AMP-the-library-of-components) was pretty clear. If you were a publisher and you wanted to appear in the top stories carousel in Google search, you had to publish using AMP. Just using the components wasn’t enough. Your pages had to be validated as AMP-the-format.

That’s no longer the case. From May, pages that are fast enough will qualify for the top stories carousel. What will publishers do then? Will they still maintain separate AMP-the-format pages? Time will tell.

I suspect publishers will ditch AMP-the-format, although it probably won’t happen overnight. I don’t think anyone likes being blackmailed by a search engine:

An engineer at a major news publication who asked not to be named because the publisher had not authorized an interview said Google’s size is what led publishers to use AMP.

The pre-rendering (along with the lightning bolt) that happens for AMP pages in Google search might be a reason for publishers to maintain their separate AMP-the-format pages. But I suspect publishers don’t actually think the benefits of pre-rendering outweigh the costs: pre-rendered AMP-the-format pages are served from Google’s servers with a Google URL. If anything, I think that publishers will look forward to having the best of both worlds—having their pages appear in the top stories carousel, but not having their pages hijacked by Google’s so-called-cache.

Does AMP-the-format even have a future without Google search propping it up? I hope not. I think it would make everything much clearer if AMP-the-format went away, leaving AMP-the-collection-of-components. We’d finally see these components being evaluated on their own merits—usefulness, performance, accessibility—without unfair interference.

So my role on the advisory committee so far has been to push for clarification on what we’re supposed to be advising on.

I think it’s good that I’m on the advisory committee, although I imagine my opinions could easily be dismissed given my public record of dissent. I may well be fooling myself though, like those people who go to work at Facebook and try to justify it by saying they can accomplish more from inside than outside (or whatever else they tell themselves to sleep at night).

The topic I’ve volunteered to help with is somewhat existential in nature: what even is AMP? I’m happy to spend some time on that. I think it’ll be good for everyone to try to get that sorted, regardless of how you feel about the AMP project.

I have no intention of giving any of my unpaid labour towards the actual components themselves. I know AMP is theoretically open source now, but let’s face it, it’ll always be perceived as a Google-led project so Google can pay people to work on it.

That said, I’ve also recently joined a web components community group that Lea instigated. Remember she wrote that great blog post recently about the failed promise of web components? I’m not sure how much I can contribute to the group (maybe some meta-advice on the nature of good design principles?) but at the very least I can serve as a bridge between the community group and the AMP advisory committee.

After all, AMP is a collection of web components. Maybe.

Insecure

Universal access is at the heart of the World Wide Web. It’s also something I value when I’m building anything on the web. Whatever I’m building, I want you to be able to visit using whatever browser or device that you choose.

Just to be clear, that doesn’t mean that you’re going to have the same experience in an old browser as you are in the latest version of Firefox or Chrome. Far from it. Not only is that not feasible, I don’t believe it’s desirable either. But if you’re using an old browser, while you might not get to enjoy the newest CSS or JavaScript, you should still be able to access a website.

Applying the principle of progressive enhancement makes this eminently doable. As long as I build in a layered way, everyone gets access to the barebones HTML, even if they can’t experience newer features. Crucially, as long as I’m doing some feature detection, those newer features don’t harm older browsers.

But there’s one area where maintaining backward compatibility might well have an adverse effect on modern browsers: security.

I don’t just mean whether or not you’re serving sites over HTTPS. Even if you’re using TLS—Transport Layer Security—not all security is created equal.

Take a look at Mozilla’s very handy SSL Configuration Generator. You get to choose from three options:

  1. Modern. Services with clients that support TLS 1.3 and don’t need backward compatibility.
  2. Intermediate. General-purpose servers with a variety of clients, recommended for almost all systems.
  3. Old. Compatible with a number of very old clients, and should be used only as a last resort.

Because I value universal access, I should really go for the “old” setting. That ensures my site is accessible all the way back to Android 2.3 and Safari 1. But if I do that, I will be supporting TLS 1.0. That’s not good. My site is potentially vulnerable.

Alright then, I’ll go for “intermediate”—that’s the recommended level anyway. Now I’m no longer providing TLS 1.0 support. But that means some older browsers can no longer access my site.

This is exactly the situation I found myself in with The Session. I had a score of A+ from SSL Labs. I was feeling downright smug. Then I got emails from actual users. One had picked up an old Samsung tablet second hand. Another was using an older version of Safari. Neither could access the site.

Sure enough, if you cut off TLS 1.0, you cut off Safari below version six.

Alright, then. Can’t they just upgrade? Well …no. Apple has tied Safari to OS X. If you can’t upgrade your operating system, you can’t upgrade your browser. So if you’re using OS X Mountain Lion, you’re stuck with an insecure version of Safari.

Fortunately, you can use a different browser. It’s possible to install, say, Firefox 37, which supports TLS 1.2.

On desktop, that is. If you’re using an older iPhone or iPad and you can’t upgrade to a recent version of iOS, you’re screwed.

This isn’t an edge case. This is exactly the kind of usage that iPads excel at: you got the device a few years back just to do some web browsing and not much else. It still seems to work fine, and you have no incentive to buy a brand new iPad. And nor should you have to.

In that situation, you’re stuck using an insecure browser.

As a site owner, I can either make security my top priority, which means you’ll no longer be able to access my site. Or I can provide you access, which makes my site less secure for everyone. (That’s what I’ve done on The Session and now my score is capped at B.)

What I can’t do is tell you to install a different browser, because you literally can’t. Sure, technically you can install something called Firefox from the App Store, or you can install something called Chrome. But neither have anything to do with their desktop counterparts. They’re differently skinned versions of Safari.

Apple refuses to allow browsers with any other rendering engine to be installed. Their reasoning?

Security.

Unity

It’s official. Microsoft’s Edge browser is running on the Blink rendering engine and it’s available now.

Just over a year ago, I wrote about my feelings on this decision:

I’m sure the decision makes sound business sense for Microsoft, but it’s not good for the health of the web.

The importance of browser engine diversity is beautifully illustrated (literally) in Rachel’s The Ecological Impact of Browser Diversity.

But I was chatting to Amber the other day, and I mentioned how I can see the theoretical justification for Microsoft’s decision …even if I don’t quite buy it myself.

Picture, if you will, something I’ll call the bar of unity. It’s a measurement of how much collaboration is happening between browser makers.

In the early days of the web, the bar of unity was very low indeed. The two main browser vendors—Microsoft and Netscape—not only weren’t collaborating, they were actively splintering the languages of the web. One of them would invent a new HTML element, and the other would invent a completely different element to do the same thing (remember abbr and acronym?). One of them would come up with one model for interacting with a document through JavaScript, and the other would come up with a completely different model to do the same thing (remember document.all and document.layers?).

There wasn’t enough collaboration. Our collective anger at this situation led directly to the creation of The Web Standards Project.

Eventually, those companies did start collaborating on standards at the W3C. The bar of unity was raised.

This has been the situation for most of the web’s history. Different browser makers agreed on standards, but went their own separate ways on implementation. That’s where they drew the line.

Now that line is being redrawn. The bar of unity is being raised. Now, a number of separate browser makers—Google, Samsung, Microsoft—not only collaborate on standards but also on implementation, sharing a codebase.

The bar of unity isn’t right at the top. Browsers can still differentiate in their user interfaces. Edge, for example, can—and does—offer very sensible defaults for blocking trackers. That’s much harder for Chrome to do, given that Google are amongst the worst offenders.

So these browsers are still competing, but the competition is no longer happening at the level of the rendering engine.

I can see how this looks like a positive development. In fact, from this point of view, Mozilla are getting in the way of progress by having a separate codebase (yes, this is a genuinely-held opinion by some people).

On the face of it, more unity sounds good. It sounds like more collaboration. More cooperation.

But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!

There’s a sweet spot somewhere in between where there’s a base level of agreement and cooperation, but there’s also plenty of room for disagreement and opposition. Right now, the browser landscape is just about still in that sweet spot. It’s like a two-party system where one party has a crushing majority. Checks and balances exist, but they’re in peril.

Firefox is one of the last remaining representatives offering an alternative. The least we can do is support it.

AMPstinction

I’ve come to believe that the goal of any good framework should be to make itself unnecessary.

Brian said it explicitly of his PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

That makes total sense, especially if your code is a polyfill—those solutions are temporary by design. Autoprefixer is another good example of a piece of code that becomes less and less necessary over time.

But I think it’s equally true of any successful framework or library. If the framework becomes popular enough, it will inevitably end up influencing the standards process, thereby becoming dispensable.

jQuery is the classic example of this. There’s very little reason to use jQuery these days because you can accomplish so much with browser-native JavaScript. But the reason why you can accomplish so much without jQuery is because of jQuery. I don’t think we would have querySelector without jQuery. The library proved the need for the feature. The same is true for a whole load of DOM scripting features.
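Put the two side by side and the lineage is obvious (an illustrative example of my own):

    // jQuery:
    // $('.external').addClass('highlighted');

    // Browser-native equivalent, standardised because jQuery proved the need:
    document.querySelectorAll('.external').forEach((link) => {
      link.classList.add('highlighted');
    });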

The same process is almost certain to occur with React—it’s a good bet there will be a standardised equivalent to the virtual DOM at some point.

When Google first unveiled AMP, its intentions weren’t clear to me. I hoped that it existed purely to make itself redundant:

As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

Alas, as time has passed, that hope shows no signs of being fulfilled. If anything, I’ve noticed publishers using the existence of their AMP pages as a justification for just letting their “regular” pages put on weight.

Worse yet, the messaging from Google around AMP has shifted. Instead of pitching it as a format for creating parallel versions of your web pages, they’re now also extolling the virtues of having your AMP pages be the only version you publish:

In fact, AMP’s evolution has made it a viable solution to build entire websites.

On an episode of the Dev Mode podcast a while back, AMP was a hotly-debated topic. But even those defending AMP were doing so on the understanding that it was more a proof-of-concept than a long-term solution (and also that AMP is just for news stories—something else that Google are keen to change).

But now it’s clear that the Google AMP Project is being marketed more like a framework for the future: a collection of web components that prioritise performance …which is kind of odd, because that’s also what Google’s Polymer project is. The difference being that pages made with Polymer don’t get preferential treatment in Google’s search results. I can’t help but wonder how the Polymer team feels about AMP’s gradual pivot onto their territory.

If the AMP project existed in order to create a web where AMP was no longer needed, I think I could get behind it. But the more it’s positioned as the only viable solution to solving performance, the more uncomfortable I am with it.

Which, by the way, brings me to one of the most pernicious ideas around Google AMP—positioning anyone opposed to it as not caring about web performance. Nothing could be further from the truth. It’s precisely because performance on the web is so important that it deserves a long-term solution, co-created by all of us: not some commandments delivered to us from on high by one organisation, enforced by preferential treatment by that organisation’s monopoly in search.

It’s the classic logical fallacy:

  1. Performance! Something must be done!
  2. AMP is something.
  3. Now something has been done.

By marketing itself as the only viable solution to the web performance problem, I think the AMP project is doing itself a great disservice. If it positioned itself as an example to be emulated, I would welcome it.

I wish that AMP were being marketed more like a temporary polyfill. And as with any polyfill, I look forward to the day when AMP is no longer necessary.

I want AMP to become extinct. I genuinely think that the Google AMP team should share that wish.