Journal tags: enshittification


Filters

My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and the Mozilla Developer Network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not limited to specific companies. I’ve noticed more and more suspicion around any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at an Airbnb listing recently and found herself scrutinising photos that seemed a little too good to be true, questioning whether they were in fact the output of some generative tool.

Every email that lands in my inbox is like a little Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.
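PageRank automated that human vouching: it treats links as votes, and votes from well-linked pages count for more. Here’s a toy sketch of the idea in TypeScript (nothing like Google’s production system, and it sidesteps details like dangling pages):

// page -> the pages it links to
type LinkGraph = Record<string, string[]>;

function pageRank(graph: LinkGraph, damping = 0.85, iterations = 50): Map<string, number> {
  const pages = Object.keys(graph);
  const n = pages.length;
  let rank = new Map<string, number>(pages.map((p) => [p, 1 / n]));

  for (let i = 0; i < iterations; i++) {
    // Every page starts each round with the "random surfer" baseline…
    const next = new Map<string, number>(pages.map((p) => [p, (1 - damping) / n]));
    // …then shares its current rank equally among the pages it links to.
    for (const page of pages) {
      const links = graph[page].filter((target) => next.has(target));
      for (const target of links) {
        next.set(target, next.get(target)! + (damping * rank.get(page)!) / links.length);
      }
    }
    rank = next;
  }
  return rank;
}

Pages that attract links from other well-linked pages float to the top: human curation, aggregated at scale.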

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloguing blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.
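The mechanics really are that simple. Here’s a rough sketch of the polling loop (browser-flavoured TypeScript, RSS only; the feed URLs are placeholders, and a real reader would also handle Atom, caching headers, and persistent state):

// A feed reader, minimally: poll some feeds you trust, surface new entries.
const feeds: string[] = [
  "https://example.com/journal/rss",
  "https://example.net/feed.xml",
];

const seen = new Set<string>();

async function poll(): Promise<void> {
  for (const url of feeds) {
    const response = await fetch(url);
    const xml = new DOMParser().parseFromString(await response.text(), "text/xml");
    for (const item of Array.from(xml.querySelectorAll("item"))) {
      const link = item.querySelector("link")?.textContent ?? "";
      const title = item.querySelector("title")?.textContent ?? "";
      if (link && !seen.has(link)) {
        seen.add(link);
        console.log(`New post: ${title} (${link})`);
      }
    }
  }
}

// Check everything every half hour.
setInterval(poll, 30 * 60 * 1000);
poll();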

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.
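On the web, a site earns that spot with a web app manifest. A minimal example (the name and icon path are illustrative):

{
  "name": "A Site You Trust",
  "short_name": "Trusted",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}

Reference it with <link rel="manifest" href="/manifest.json"> in the page’s head and browsers will offer to install the site alongside native apps.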

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though enshittification were a force of nature we must simply accept.

But we can always walk away.

InstAI

If you use Instagram, there may be a message buried in your notifications. It begins:

We’re getting ready to expand our AI at Meta experiences to your region.

Fuck that. Here’s the important bit:

To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied going forwards.

Follow that link and fill in the form. For the field labelled “Please tell us how this processing impacts you” I wrote:

It’s fucking rude.

That did the trick. I got an email saying:

We’ve reviewed your request and will honor your objection.

Mind you, there’s still this:

We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our products and services.

Switching costs

Cory has published the transcript of his talk at the Transmediale festival in Berlin. It’s all about enshittification, and what we can collectively do to reverse it.

He succinctly describes the process of enshittification like this:

First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

More importantly, he describes the checks and balances that keep enshittification from happening, all of which have been dismantled over time: competition, regulation, self-help, and workers.

One of the factors that allows enshittification to proceed is a high switching cost:

Switching costs are everything you have to give up when you leave a product or service. In Facebook’s case, it was all the friends there that you followed and who followed you. In theory, you could have all just left for somewhere else; in practice, you were hamstrung by the collective action problem.

It’s hard to get lots of people to do the same thing at the same time.

We’ve seen this play out over at Twitter, where people I used to respect are still posting as if it hasn’t become a cesspool of far-right racist misogyny reflecting its new owner’s values. But for a significant number of people—including myself and anyone with a modicum of decency—the switching cost wasn’t enough to stop us getting the hell out of there. Echoing Robin’s observation, Cory says:

…the difference between “I hate this service but I can’t bring myself to quit it,” and “Jesus Christ, why did I wait so long to quit? Get me the hell out of here!” is razor thin.

If users can’t leave because everyone else is staying, then when everyone starts to leave, there’s no reason not to go, too.

That’s terminal enshittification, the phase when a platform becomes a pile of shit. This phase is usually accompanied by panic, which tech bros euphemistically call ‘pivoting.’

Anyway, I bring this up because I recently read something else about switching costs, but in a very different context. Jake Lazaroff was talking about JavaScript frameworks:

I want to talk about one specific weakness of JavaScript frameworks: interoperability, or the lack thereof. Almost without exception, each framework can only render components written for that framework specifically.

As a result, the JavaScript community tends to fragment itself along framework lines. Switching frameworks has a high cost, especially when moving to a less popular one; it means leaving most of the third-party ecosystem behind.

That switching cost stunts framework innovation by heavily favoring incumbents with large ecosystems.

Sounds a lot like what Cory was describing with incumbents like Google, Facebook, Twitter, and Amazon.

And let’s not kid ourselves, when we’re talking about incumbent client-side JavaScript frameworks, we might mention Vue or some other contender, but really we’re talking about React.

React has massive switching costs. For over a decade now, companies have been hiring developers based on one criterion: do they know React?

“An expert in CSS you say? No thanks.”

“Proficient in vanilla JavaScript? Don’t call us, we’ll call you.”

Heck, if I were advising someone who was looking for a job in front-end development (as opposed to actually being good at front-end development; two different things), I’d tell them to learn React.

Just as everyone ended up on Facebook because everyone was on Facebook, everyone ended up using React because everyone was using React.

You can probably see where I’m going with this: the inevitable enshittification of React.

Just to be clear, I’m not talking about React getting shittier in terms of what it does. It’s always been a shitty technology for end users:

React is legacy tech from 2013 when browsers didn’t have template strings or a BFCache.
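Template strings, for anyone who hasn’t reached for them, make framework-free rendering pretty terse. A trivial sketch with made-up data (real code would escape these values before touching innerHTML):

interface Post {
  title: string;
  url: string;
}

// Render a list of posts straight into the page: no build step, no library.
function renderList(posts: Post[]): string {
  return `<ul>${posts
    .map((post) => `<li><a href="${post.url}">${post.title}</a></li>`)
    .join("")}</ul>`;
}

document.querySelector("main")!.innerHTML = renderList([
  { title: "Filters", url: "/journal/filters" },
  { title: "Permission", url: "/journal/permission" },
]);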

No, I’m talking about the enshittification of the developer experience …the developer experience being the thing that React supposedly has going for it, though as Simon points out, the developer experience has always been pretty crap:

Whether on purpose or not, React took advantage of this situation by continuously delivering or promising to deliver changes to the library, with a brand new API being released every 12 to 18 months. Those new APIs and the breaking changes they introduce are the new shiny objects you can’t help but chase. You spend multiple cycles learning the new API and upgrading your application. It sure feels like you are doing something, but in reality, you are only treading water.

Well, it seems like the enshittification of the React ecosystem is well underway. Cassidy is kind of annoyed at React. Tom is increasingly miffed about the state of React releases, and Matteo asks React, where are you going?

Personally, I would love it if more people were complaining about the dreadful user experience inflicted by client-side React. Instead the complaints are universally about the developer experience.

I guess doing the right thing for the wrong reasons is fine. It’s just a little dispiriting.

I sometimes feel like I’m living that old joke, where I’m the one in the restaurant saying “the food here is terrible!” and most of my peers are saying “I know! And such small portions!”

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google now generates dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.
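There’s no agreed-upon syntax for this yet; something along these lines is the shape of it (the License directive is entirely hypothetical, and no crawler today would recognise it):

# Hypothetical: no crawler currently honours a License directive
User-agent: *
Allow: /
License: search-indexing-only, no-llm-training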

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.