Journal tags: search


Filters

My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and the Mozilla Developer Network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at some pictures on an AirBnB listing recently and found herself examining some photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.

Every email that lands in my inbox is like a little mini Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloguing blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.
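If you’ve never looked under the hood, the mechanics are pleasingly simple. Here’s a minimal sketch in Python using the feedparser library—the feed URLs are just examples, and a real feed reader would also remember which entries you’ve already read:

# A rough sketch of what a feed reader does: poll some bookmarked
# feeds and list their latest entries.
# Assumes the third-party feedparser library (pip install feedparser).
import feedparser

# Example feeds—swap in the sites made by humans that you trust.
bookmarked_feeds = [
    "https://adactio.com/rss",
    "https://example.com/feed.xml",
]

for url in bookmarked_feeds:
    feed = feedparser.parse(url)
    print(feed.feed.get("title", url))
    for entry in feed.entries[:5]:
        print("  -", entry.get("title", "untitled"), entry.get("link", ""))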

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.

But we can always walk away.

Trust

In their rush to cram in “AI” “features”, it seems to me that many companies don’t actually understand why people use their products.

Google is acting as though its greatest asset is its search engine. Same with Bing.

Mozilla Developer Network is acting as though its greatest asset is its documentation. Same with Stack Overflow.

But their greatest asset is actually trust.

If I use a search engine I need to be able to trust that the filtering is good. If I look up documentation I need to trust that the information is good. I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

“But”, the apologists will respond, “the results are mostly correct! The documentation is mostly true!”

Sure, but as Terence puts it:

The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as “that klutzy intern we had to fire.”

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

I am honestly astonished that so many companies don’t seem to realise what they’re destroying.

UX London 2024, day one

UX London is just two months away!

The best way to enjoy the event is to go for all three days but if that’s not doable for you, each individual day is kind of like a mini-conference with its own theme.

The theme on day one, Tuesday, June 18th is design research.

In the morning there are four fantastic talks.

  1. Tom Kerwin kicks things off with his talk on pitch provocations. Tom will show you how to probe for what the market really wants with his fast, counterintuitive method.
  2. Clarissa Gardner is giving a talk about ethics and safeguarding in UX research. Clarissa will show you how to uphold good practice without compromising delivery in a fast-paced environment.
  3. Aleks Melnikova’s talk is all about demystifying inclusive research. Aleks will show you how to conduct research for a diverse range of participants, from recruitment and planning through to moderation and analysis.
  4. Emma Boulton closes out the morning with her talk on meeting Product where they are. Emma will show you how to define a knowledge management strategy for your organisation so that you can retake your seat at the table.

After lunch you’ll take part in one of four workshops. Choose wisely!

  1. Luke Hay is running a workshop on bridging the gap between research and design. Luke will show you how to take practical steps to ensure that designers and researchers are working as a seamless team.
  2. Serena Verdenicci is running a workshop on behavioural intentions. Serena will show you how to apply a behavioural mindset to your work so you can create behaviour-change interventions.
  3. Stéphanie Walter is running a workshop on designing adaptive reusable components and pages. Stéphanie will show you how to plan your content and information architecture to help build more reusable components.
  4. Ben Sauer is running a workshop on the storytelling bridge. Ben will show you how to find your inner storyteller to turn your insights into narratives your stakeholders can understand quickly and easily.

After your workshop there’s one final closing keynote to bring everyone back together. I’m keeping that secret for just a little longer, but trust me, it’s going to be inspiring—plenty to discuss at the drinks reception afterwards.

That’s quite a packed day. If design research is what you’re into, you won’t want to miss it. Get your ticket now.

Just to sweeten the deal and as a reward for reading all the way to the end, here’s a discount code you can use to get a whopping 20% off: JOINJEREMY.

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.
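To make the idea concrete, a strawman version of that proposal might look something like this—to be clear, none of these directives exist in the actual robots.txt standard; this is purely illustrative:

User-agent: *
Allow: /
# Hypothetical rights-granting directives—not part of any real specification
Allow-purpose: search-indexing
Disallow-purpose: llm-training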

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Directory enquiries

I was talking to someone recently about a forgotten battle in the history of the early web. It was a battle between search engines and directories.

These days, when the history of the web is told, a whole bunch of services get lumped into the category of “competitors who lost to Google search”: Altavista, Lycos, Ask Jeeves, Yahoo.

But Yahoo wasn’t a search engine, at least not in the same way that Google was. Yahoo was a directory with a search interface on top. You could find what you were looking for by typing or you could zero in on what you were looking for by drilling down through a directory structure.

Yahoo wasn’t the only directory. DMOZ was an open-source competitor. You can still experience it at DMOZlive.com:

The official DMOZ.com site was closed by AOL on February 17th 2017. DMOZ Live is committed to continuing to make the DMOZ Internet Directory available on the Internet.

Search engines put their money on computation, or to use today’s parlance, algorithms (or if you’re really shameless, AI). Directories put their money on humans. Good ol’ information architecture.

It turned out that computation scaled faster than humans. Search won out over directories.

Now an entire generation has been raised in the aftermath of this battle. Monica Chin wrote about how this generation views the world of information:

Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.

Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.

Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.

Dr. Saavik Ford confirms:

We are finding a persistent issue with getting (undergrad, new to research) students to understand that a file/directory structure exists, and how it works. After a debrief meeting today we realized it’s at least partly generational.

We live in a world ordered only by search:

While some are quite adept at using labels, tags, and folders to manage their emails, others will claim that there’s no need to do because you can easily search for whatever you happen to need. Save it all and search for what you want to find. This is, roughly speaking, the hot mess approach to information management. And it appears to arise both because search makes it a good-enough approach to take and because the scale of information we’re trying to manage makes it feel impossible to do otherwise. Who’s got the time or patience?

There are still hold-outs. You can prise files from Scott Jenson’s cold dead hands.

More recently, Linus Lee points out what we’ve lost by giving up on directory structures:

Humans are much better at choosing between a few options than conjuring an answer from scratch. We’re also much better at incrementally approaching the right answer by pointing towards the right direction than nailing the right search term from the beginning. When it’s possible to take a “type in a query” kind of interface and make it more incrementally explorable, I think it’s almost always going to produce a more intuitive and powerful interface.

Directory structures still make sense to me (because I’m old) but I don’t have a problem with search. I do have a problem with systems that try to force me to search when I want to drill down into folders.

I have no idea what Google Drive and Dropbox are doing but I don’t like it. They make me feel like the opposite of a power user. Trying to find a file using their interfaces makes me feel like I’m trying to get a printer to work. Randomly press things until something happens.

Anyway. Enough fist-shaking from me. I’m going to ponder Linus’s closing words. Maybe defaulting to a search interface is a cop-out:

Text search boxes are easy to design and easy to add to apps. But I think their ease on developers may be leading us to ignore potential interface ideas that could let us discover better ideas, faster.

Even more UX London speaker updates

I’ve added five more faces to the UX London line-up.

Irina Rusakova will be giving a talk on day one, the day that focuses on research. Her talk on designing with the autistic community is one I’m really looking forward to.

Also on day one, my friend and former Clearleftie Cennydd Bowles will be giving a workshop called “What could go wrong?” He literally wrote the book on ethical design.

Day two is all about creation. My co-worker Chris How will be speaking. “Nepotism!” you cry! But no, Chris is speaking because I had the chance to see his talk—called “Unexpectedly obvious”—and I thought “that’s perfect for UX London!”:

Let him take you on a journey through time and across the globe sharing stories of designs that solve problems in elegant if unusual ways.

Also on day two, you’ve got two additional workshops. Lou Downe will be running a workshop on designing good services, and Giles Turnbull will be running a workshop called “Writing for people who hate writing.”

I love that title! Usually when I contact speakers I don’t necessarily have a specific talk or workshop in mind, but I knew that I wanted that particular workshop from Giles.

When I wrote to Giles to ask him to come and speak, I began by telling him how much I enjoy his blog—I’m a long-time subscriber to his RSS feed. He responded and said that he also reads my blog—we’re blog buddies! (That’s a terrible term but there should be a word for people who “know” each other only through reading each other’s websites.)

Anyway, that’s another little treasure trove of speakers added to the UX London roster:

That’s nineteen speakers already and we’re not done yet—expect further speaker announcements soon. But don’t wait on those announcements before getting your ticket. Get yours now!

More UX London speaker updates

It wasn’t that long ago that I told you about some of the speakers that have been added to the line-up for UX London in June: Steph Troeth, Heldiney Pereira, Lauren Pope, Laura Yarrow, and Inayaili León. Well, now I’ve got another five speakers to tell you about!

Aleks Melnikova will be giving a workshop on day one, June 28th—that’s the day with a focus on research.

Stephanie Marsh—who literally wrote the book on user research—will also be giving a workshop that day.

Before those workshops though, you’ll get to hear a talk from the one and only Kat Zhou, the creator of Design Ethically. By the way, you can hear Kat talking about deceptive design in a BBC radio documentary.

Day two has a focus on content design so who better to deliver a workshop than Sarah Winters, author of the Content Design book.

Finally, on day three—with its focus on design systems—I’m thrilled to announce that Adekunle Oduye will be giving a talk. He too is an author. He co-wrote the Design Engineering Handbook. I also had the pleasure of talking to Adekunle for an episode of the Clearleft podcast on design engineering.

So that’s another five excellent speakers added to the line-up:

That’s a total of fifteen speakers so far with more on the way. And I’ll be updating the site with more in-depth descriptions of the talks and workshops soon.

If you haven’t yet got your ticket for UX London, grab one now. You can buy tickets for individual days, or to get the full experience and the most value, get a ticket for all three days.

UX London speaker updates

If you’ve signed up to the UX London newsletter then this won’t be news to you, but more speakers have been added to the line-up.

Steph Troeth will be giving a workshop on day one. That’s the day with a strong focus on research, and when it comes to design research, Steph is unbeatable. You can hear some of her words of wisdom in an episode of the Clearleft podcast all about design research.

Heldiney Pereira will be speaking on day two. That’s the day with a focus on content design. Heldiney previously spoke at our Content By Design event and it was terrific—his perspective on content design as a product designer is invaluable.

Lauren Pope will also be on day two. She’ll be giving a workshop. She recently launched a really useful content audit toolkit and she’ll be bringing that expertise to her UX London workshop.

Day three is going to have a focus on design systems (and associated disciplines like design engineering and design ops). Both Laura Yarrow and Inayaili León will be giving talks on that day. You can expect some exciting war stories from the design system trenches of HM Land Registry and GitHub.

I’ve got some more speakers confirmed but I’m going to be a tease and make you wait a little longer for those names. But check out the line-up so far! This is going to be such an excellent event (I know I’m biased, but really, look at that line-up!).

June 28th to 30th. Tobacco Dock, London. Get your ticket if you haven’t already.

Curating UX London 2022

The first speakers are live on the UX London 2022 site! There are only five people announced for now—just enough to give you a flavour of what to expect. There will be many, many more.

Putting together the line-up of a three-day event is quite challenging, but kind of fun too. On the one hand, each day should be able to stand alone. After all, there are one-day tickets available. On the other hand, it should feel like one cohesive conference, not three separate events.

I’ve decided to structure the three days to somewhat mimic the design process…

The first day is all about planning and preparation. This is like the first diamond in the double-diamond process: building the right thing. That means plenty of emphasis on research.

The second day is about creation and execution. It’s like that second diamond: building the thing right. This could cover potentially everything but this year the focus will be on content design.

The third day is like the third diamond in the double dia— no, wait. The third day is about growing, scaling, and maintaining design. That means there’ll be quite an emphasis on topics like design systems and design engineering, maybe design ops.

But none of the days will be exclusively about a single topic. There are evergreen topics that apply throughout the process: product design, design ethics, inclusive design.

It’s a lot to juggle! But I’m managing to overcome choice paralysis and assemble a very exciting line-up indeed. Trust me—you won’t want to miss this!

Early bird tickets are available until February 28th. That’s just a few days away. I recommend getting your tickets now—you won’t regret it!

Quite a few people are bringing their entire teams, which is perfect. UX London can be both an educational experience and a team-bonding exercise. Let’s face it, it’s been too long since any of us have had a good off-site.

If you’re one of those lucky people who’s coming along (or if you’re planning to), I’m curious: given the themes mentioned above, are there specific topics that you’d hope to see covered? Drop me a line and let me know.

Also, if you read the description of the event and think “Oh, I know the perfect speaker!” then I’d love to hear from you. Maybe that speaker is you. (Although, cards on the table; if you look like me—another middle-aged white man—I may take some convincing.)

Right. Time to get back to my crazy wall of conference curation.

Priority of design inputs

As you may already know, I’m a nerd for design principles. I collect them. I did a podcast episode on them. I even have a favourite design principle. It’s from the HTML design principles. The priority of constituencies:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

It’s all about priorities, see?

Prioritisation isn’t easy, and it gets harder the more factors come into play: user needs, business needs, technical constraints. But it’s worth investing the time to get agreement on the priority of your constituencies. And then formulate that agreement into design principles.

Jason is also a fan of the priority of constituencies. He recently wrote about applying it to design systems and came up with this:

User needs come before the needs of component consumers, which come before the needs of component developers, which come before the needs of the design system team, which come before theoretical purity.

That got me thinking about how this framing could be applied to other areas, like design.

Designers are used to juggling different needs (or constituencies): user needs, business needs, and so on. But what I’m interested in is how designers weigh up different inputs into the design process.

The obvious inputs are the insights you get from research. But even that can be divided into different categories. There’s qualitative research (talking to people) and quantitative research (sifting through numbers). Which gets higher priority?

There are other inputs too. Take best practices. If there’s a tried and tested solution to a problem, should that take priority over something new and untested? Maybe another way of phrasing it is to call it experience (whether that’s the designer’s own experience or the collective experience of the industry).

And though we might not like to acknowledge it because it doesn’t sound very scientific, gut instinct is another input into the design process. Although maybe that’s also related to experience.

Finally, how do you prioritise stakeholder wishes? What do you do if the client or the boss wants something that conflicts with user needs?

I could imagine a priority of design inputs that looks like this:

Qualitative research over quantitative research over stakeholder wishes over best practices over gut instinct.

But that could change over time. Maybe an experienced designer can put their gut instinct higher in the list, overruling best practices and stakeholder wishes …and maybe even some research insights? I don’t know.

I’ve talked before about how design principles should be reversible in a different context. The original priority of constituencies, for example, applies to HTML. But if you were to invert it, it would work for XML. Different projects have different priorities.

I could certainly imagine company cultures where stakeholder wishes take top billing. There are definitely companies that value quantitative research (data and analytics) above qualitative research (user interviews), and vice-versa.

Is a priority of design inputs something that should change from project to project? If so, maybe it would be good to hammer it out in the discovery phase so everyone’s on the same page.

Anyway, I’m just thinking out loud here. This is something I should chat more about with my colleagues to get their take.

Design research on the Clearleft podcast

We’re halfway through the third season of the Clearleft podcast already!

Episode three is all about design research. I like the narrative structure of this. It’s a bit like a whodunnit, but it’s more like a whydunnit. The “why” question is “why aren’t companies hiring more researchers?”

The scene of the crime is this year’s UX Fest, where talks by both Teresa Torres and Gregg Bernstein uncovered the shocking lack of researchers. From there, I take up the investigation with Maite Otondo and Stephanie Troeth.

I won’t spoil it but by the end there’s an answer to the mystery.

I learned a lot along the way too. I realised how many axes of research there are. There’s qualitative research (stories, emotion, and context) and then there’s quantitative research (volume and data). But there’s also evaluative research (testing a hypothesis) and generative research (exploring a problem space before creating a solution). By my count that gives four possible combos: qualitative evaluative research, quantitative evaluative research, qualitative generative research, and quantitative generative research. Phew!

Steph was a terrific guest. Only a fraction of our conversation made it into the episode, but we chatted for ages.

And Maite kind of blew my mind too, especially when she was talking about the relationship between research and design and she said:

Research is about the present and design is about the future.

🤯

I’m going to use that quote again in a future episode. In fact, this episode on design research leads directly into the next two episodes. You won’t want to miss them. So if you’re not already subscribed to the Clearleft podcast, you should get on that, whether it’s via the RSS feed, Apple, Google, Spotify, Overcast, or wherever you get your podcasts from.

Have a listen to this episode on design research and if you’re a researcher yourself, remember that unlike most companies we value research at Clearleft and that’s why we’re hiring another researcher right now. Come and work with us!

Ampvisory

I was very inspired by something Terence Eden wrote on his blog last year. A report from the AMP Advisory Committee Meeting:

I don’t like AMP. I think that Google’s Accelerated Mobile Pages are a bad idea, poorly executed, and almost-certainly anti-competitive.

So, I decided to join the AC (Advisory Committee) for AMP.

Like Terence, I’m not a fan of Google AMP—my initially positive reaction to it soured over time as it became clear that Google were blackmailing publishers by privileging AMP pages in Google Search. But all I ever did was bitch and moan about it on my website. Terence actually did something.

So this year I put myself forward as a candidate for the AMP advisory committee. I have no idea how the election process works (or who does the voting) but thanks to whoever voted for me. I’m now a member of the AMP advisory committee. If you look at that blog post announcing the election results, you’ll see the brief blurb from everyone who was voted in. Most of them are positively bullish on AMP. Mine is not:

Jeremy Keith is a writer and web developer dedicated to an open web. He is concerned that AMP is being unfairly privileged by Google’s search engine instead of competing on its own merits.

The good news is that my main beef with AMP is already being dealt with. I wanted exactly what Terence said:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

That’s happening as of May of this year. Just as well—the AMP advisory committee have absolutely zero influence on Google search. I’m not sure how much influence we have at all really.

This is an interesting time for AMP …whatever AMP is.

See, that’s been a problem with Google AMP from the start. There are multiple definitions of what AMP is. At the outset, it seemed pretty straightforward. AMP is a format. It has a doctype and rules that you have to meet in order to be “valid” AMP. Part of that ruleset involved eschewing HTML elements like img and video in favour of web components like amp-img and amp-video.

That messaging changed over time. We were told that AMP is the collection of web components. If that’s the case, then I have no problem at all with AMP. People are free to use the components or not. And if the project produces performant accessible web components, then that’s great!

But right now it’s not at all clear which AMP people are talking about, even in the advisory committee. When we discuss improving AMP, do we mean the individual components or the set of rules that qualify an AMP page being “valid”?

The use-case for AMP-the-format (as opposed to AMP-the-library-of-components) was pretty clear. If you were a publisher and you wanted to appear in the top stories carousel in Google search, you had to publish using AMP. Just using the components wasn’t enough. Your pages had to be validated as AMP-the-format.

That’s no longer the case. From May, pages that are fast enough will qualify for the top stories carousel. What will publishers do then? Will they still maintain separate AMP-the-format pages? Time will tell.

I suspect publishers will ditch AMP-the-format, although it probably won’t happen overnight. I don’t think anyone likes being blackmailed by a search engine:

An engineer at a major news publication who asked not to be named because the publisher had not authorized an interview said Google’s size is what led publishers to use AMP.

The pre-rendering (along with the lightning bolt) that happens for AMP pages in Google search might be a reason for publishers to maintain their separate AMP-the-format pages. But I suspect publishers don’t actually think the benefits of pre-rendering outweigh the costs: pre-rendered AMP-the-format pages are served from Google’s servers with a Google URL. If anything, I think that publishers will look forward to having the best of both worlds—having their pages appear in the top stories carousel, but not having their pages hijacked by Google’s so-called-cache.

Does AMP-the-format even have a future without Google search propping it up? I hope not. I think it would make everything much clearer if AMP-the-format went away, leaving AMP-the-collection-of-components. We’d finally see these components being evaluated on their own merits—usefulness, performance, accessibility—without unfair interference.

So my role on the advisory committee so far has been to push for clarification on what we’re supposed to be advising on.

I think it’s good that I’m on the advisory committee, although I imagine my opinions could easily be dismissed given my public record of dissent. I may well be fooling myself though, like those people who go to work at Facebook and try to justify it by saying they can accomplish more from inside than outside (or whatever else they tell themselves to sleep at night).

The topic I’ve volunteered to help with is somewhat existential in nature: what even is AMP? I’m happy to spend some time on that. I think it’ll be good for everyone to try to get that sorted, regardless of how you feel about the AMP project.

I have no intention of giving any of my unpaid labour towards the actual components themselves. I know AMP is theoretically open source now, but let’s face it, it’ll always be perceived as a Google-led project so Google can pay people to work on it.

That said, I’ve also recently joined a web components community group that Lea instigated. Remember she wrote that great blog post recently about the failed promise of web components? I’m not sure how much I can contribute to the group (maybe some meta-advice on the nature of good design principles?) but at the very least I can serve as a bridge between the community group and the AMP advisory committee.

After all, AMP is a collection of web components. Maybe.

Making Research Count by Cyd Harrell

The brilliant Cyd Harrell is opening up day two of An Event Apart in Chicago. I’m going to attempt to liveblog her talk on making research count…

Research gets done …and then sits in a report, gathering dust.

Research matters. But how do we make it count? We need allies. Maybe we need more money. Perhaps we need more participation from people not on the product team.

If you’re doing real research on a schedule, sharing it on a regular basis, making people’s eyes light up …then you’ve won!

Research counts when it answers questions that people care about. But you probably don’t want to directly ask “Hey, what questions do you want answered?”

Research can explain oddities in analytics, weird feedback from customers, unexpected uses of products, and strange hunches (not just your own).

Curious people with power are the most useful ones to influence. Not just hierarchical power. Engineers often have a lot of power. So ask, “Who is the most curious engineer, and how can I drag them out on a research session with me?”

At 18F, Cyd found that a lot of the nodes of power were people at the mid level of the organisation who had been there a while—they know a lot of people up and down the chain. If you can get one of those people excited about research, they can spread it.

Open up your practice. Demystify it. Put as much effort into communicating as into practicing. Create opportunities for people to ask questions and learn.

You can think about communities of practice in the obvious way: people who do similar things to us, and other people who make design decisions. But really, everyone in the organisation is affected by design decisions.

Cyd likes to do office hours. People can come by and ask questions. You could open a Slack channel. You can run brown bag lunches to train people in basic user research techniques. In more conventional organisations, a newsletter is a surprisingly effective tool for sharing the latest findings from research. And use your walls to show work in progress.

Research counts when people can see it for themselves—not just when it’s reported from afar. Ask yourself: who in your organisation is disconnected from their user? It’s difficult for people to maintain their motivation in that position.

When someone has been in the field with you, the data doesn’t have to be explained.

Whoever’s curious. Whoever’s disconnected. Invite them along. Show them what you’re doing.

Think about the qualities of a good invitation (for a party, say). Make the rules clear. Make sure they want to come back. Design the experience of observing research. Make sure everyone has tools. Give everyone a responsibility. Be like Willy Wonka—he gave clear rules to the invited guests. And sure, things didn’t go great when people broke the rules, but at the end, everyone still went home with the truckload of chocolate they were promised.

People who get to ask a question buy in to the results. Those people feel a sense of ownership for the research.

Research counts when methods fit the question. Think about what the right question is and how you might go about answering it.

You can mix your methods. Interviews. Diary studies. Card sorting. Shadowing. You can ground the user research in competitor analysis.

Back in 2008, Cyd was contacted by a company who wanted to know: how do people really use phones in their cars? Cyd’s team would ride along with people, interviewing them, observing them, taking pictures and video.

Later at the federal government, Cyd was asked: what are the best practices for government digital transformation? How to answer that? It’s so broad! Interviews? Who knows what?

They refined the question: what makes modern digital practices stick within a government entity? They looked at what worked when companies were going online, to see if there was anything that government could learn from. Then they created a set of really focused interview questions. What does digital transformation mean? How do you know when you’re done? What are the biggest obstacles to this work? How do you make changes last?

They used a technique called cluster recruiting to figure out who else to talk to (by asking participants who else they should be talking to).

There is no one research method that will always work for you. Cutting the right corners at the right time lets you be fast and cheap. Cyd’s bare-bones research kit costs about $20: a notebook, a pen, a consent form, and the price of a cup of coffee. She also created a quick score sheet for when she’s not in a position to have research transcribed.

Always label your assumptions before beginning your research. Maybe you’re assuming that something is a frustrating experience that needs fixing, but it might emerge that it doesn’t need fixing—great! You’ve just saved a whole lotta money.

Research counts when researchers tell the story well. Synthesis works best as a conversational practice. It’s hard to do by yourself. You start telling stories when you come back from the field (sometimes it starts when you’re still out in the field, talking about the most interesting observations).

Miller’s Law is a great conceptual framework:

To understand what another person is saying, you must assume that it is true and try to imagine what it could be true of.

You’re probably familiar with the “five whys”. What about the “five ways”? If people talk about something five different ways, it’s virtually certain that one of them will be an apt metaphor. So ask “Can you say that in a different way?” five times.

Spend as much time on communicating outcomes as you did on executing the work.

After research, play back how many people you spoke to, the most valuable insight you gained, the themes that are emerging. Describe the question you wanted to answer, what answers you got, and what you’re going to do next. If you’re in an organisation that values memos, write a memo. Or you could make a video. Or you could write directly into backlog tickets. And don’t forget the wall work! GDS have wonderfully full walls in their research department.

In the end, the best tool for research is an illuminating story.

Cyd was doing research at the Bakersfield courthouse. The hypothesis was that a lot of people weren’t engaging with technology in the court system. She approached a man named Manuel who was positively quaking. He was going through a custody battle. He said, “I don’t know technology but it doesn’t scare me. I’m shaking because this paperwork just gets to me—it’s terrifying.” He said he would gladly pay for someone to help him with the paperwork. Cyd wrote a report on this story. Months later, they heard people in the organisation asking questions like “How would this help Manuel?”

Sometimes you do have to fight (nicely).

People will push back on the time spent on research—they’ll say it doesn’t fit the sprint plan. You can have a three day research plan. Day 1: write scripts. Day 2: go to the users and talk to them. Day 3: play it back. People on a project spend more time than that in Slack.

People will say you can’t talk to the customers. In that situation, you could talk to people who are in the same sector as your company’s customers.

People will question the return on investment for research. Do it cheaply and show the very low costs. Then people stop talking about the money and start talking about the results.

People will claim that qualitative user research is not statistically significant. That’s true. But research is something else. It answers different questions.

People will question whether a senior person needs to be involved. It is not fair to ask the intern to do all the work involved in research.

People will say you can’t always do research. But Cyd firmly believes that there’s always room for some research.

  • Make allies in customer research.
  • Find the most curious engineer on the team, go to lunch with them, and feed them the most interesting research insights.
  • Record a pain point and send a video to executives.
  • If there’s really no budget, maybe you can get away with not paying incentives, but perhaps you can provide some other swag instead.

One of the best things you can do is be there, non-judgementally, making friends. It takes time, but it works. Research is like a dandelion in flight. Once it’s out and about, taking root, the more that research counts.

Opening up the AMP cache

I have a proposal that I think might alleviate some of the animosity around Google AMP. You can jump straight to the proposal or get some of the back story first…

The AMP format

Google AMP is exactly the kind of framework I’d like to get behind. Unlike most front-end frameworks, its components take a declarative approach—no knowledge of JavaScript required. I think Lea’s excellent Mavo is the only other major framework that takes this inclusive approach. All the configuration happens in markup, and all the styling happens in CSS. Excellent!

But I cannot get behind AMP.

Instead of competing on its own merits, AMP is unfairly propped up by the search engine of its parent company, Google. That makes it very hard to evaluate whether AMP is being used on its own merits. Instead, the evidence suggests that most publishers of AMP pages are doing so because they feel they have to, rather than because they want to. That’s a real shame, because as a library of web components, AMP seems pretty good. But there’s just no way to evaluate AMP-the-format without taking into account AMP-the-ecosystem.

The AMP ecosystem

Google AMP ostensibly exists to make the web faster. Initially the focus was specifically on mobile performance, but that distinction has since fallen by the wayside. The idea is that by using AMP’s web components, your pages will be speedy. Though, as Andy Davies points out, this isn’t always the case:

This is where I get confused… https://independent.co.uk only have an AMP site yet it’s performance is awful from a user perspective - isn’t AMP supposed to prevent this?

See also: Google AMP lowered our page speed, and there’s no choice but to use it:

According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95.

Publishers who already have fast web pages—like The Guardian—are still compelled to make AMP versions of their stories because of the search benefits reserved for AMP. As Terence Eden reported from a meeting of the AMP advisory committee:

We heard, several times, that publishers don’t like AMP. They feel forced to use it because otherwise they don’t get into Google’s news carousel — right at the top of the search results.

Some people felt aggrieved that all the hard work they’d done to speed up their sites was for nothing.

The Google AMP team are at pains to point out that AMP is not a ranking factor in search. That’s true. But it is unfairly privileged in other ways. Only AMP pages can appear in the Top Stories carousel …which appears above any other search results. As I’ve said before:

Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

From A letter about Google AMP:

Content that “opts in” to AMP and the associated hosting within Google’s domain is granted preferential search promotion, including (for news articles) a position above all other results.

That’s not the only way that AMP pages get preferential treatment. It turns out that the secret to the speed of AMP pages isn’t the web components. It’s the prerendering.

The AMP cache

If you’ve ever seen an AMP page in a list of search results, you’ll have noticed the little lightning icon. If you’ve ever tapped on that search result, you’ll have noticed that the page loads blazingly fast!

That’s not down to AMP-the-format, alas. That’s down to the fact that the page has been prerendered by Google before you even went to it. If any page were prerendered that way, it would load blazingly fast. But currently, this privilege is reserved for AMP pages only.

If, after tapping through to that AMP page, you looked at the address bar of your browser, you might have noticed something odd. Even though you might have thought you were visiting The Washington Post, or The New York Times, the URL of the (blazingly fast) page you’re looking at is still under Google’s domain. That’s because Google hosts any AMP pages that it prerenders.

Google calls this “the AMP cache”, but it would be better described as “AMP hosting”. The web page sent down the wire is hosted on Google’s domain.

Here’s that AMP letter again:

When a user navigates from Google to a piece of content Google has recommended, they are, unwittingly, remaining within Google’s ecosystem.

Through gritted teeth, I will refer to this as “the AMP cache”, because that’s what everyone else calls it. But make no mistake, Google is hosting—not caching—these pages.

But why host the pages on a Google domain? Why not prerender the original URLs?

Prerendering and privacy

Scott summed up the situation with AMP nicely:

The pitch I think site owners are hearing is: let us host your pages on our domain and we’ll promote them in search results AND preload them so they feel “instant.” To opt-in, build pages using this component syntax.

But perhaps we could de-couple the AMP format from the AMP cache.

That’s what Terence suggests:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

The AMP letter, too:

Instead of granting premium placement in search results only to AMP, provide the same perks to all pages that meet an objective, neutral performance criterion such as Speed Index.

Scott reiterates:

It’s been said before but it would be so good for the web if pages with a Lighthouse score over say, 90 could get into that top search result area, even if they’re not built using Google’s AMP framework. Feels wrong to have to rebuild/reproduce an already-fast site just for SEO.

This was also what I was calling for. But then Malte pointed out something that stumped me. Privacy.

Here’s the problem…

Let’s say Google do indeed prerender already-fast pages when they’re listed in search results. You, a search user, type something into Google. A list of results comes back. Google begins pre-rendering some of them. But you don’t end up clicking through to those pages. Nonetheless, the servers those pages are hosted on have received a GET request coming from a Google search. Those publishers now know that a particular (cookied?) user could have clicked through to their site. That’s very different from knowing when someone has actually arrived at a particular site.

And that’s why Google host all the AMP pages that they prerender. Given the privacy implications of prerendering non-Google URLs, I must admit that I see their point.

Still, it’s a real shame to miss out on the speed benefit of prerendering:

Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit.

A modest proposal

Why is Google’s AMP cache just for AMP pages? (Y’know, apart from the obvious answer that it’s in the name.)

What if Google were allowed to host non-AMP pages? Google search could then prerender those pages just like it currently does for AMP pages. There would be no privacy leaks; everything would happen on the same domain—google.com or ampproject.org or whatever—just as currently happens with AMP pages.

Don’t get me wrong: I’m not suggesting that Google should make a 1:1 model of the web just to prerender search results. I think that the implementation would need to have two important requirements:

  1. Hosting needs to be opt-in.
  2. Only fast pages should be prerendered.

Opting in

Currently, by publishing a page using the AMP format, publishers give implicit approval to Google to host that page on Google’s servers and serve up this Google-hosted version from search results. This has always struck me as being legally iffy. I’ve looked in the AMP documentation to try to find any explicit granting of hosting permission (e.g. “By linking to this JavaScript file, you hereby give Google the right to serve up our copies of your content.”), but no luck. So even with the current situation, I think a clear opt-in for hosting would be beneficial.

This could be a meta element. Maybe something like:

<meta name="caches-allowed" content="google">

This would have the nice benefit of allowing comma-separated values:

<meta name="caches-allowed" content="google, yandex">

(The name is just a strawman, by the way—I’m not suggesting that this is what the final implementation would actually look like.)

If not a meta element, then perhaps this could be part of robots.txt? Although my feeling is that this needs to happen on a document-by-document basis rather than site-wide.
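For the sake of illustration, a robots.txt version of the same strawman might look something like this (again, no such directive actually exists, and it would apply per path rather than per document):

User-agent: *
# Hypothetical directive—just as much a strawman as the meta element above
Caches-allowed: google, yandex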

Many people will, quite rightly, never want Google—or anyone else—to host and serve up their content. That’s why it’s so important that this behaviour needs to be opt-in. It’s kind of appalling that the current hosting of AMP pages is opt-in-by-proxy-sort-of.

Criteria for prerendering

Which pages should be blessed with hosting and prerendering? The fast ones. That’s sorta the whole point of AMP. But right now, there’s a lot of resentment by people with already-fast websites who quite rightly feel they shouldn’t have to use the AMP format to benefit from the AMP ecosystem.

Page speed is already a ranking factor. It doesn’t seem like too much of a stretch to extend its benefits to hosting and prerendering. As mentioned above, there are already a few possible metrics to use:

  • Page Speed Index
  • Lighthouse
  • Web Page Test
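As a rough illustration of how a publisher could already check where a page stands, here’s the Lighthouse command-line tool in action (this assumes Node.js is installed, and the URL is just an example):

# Assumes Node.js; example URL only
npm install -g lighthouse
lighthouse https://example.com/article --only-categories=performance --output=json --output-path=./report.json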

Ah, but what if a page has a good score when it’s indexed, but then gets worse afterwards? Not a problem! The version of the page that’s measured is the same version of the page that gets hosted and prerendered. Google can confidently say “This page is fast!” After all, they’re the ones serving up the page.

That does raise the question of how often Google should check back with the original URL to see if it has changed/worsened/improved. The answer to that question is however long it currently takes to check back in on AMP pages:

Each time a user accesses AMP content from the cache, the content is automatically updated, and the updated version is served to the next user once the content has been cached.

Issues

This proposal does not solve the problem with the address bar. You’d still find yourself looking at a page from The Washington Post or The New York Times (or adactio.com) but seeing a completely different URL in your browser. That’s not good, for all the reasons outlined in the AMP letter.

In fact, this proposal could potentially make the situation worse. It would allow even more sites to be impersonated by Google’s URLs. Where currently only AMP pages are bad actors in terms of URL confusion, opening up the AMP cache would allow equal opportunity URL confusion.

What I’m suggesting is definitely not a long-term solution. The long-term solutions currently being investigated are technically tricky and will take quite a while to come to fruition—web packages and signed exchanges. In the meantime, what I’m proposing is a stopgap solution that’s technically a lot simpler. But it won’t solve all the problems with AMP.

This proposal solves one problem—AMP pages being unfairly privileged in search results—but does nothing to solve the other, perhaps more serious problem: the erosion of site identity.

Measuring

Currently, Google can assess whether a page should be hosted and prerendered by checking to see if it’s a valid AMP page. That test would need to be widened to include a different measurement of performance, but those measurements already exist.

I can see how this assessment might not be as quick as checking for AMP validity. That might affect whether non-AMP pages could be measured quickly enough to end up in the Top Stories carousel, which is, by its nature, time-sensitive. But search results are not necessarily as time-sensitive. Let’s start there.

Assets

Currently, AMP pages can be prerendered without fetching anything other than the markup of the AMP page itself. All the CSS is inline. There are no initial requests for other kinds of content like images. That’s because there are no img elements on the page: authors must use amp-img instead. The image itself isn’t loaded until the user is on the page.
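
To make that concrete, here’s roughly what the difference looks like in the markup (a sketch—details elided):

    <!-- a regular image: the browser requests it as soon as it parses the element -->
    <img src="photo.jpg" alt="A boat" width="600" height="400">

    <!-- the AMP equivalent: the AMP runtime decides when to fetch the image -->
    <amp-img src="photo.jpg" alt="A boat" width="600" height="400" layout="responsive"></amp-img>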

If the AMP cache were to be opened up to non-AMP pages, then any content required for prerendering would also need to be hosted on that same domain. Otherwise, there’s privacy leakage.

This definitely introduces an extra level of complexity. Paths to assets within the markup might need to be re-written to point to the Google-hosted equivalents. There would almost certainly need to be a limit on the number of assets allowed. Though, for performance, that’s no bad thing.
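
To illustrate the kind of rewriting I mean—and this URL structure is purely hypothetical—an image reference like this:

    <img src="https://example.com/images/photo.jpg" alt="A boat">

…might become something like this in the Google-hosted copy:

    <img src="https://www.google.com/amp/s/example.com/images/photo.jpg" alt="A boat">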

Make no mistake, figuring out what to do about assets—style sheets, scripts, and images—is very challenging indeed. Luckily, there are very smart people on the Google AMP team. If that brainpower were to focus on this problem, I am confident they could solve it.

Summary

  1. Prerendering of non-Google URLs is problematic for privacy reasons, so Google needs to be able to host pages in order to prerender them.
  2. Currently, that’s only done for pages using the AMP format.
  3. The AMP cache—and with it, prerendering—should be decoupled from the AMP format, and opened up to other fast web pages.

There will be technical challenges, but hopefully nothing insurmountable.

I honestly can’t see what Google have to lose here. If their goal is genuinely to reward fast pages, then opening up their AMP cache to fast non-AMP pages will actively encourage people to make fast web pages (without having to switch over to the AMP format).

I’ve deliberately kept the details vague—what the opt-in should look like; what the speed measurement should be; how to handle assets—because I’m sure smarter folks than me can figure that stuff out.

I would really like to know what other people think about this proposal. Obviously, I’d love to hear from members of the Google AMP team. But I’d also love to hear from publishers. And I’d very much like to know what people in the web performance community think about this. (Write a blog post and send me a webmention.)

What am I missing here? What haven’t I thought of? What are the potential pitfalls (and are they any worse than the current acrimonious situation with Google AMP)?

I would really love it if someone with a fast website were in a position to say, “Hey Google, I’m giving you permission to host this page so that it can be prerendered.”

I would really love it if someone with a slow website could say, “Oh, shit! We’d better make our existing website faster or Google won’t host our pages for prerendering.”

And I would dearly love to finally be able to embrace AMP-the-format with a clear conscience. But as long as prerendering is joined at the hip to the AMP format, the injustice of the situation only harms the AMP project.

Google, open up the AMP cache.

AMPstinction

I’ve come to believe that the goal of any good framework should be to make itself unnecessary.

Brian said it explicitly of his PhoneGap project:

The ultimate purpose of PhoneGap is to cease to exist.

That makes total sense, especially if your code is a polyfill—those solutions are temporary by design. Autoprefixer is another good example of a piece of code that becomes less and less necessary over time.

But I think it’s equally true of any successful framework or library. If the framework becomes popular enough, it will inevitably end up influencing the standards process, thereby becoming dispensable.

jQuery is the classic example of this. There’s very little reason to use jQuery these days because you can accomplish so much with browser-native JavaScript. But the reason why you can accomplish so much without jQuery is because of jQuery. I don’t think we would have querySelector without jQuery. The library proved the need for the feature. The same is true for a whole load of DOM scripting features.
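
To pick just one example of that journey from library to standard:

    // what used to require jQuery…
    var links = $('.post a');

    // …is now baked into browsers
    var links = document.querySelectorAll('.post a');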

The same process is almost certain to occur with React—it’s a good bet there will be a standardised equivalent to the virtual DOM at some point.

When Google first unveiled AMP, its intentions weren’t clear to me. I hoped that it existed purely to make itself redundant:

As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask “Why can’t our regular pages be this fast?” By showing that there is life beyond big bloated invasive web pages, perhaps the AMP project will work as a demo of what the whole web could be.

Alas, as time has passed, that hope shows no signs of being fulfilled. If anything, I’ve noticed publishers using the existence of their AMP pages as a justification for just letting their “regular” pages put on weight.

Worse yet, the messaging from Google around AMP has shifted. Instead of pitching it as a format for creating parallel versions of your web pages, they’re now also extolling the virtues of having your AMP pages be the only version you publish:

In fact, AMP’s evolution has made it a viable solution to build entire websites.

On an episode of the Dev Mode podcast a while back, AMP was a hotly-debated topic. But even those defending AMP were doing so on the understanding that it was more a proof-of-concept than a long-term solution (and also that AMP is just for news stories—something else that Google are keen to change).

But now it’s clear that the Google AMP Project is being marketed more like a framework for the future: a collection of web components that prioritise performance …which is kind of odd, because that’s also what Google’s Polymer project is. The difference being that pages made with Polymer don’t get preferential treatment in Google’s search results. I can’t help but wonder how the Polymer team feels about AMP’s gradual pivot onto their territory.

If the AMP project existed in order to create a web where AMP was no longer needed, I think I could get behind it. But the more it’s positioned as the only viable solution to solving performance, the more uncomfortable I am with it.

Which, by the way, brings me to one of the most pernicious ideas around Google AMP—positioning anyone opposed to it as not caring about web performance. Nothing could be further from the truth. It’s precisely because performance on the web is so important that it deserves a long-term solution, co-created by all of us: not some commandments delivered to us from on high by one organisation, enforced through preferential treatment by that organisation’s monopoly in search.

It’s the classic logical fallacy:

  1. Performance! Something must be done!
  2. AMP is something.
  3. Now something has been done.

By marketing itself as the only viable solution to the web performance problem, I think the AMP project is doing itself a great disservice. If it positioned itself as an example to be emulated, I would welcome it.

I wish that AMP were being marketed more like a temporary polyfill. And as with any polyfill, I look forward to the day when AMP is no longer necessary.

I want AMP to become extinct. I genuinely think that the Google AMP team should share that wish.

Workshops

There’s a veritable smörgåsbord of great workshops on the horizon…

Clearleft presents a workshop with Jan Chipchase on field research in London on May 29th, and again on May 30th. The first day is sold out, but there are still tickets available for the second workshop (tickets are £654). If you’ve read Jan’s beautiful Field Study Handbook, then you’ll know what a great opportunity it is to spend a day in his company. But don’t dilly-dally—that second day is likely to sell out too.

This event is for product teams, designers, researchers, insights teams, in agencies, in-house, local and central government. People who are curious about human interaction, and their place in the world.

I’m really excited that Sarah and Val are finally bringing their web animation workshop to Brighton (I’ve been not-so-subtly suggesting that they do this for a while now). It’s a two day workshop on July 9th and 10th. There are still some tickets available, but probably not for much longer (tickets are £639). The workshop is happening at 68 Middle Street, the home of Clearleft.

This workshop will get you up and running with web animation in less time than it would take to read all the tutorials you have bookmarked. Over two days, you’ll go from beginner or novice web animator to having expert level knowledge of the current web animation landscape. You’ll get an in-depth look at animating with CSS, JavaScript, and SVG through hands-on exercises and learn the most efficient workflows for each.

A bit before that, though, there’s a one-off workshop on responsive web typography from Rich on Thursday, June 29th, also at 68 Middle Street. You can expect the same kind of brilliance that he demonstrated in his insta-classic Web Typography book, but delivered by the man himself.

You will learn how to combine centuries-old craft with cutting edge technology, including variable fonts, to design and develop for screens of all shapes and sizes, and provide the best reading experiences for your modern readers.

Whether you’re a designer or a developer, just starting out or seasoned pro, there will be plenty in this workshop to get your teeth stuck into.

Tickets are just £435, and best of all, that includes a ticket to the Ampersand conference the next day (standalone conference tickets are £235 so the workshop/conference combo is a real bargain). This year’s Ampersand is shaping up to be an unmissable event (isn’t it always?), so the workshop is like an added bonus.

See you there!

Links from a talk

In two weeks’ time, I’ll be in Seattle for An Event Apart. I’ll be giving a brand new talk. The title is The Way Of The Web (although perhaps a more accurate title would be The Layers Of The Web).

Here’s the description:

Do you ever get overwhelmed by the ever-changing nature of web design and development? Exhausting, isn’t it? How are you supposed to know which technologies and tools you should invest your time in? Will they stick around or will you just have to relearn everything in another few months? Join Jeremy as he takes a tour of the past, present, and future of working on the web. From the building blocks of HTML, CSS, and JavaScript through to frameworks and libraries right up to the latest and greatest Progressive Web Apps, this talk will examine our collective assumptions with a critical eye. By learning from the past, we can make sensible design decisions today to build the web of tomorrow.

There’s a direct evolution line from my previous talks—Resilience and Evaluating Technology—to this new one. (Spoiler: everything I talk about is in some way related to progressive enhancement …even if I never use the words “progressive” or “enhancement” in the talks.)

I’ve been preparing this new talk for months. It started with a mind map—an A3 sheet of paper with disconnected thoughts, like something from the scene in the crime movie where they enter the lair of the serial killer and find a crazy wall.

Then I set it aside and began procrastinating. But it was the good kind of procrastinating, right? I mean, I had made a start and all those thoughts were now bubbling around in my head.

Eventually I forced myself to put things in some sort of order and started creating slides. That’s the beginning of the horrible process of bouncing between thinking “this is pretty good!” and “this is absolute crap!” To be honest, I never actually know if a talk is any good until I give it in front of an audience (practice runs at work are great for getting feedback but they’re not the same as doing the talk for real).

Anyway, I think the talk is ready to roll. If you see me giving this talk and you’re interested in diving deeper into the topics raised, I’ve gathered together some of the sources I used.

Further Reading

Related posts on adactio.com

Progressive Web Apps

Books

Films

The meaning of AMP

Ethan quite rightly points out some semantic sleight of hand by Google’s AMP team:

But when I hear AMP described as an open, community-led project, it strikes me as incredibly problematic, and more than a little troubling. AMP is, I think, best described as nominally open-source. It’s a corporate-led product initiative built with, and distributed on, open web technologies.

But so what, right? Tom-ay-to, tom-a-to. Well, here’s a pernicious example of where it matters: in a recent announcement of their intent to ship a new addition to HTML, the Google Chrome team cited the mood of the web development community thusly:

Web developers: Positive (AMP team indicated desire to start using the attribute)

If AMP were actually the product of working web developers, this justification would make sense. As it is, we’ve got one team at Google citing the preference of another team at Google but representing it as the will of the people.

This is just one example of AMP’s sneaky marketing where some finely-shaved semantics allows them to appear far more reasonable than they actually are.

At AMP Conf, the Google Search team were at pains to repeat over and over that AMP pages wouldn’t get any preferential treatment in search results …but they appear in a carousel above the search results. Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

The same semantic nit-picking can be found in their defence of caching. See, they’ve even got me calling it caching! It’s hosting. If I click on a search result, and I am taken to a page that has a URL beginning with https://www.google.com/amp/s/... then that page is being hosted on the domain google.com. That is literally what hosting means. Now, you might argue that the original version was hosted on a different domain, but the version that the user gets sent to is the Google copy. You can call it caching if you like, but you can’t tell me that Google aren’t hosting AMP pages.

That’s a particularly low blow, because it’s such a bait’n’switch. One of the reasons why AMP first appeared to be different to Facebook Instant Articles or Apple News was the promise that you could host your AMP pages yourself. That’s the very reason I first got interested in AMP. But if you actually want the benefits of AMP—appearing in the not-search-results carousel, pre-rendered performance, etc.—then your pages must be hosted by Google.

So, to summarise, here are three statements that Google’s AMP team are currently peddling as being true:

  1. AMP is a community project, not a Google project.
  2. AMP pages don’t receive preferential treatment in search results.
  3. AMP pages are hosted on your own domain.

I don’t think those statements are even truthy, much less true. In fact, if I were looking for the right term to semantically describe any one of those statements, the closest in meaning would be this:

A statement used intentionally for the purpose of deception.

That is the dictionary definition of a lie.

Update: That last part was a bit much. Sorry about that. I know it’s a bit much because The Register got all gloaty about it.

I don’t think the developers working on the AMP format are intentionally deceptive (although they are engaging in some impressive cognitive gymnastics). The AMP ecosystem, on the other hand, that’s another story—the preferential treatment of Google-hosted AMP pages in the carousel and in search results; that’s messed up.

Still, I would do well to remember that there are well-meaning people working on even the fishiest of projects.

Except for the people working at the shitrag that is The Register.

(The other strong signal that I overstepped the bounds of decency was that this post attracted the pond scum of Hacker News. That’s another place where the “well-meaning people work on even the fishiest of projects” rule definitely doesn’t apply.)

Balance

This year’s Render conference just wrapped up in Oxford. It was a well-run, well-curated event, right up my alley: two days of a single track of design and development talks (see also: An Event Apart and Smashing Conference for other events in this mold that get it right).

One of my favourite talks was from Frances Ng. She gave a thoroughly entertaining account of her journey from aerospace engineer to front-end engineer, filled with ideas about how to get started, and keep from getting overwhelmed in the world of the web.

She recommended taking the time to occasionally dive deep into a foundational topic, pointing to another talk as a perfect example: Ana Balica gave a great presentation all about HTTP. The second half of the talk was about HTTP/2 and was filled with practical advice, but the first part was a thoroughly geeky history of the Hypertext Transfer Protocol, which I really loved.

While I’m mentoring Amber, we’ve been trying to find a good balance between those deep dives into the foundational topics and the hands-on day-to-day skills needed for web development. So far, I think we’ve found a good balance.

When Amber is ‘round at the Clearleft office, we sit down together and work on the practical aspects of HTML, CSS, and (soon) JavaScript. Last week, for example, we had a really great day diving into CSS selectors and specificity—I watched Amber’s knowledge skyrocket over the course of the day.

But between those visits—which happen every one or two weeks—I’ve been giving Amber homework of sorts. That’s where the foundational building blocks come in. Here are the questions I’ve asked so far:

  • What is the difference between the internet and the web?
  • What is the difference between GET and POST?
  • What are cookies?

The first question is a way of understanding the primacy of URLs on the web. Amber wrote about her research. The second question was getting at an understanding of HTTP. Amber wrote about that too. The third and current question is about state on the web. I’m looking forward to reading a write-up of that soon.

We’re still figuring out this whole mentorship thing but I think this balance of research and exercises is working out well.

In AMP we trust

AMP Conf was one of those deep dive events, with two days dedicated to one single technology: AMP.

Except AMP isn’t really one technology, is it? And therein lies the confusion. This was at the heart of the panel I was on. When we talk about AMP, we could be talking about one of three things:

  1. The AMP format. A bunch of web components. For instance, instead of using an img element on an AMP page, you use an amp-img element.
  2. The AMP rules. There’s one JavaScript file, hosted on Google’s servers, that turns those web components from spans into working elements. No other JavaScript is allowed. All your styles must be in a style element instead of an external file, and there’s a limit on what you can do with those styles.
  3. The AMP cache. The source of most confusion—and even downright enmity—this is what’s behind the fact that when you launch an AMP result from Google search, you don’t go to another website. You see Google’s cached copy of the page instead of the original.

The first piece of AMP—the format—is kind of like a collection of marginal gains. Where the img element might have some performance issues, the amp-img element optimises for perceived performance. But if you just used the AMP web components, it wouldn’t be enough to make your site blazingly fast.

The second part of AMP—the rules—is where the speed gains start to really show. You can’t have an external style sheet, and crucially, you can’t have any third-party scripts other than the AMP script itself. This is key to making AMP pages super fast. It’s not so much about what AMP does; it’s more about what it doesn’t allow. If you never used a single AMP component, but stuck to AMP’s rules disallowing external styles and scripts, you could easily make a page that’s even faster than what AMP can do.
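
In other words, a page that looks something like this—not an AMP component in sight—is following the spirit of those rules (a sketch, obviously, not a complete document):

    <head>
      <meta charset="utf-8">
      <meta name="viewport" content="width=device-width">
      <style>
        /* all styles inlined—no external style sheet */
        body { font-family: sans-serif; max-width: 40em; margin: 0 auto; }
      </style>
      <!-- no third-party scripts, no trackers, no bloated libraries -->
    </head>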

At AMP Conf, Natalia pointed out that The Guardian’s non-AMP pages beat out the AMP pages for performance. So why even have AMP pages? Well, that’s down to the third, most contentious, part of the AMP puzzle.

The AMP cache turns the user experience of visiting an AMP page from fast to instant. While you’re still on the search results page, Google will pre-render an AMP page in the background. Not pre-fetch, pre-render. That’s why it opens so damn fast. It’s also what causes the most confusion for end users.
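
(As an aside, browsers have long exposed a declarative version of this idea as resource hints—though whether Google’s implementation uses them under the hood, I couldn’t say:

    <link rel="prefetch" href="https://example.com/next-page/">
    <link rel="prerender" href="https://example.com/next-page/">

The first just fetches the document ahead of time; the second, as originally specified, renders the whole page in the background before anyone clicks.)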

From my unscientific polling, the behaviour of AMP results confuses the hell out of people. The fact that the page opens instantly isn’t the problem—far from it. It’s the fact that you don’t actually go to another page. Technically, you’re still on Google. An analogous mental model would be an RSS reader, or an email client: you don’t go to an item or an email; you view it in situ.

Well, that mental model would be fine if it were consistent. But in Google search, only some results will behave that way (the AMP pages) and others will behave just like regular links to other websites. No wonder people are confused! Some search results take them away and some search results keep them on Google …even though the page looks like a different website.

The price that we pay for the instantly-opening AMP pages from the Google cache is the URL. Because we’re looking at Google’s pre-rendered copy instead of the original URL, the address bar is not pointing to the site the browser claims to be showing. Everything in the body of the browser looks like an article from The Guardian, but if I look at the URL (which is what security people have been telling us for years is important to avoid being phished), then I’ll see a domain that is not The Guardian’s.

But wait! Couldn’t Google pre-render the page at its original URL?

Yes, they could. But they won’t.

This was a point that Paul kept coming back to: trust. There’s no way that Google can trust that someone else’s URL will play by the AMP rules (no external scripts, only loading embedded content via web components, limited styles, etc.). They can only trust the copies that they themselves are serving up from their cache.

By the way, there was a joint AMP/search panel at AMP Conf with representatives from both teams. As you can imagine, there were many questions for the search team, most of which were Glomar’d. But one thing that the search people said time and again was that Google was not hosting our AMP pages. Now I don’t know if they were trying to make some fine-grained semantic distinction there, but that’s an outright falsehood. If I click on a link, and the URL I get taken to is a Google property, then I am looking at a page hosted by Google. Yes, it might be a copy of a document that started life somewhere else, but if Google are serving something from their cache, they are hosting it.

This is one of the reasons why AMP feels like such a bait’n’switch to me. When it first came along, it felt like a direct competitor to Facebook’s Instant Articles and Apple News. But the big difference, we were told, was that you get to host your own content. That appealed to me much more than having Facebook or Apple host the articles. But now it turns out that Google do host the articles.

This will be the point at which Googlers will say no, no, no, you can totally host your own AMP pages …but you won’t get the benefits of pre-rendering. But without the pre-rendering, what’s the point of even having AMP pages?

Well, there is one non-cache reason to use AMP and it’s a political reason. Beleaguered developers working for publishers of big bloated web pages have a hard time arguing with their boss when they’re told to add another crappy JavaScript tracking script or bloated library to their pages. But when they’re making AMP pages, they can easily refuse, pointing out that the AMP rules don’t allow it. Google plays the bad cop for us, and it’s a very valuable role. Sarah pointed this out on the panel we were on, and she was spot on.

Alright, but what about The Guardian? They’ve already got fast pages, but they still have to create separate AMP pages if they want to get the pre-rendering benefits when they show up in Google search results. Sorry, says Google, but it’s the only way we can trust that the pre-rendered page will be truly fast.

So here’s the impasse we’re at. Google have provided a list of best practices for making fast web pages, but the only way they can truly verify that a page is sticking to those best practices is by hosting their own copy, URLs be damned.

This was the crux of Paul’s argument when he was on the Shop Talk Show podcast (it’s a really good episode—I was genuinely reassured to hear that Paul is not gung-ho about drinking the AMP Kool Aid; he has genuine concerns about the potential downsides for the web).

Initially, I accepted this argument that Google just can’t trust the rest of the web. But the more I talked to people at AMP Conf—and I had some really, really good discussions with people away from the stage—the more I began to question it.

Here’s the thing: the regular Google search can’t guarantee that any web page is actually 100% the right result to return for a search. Instead there’s a lot of fuzziness involved: based on the content, the markup, and the number of trusted sources linking to this, it looks like it should be a good result. In other words, Google search trusts websites to—by and large—do the right thing. Sometimes websites abuse that trust and try to game the system with sneaky tricks. Google responds with penalties when that happens.

Why can’t it be the same for AMP pages? Let me host my own AMP pages (maybe even host my own AMP script) and then when the Googlebot crawls those pages—the same as it crawls any other pages—that’s when it can verify that the AMP page is abiding by the rules. If I do something sneaky and trick Google into flagging a page as fast when it actually isn’t, then take my pre-rendering reward away from me.
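
The verification itself doesn’t sound like the hard part. Just to sketch the idea—crudely; a real checker would use a proper parser, not string matching—here’s the kind of test a crawler could run against a fetched page:

    // a deliberately crude sketch: does this page look like it plays by the rules?
    function playsByTheRules(html) {
      var hasExternalScript = /<script[^>]*\ssrc=/i.test(html);
      var hasExternalStyles = /<link[^>]*rel=["']?stylesheet/i.test(html);
      return !hasExternalScript && !hasExternalStyles;
    }

    playsByTheRules('<style>body { margin: 0 }</style>');            // true
    playsByTheRules('<script src="https://example.com/track.js">');  // false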

To be fair, Google has very, very strict rules about what and how to pre-render the AMP results it’s caching. I can see how allowing even the potential for a false positive would have a negative impact on the user experience of Google search. But c’mon, there are already false positives in regular search results—fake news, spam blogs. Googlers are smart people. They can solve—or at least mitigate—these problems.

Google says it can’t trust our self-hosted AMP pages enough to pre-render them. But they ask for a lot of trust from us. We’re supposed to trust Google to cache and host copies of our pages. We’re supposed to trust Google to provide some mechanism to users to get at the original canonical URL. I’d like to see trust work both ways.