Journal tags: technology


A memex in every web browser

When Matthew Modine’s character first shows up in Christopher Nolan’s Oppenheimer, I figured the rest of the cinema audience wouldn’t have appreciated me shouting out “VANNEVAR BUSH IN THE HOUSE!” so I screamed it on the inside.

The Manhattan Project was not his only claim to fame or infamy. When it comes to the world we now live in, Bush’s idea of the memex has been almost equally influential. His article As We May Think became a touchstone for Douglas Engelbart and later Tim Berners-Lee.

But as Matt Thompson points out:

…the device he describes does not resemble the internet or anything I’ve ever found on it.

Then he says:

What Bush was describing sounds to me like what you might get if you turned a browser history — the most neglected piece of the software — into a robust and fully featured machine of its own. It would help you map the path you charted through a web of knowledge, refine those maps, order them, and share them

Yes! This!! I 100% agree with the description of browser history as “the most neglected piece of the software.” While I wouldn’t go as far as Chris when he says web browsers kind of suck, I’m kind of amazed that there hasn’t been more innovation and competition in this space.

If anything we’ve outsourced the management of our browsing history to services like Delicious and Pinboard, or to tools like Obsidian and Roam Research. Heck, the links section of my website is my attempt to manage and annotate my own associative trails.

Imagine if that were baked right into a web browser. Then imagine how beautiful such a rich source of data might look.
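
Just to sketch what I mean—and this is only a sketch—a browser extension can already read your history through the WebExtensions history API. Something like the following (the grouping-by-hostname heuristic is entirely made up) could be the raw material for associative trails:

```javascript
// A sketch of mining browser history for associative trails, using the
// WebExtensions history API (the promise-based browser.* namespace; needs
// the "history" permission in the extension manifest). The grouping
// heuristic is purely illustrative.
async function buildTrails(days = 7) {
  const since = Date.now() - days * 24 * 60 * 60 * 1000;

  // Each HistoryItem has url, title, lastVisitTime, visitCount…
  const items = await browser.history.search({
    text: '',          // an empty string matches everything
    startTime: since,
    maxResults: 1000
  });

  // Group visits by hostname as a crude stand-in for a "trail"
  const trails = new Map();
  for (const item of items) {
    const host = new URL(item.url).hostname;
    if (!trails.has(host)) {
      trails.set(host, []);
    }
    trails.get(host).push({ title: item.title, url: item.url, when: item.lastVisitTime });
  }
  return trails;
}

buildTrails().then((trails) => {
  for (const [host, visits] of trails) {
    console.log(`${host}: ${visits.length} visits this week`);
  }
});
```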

Like Matt says:

I don’t think anything like this exists. So Bush’s essay still transfixes me.

border:none 2023

In 2013, I spoke at the border:none event in Nuremberg. I gave a talk called The Power of Simplicity.

It was a great little event. Most of the talks were, like mine, on technical topics: design, development, the usual conference material.

This year Joschi and Marc decided to have another border:none event ten years on from the first one. They invited back all the original speakers, as well as some new folks. They kept the ticket price the same as ten years ago—just thirty euros.

For us speakers from the previous event, the only brief they gave us was to consider what’s happened in the past decade. I played it pretty safe and talked about the web. I’ll post a transcript of my talk soon.

Some of the other speakers were far more ambitious. They spoke about themselves, the world, the meaning of life …my presentation was very tame in comparison.

I really, really admire the honesty and vulnerability that those people displayed. Tobias Baldauf in particular took my breath away. He delivered an intensely personal talk on generational trauma that was meticulously researched and took incredible bravery to deliver. It was worth going to Nuremberg just for the privilege of being present for that talk.

Other talks were refreshingly tech-free. There was a talk on cold-water swimming. There was a talk on paragliding. And I don’t mean they were saying “what designers can learn from cold-water swimming” or “how I became a better developer through paragliding.” The talks were literally about swimming and paragliding.

There was a great variety of speakers this time around, including age ranges from puberty to menopause (quite literally—that was the topic of one of the talks). I had the great pleasure of providing some coaching before the event to fifteen-year-old Maya, who was delivering her first talk in English. She did a fantastic job! And the talk she gave—about how teachers in her school aren’t always trusting of the technology they provide to students—was directly relevant to what we’re seeing in the world of work. Give people autonomy, agency and trust.

There was a lot of trust at border:none. Everyone who bought a ticket did so on trust—they had no idea what to expect. Likewise, Marc and Joschi put their trust in the speakers. They gave the speakers the freedom to talk about whatever they wanted. That trust was repaid.

Florian took some superb pictures of the event. Matthias wrote up his experience. So did Tom. Vasilis shared the gist of his excellent talk.

At the end of the event there was some joking about returning in 2033. I love the idea of a conference that happens once every ten years. Count me in!

Decision time

I’ve always associated good design with thoughtfulness. Like, I should be able to point to any element in an interface and the designer should be able to tell me the reasons it’s there. Those reasons may be rooted in user needs or aesthetics or some other consideration, but the point is that there’s a justification for it. Justify every pixel!

But I’ve come to realise that this is a bit reductionist. Now when I point at an interface element, I still expect the designer to be able to justify its inclusion, but I’d also like to know the trade-offs that were made.

Suppose there’s a large hero image. I’m sure the designer would have no problem justifying its inclusion on the basis of impact and the emotional heft it delivers. But did they also understand the potential downsides? Were they aware of the performance implications of including a large image?

I hope the answer to both questions is yes. They understood the costs, but they decided that, on balance, the positives outweighed the negatives.

When it comes to the positives, universal principles of design often apply. Colour theory, typography, proximity, and so on. But the downsides tend to be specific to the medium that the design is delivered in.

Let’s say you’re designing for print. You want to include an extra typeface just for footnotes. No problem. There isn’t really a downside. In print, you can use all the typefaces you want. But if this were for the web, then the calculation would be different. Every extra typeface comes with a performance penalty. A decision that might be justified in one medium might not work in another medium.

It works both ways; on the web you can use all the colours you want, without incurring any penalties, but in print—depending on the process you’re using—you might have to weigh up that decision very differently.

From this perspective, every design decision is like a balance sheet. A good web designer understands the benefits and the costs behind each decision they make.

It’s a similar story when it comes to web development. Heck, we even have the term “tech debt” to describe decisions that we know aren’t for the best in the long term.

In fact, I’d say that consideration of the long-term effects is something that should play a bigger part in technical decisions.

When we’re weighing up the pros and cons of using a particular tool, we have a tendency to think in the here and now. How might this help me right now? How might this hinder me right now?

But often a decision that delivers short-term gain may well end up delivering long-term pain.

Alexander Petros describes this succinctly:

Reopen a node repository after 3 months and you’ll find that your project is mired in a flurry of security warnings, backwards-incompatible library “upgrades,” and a frontend framework whose cultural peak was the exact moment you started the project and is now widely considered tech debt.

When I wrote about making the Patterns Day website I described my process as doing it “the long hard stupid way”—a term that Frank coined in a talk he gave a few years back. But perhaps my hands-on approach is only long, hard and stupid in the short term. With each passing year, the codebase will retain a degree of readability and accessibility that I would’ve sacrificed had I depended on automated build processes.

Robin Berjon puts this into the historical perspective of Taylorism and Luddism:

Whenever something is automated, you lose some control over it. Sometimes that loss of control improves your life because exerting control is work, and sometimes it worsens your life because it reduces your autonomy.

Or as Marshall McLuhan put it:

Every extension is also an amputation.

…which is fine as long as the benefits of the extension outweigh the costs of the amputation. My worry is that, when it comes to evaluating technology for building on the web, we aren’t considering the longer-term costs.

Maintenance matters. With the passing of time, maintenance matters more and more.

Maybe we avoid thinking about the long-term costs because it would lead to decision paralysis. That’s understandable. But I take comfort from some words of wisdom on the web from the 1990s. Tim Berners-Lee’s style guide for hypertext:

Because hypertext is potentially unconstrained you are a little daunted. Do not be. You can write a document as simply as you like. In many ways, the simpler the better.

Increment by increment

The bedrock of the World Wide Web is solid. Built atop the protocols of the internet (TCP/IP), its fundamental building blocks remain: URLs of HTML files transmitted over HTTP. Baldur Bjarnason writes:

Even today, the web is like a living fossil, a preserved relic from a different era. Anybody can put up a website. Anybody can run a business over it. I can build an app or service, send the URL to anybody I like, and most people in the world will be able to run it without asking anybody’s permission.

Still, the web has evolved. In fact, that evolution is something that’s also built into its fundamental design. Rather than try to optimise the World Wide Web for one particular use-case, Tim Berners-Lee realised the power of being flexible. Like the internet, the World Wide Web is deliberately dumb.

(I get very annoyed when people talk about the web as being designed for scientific work at CERN. That was merely the first use-case. The web was designed for everything …and nothing in particular.)

Robin Berjon compares the web’s evolution to the ship of Theseus:

That’s why it’s been so hard to agree about what the Web is: the Web is architected for resilience which means that it adapts and transforms. That flexibility is the reason why I’m talking about some mythological dude’s boat. Altogether too often, we consider some aspects of the Web as being invariants when they’re potentially just as replaceable as any other part. This isn’t to say that there are no invariants on the Web.

The web can be changed. That’s both a comfort and a warning. There’s plenty that we should change about today’s web. But there’s also plenty—at the root level—that we should fight to preserve.

And if you want change, the worst way to go about it is to promulgate the notion of burning everything down and starting from scratch. As Erin says in the fourth and final part of her devastating series on Meta in Myanmar:

We don’t get a do-over planet. We won’t get a do-over network.

Instead, we have to work with the internet we made and find a way to rebuild and fortify it to support the much larger projects of repair—political, cultural, environmental—that are required for our survival.

Though, as Robin points out, that doesn’t preclude us from sharing a vision:

Proceeding via small, incremental changes can be a laudable approach, but even then it helps to have a sense for what it is that those small steps are supposed to be incrementing towards.

I’m looking forward to reading what Robin puts forward, particularly because he says “I’m no technosolutionist.”

From a technical perspective, the web has never been better. We have incredible features in HTML, CSS, and JavaScript, all standardised and with amazing interoperability between browsers. The challenges that face the web today are not technical.

That’s one of the reasons why I have no patience for the web3 crowd. Apart from the ridiculous name, they’re focusing on exactly the wrong part of the stack.

Listening to their pitch, they’ll point out that while yes, the fundamental bedrock of the web is indeed decentralised—TCP/IP, HTTP(S)—what’s been constructed on that foundation is increasingly centralised; the power brokers of Google, Meta, Amazon.

And what’s the solution they propose? Replace the underlying infrastructure with something-something-blockchain.

Would that it were so simple.

The problems of today’s web are not technical in nature. The problems of today’s web won’t be solved by technology. If we’re going to solve the problems of today’s web, we’ll need to do it through law, culture, societal norms, and co-operation.

(Feel free to substitute “today’s web” with “tomorrow’s climate”.)

Five books

Quite a few people have been linking to this list on The Verge of what they consider the greatest tech books of all time.

To be clear, this is a fairly narrow definition of technology. It’s really a list of books about the history of computing. But there’s some great stuff in there.

I’ve been thinking about the books on computing and technology that I’ve managed to get around to reading, and which ones made an impact on me. Some of these made it onto The Verge’s list too, which is nice to see.

Broad Band by Claire L. Evans

I was blown away by the writing and the stories uncovered in “the untold story of the women who made the internet.” Here’s what I wrote when I read the book:

This book is pretty much the perfect mix. The topic is completely compelling—a history of women in computing. The stories are riveting—even when I thought I knew the history, this showed me how little I knew. And the voice of the book is pure poetry.

It’s not often that I read a book that I recommend wholeheartedly to everyone. I prefer to tailor my recommendations to individual situations. But in the case of Broad Band, I honestly think that anyone would enjoy it.

Uncanny Valley by Anna Wiener

I read this one in 2020, not too long after it came out. In my end of year round-up, I described it like this:

A terrific memoir. It’s open and honest, and just snarky enough when it needs to be.

Close to the Machine by Ellen Ullman

I read this in 2018, many years after it first came out. Here’s how it came across to me:

Lots of ’90s feels in this memoir. A lot of this still resonates today. It’s kind of fascinating to read it now with the knowledge of how this whole internet thing would end up going.

Abolish Silicon Valley by Wendy Liu

This book is mostly excellent. But as I wrote when I got my hands on an advance copy, the juxtaposition of memoir and manifesto didn’t work for me:

Abolish Silicon Valley is 80% memoir and 20% manifesto. I worry that the marketing isn’t making that clear. It would be a shame if this great book didn’t find its audience.

The Victorian Internet by Tom Standage

Okay, this isn’t technically about computing, it’s about the telegraph. But it’s got the word “internet” in the title, and it’s a terrific read. Here’s what I wrote when I put it in Matt’s book-vending machine:

A book about the history of telegraphy might not sound like the most riveting read, but The Victorian Internet is both fascinating and entertaining. Techno-utopianism, moral panic, entirely new ways of working, and a world that has been utterly transformed: the parallels between the telegraph and the internet are laid bare. In fact, this book made me realise that while the internet has been a great accelerator, the telegraph was one of the few instances where a technology could truly be described as “disruptive.”

When Jason linked to the list of books on The Verge he said:

I’m baffled that Tracy Kidder’s amazing The Soul of a New Machine didn’t make the top 5 or even 10.

I’m more surprised that this book is held in such high esteem. It has not aged well. I read it in 2019 and had this to say:

This is a well-regarded book amongst people whose opinion I value. It’s also a Pulitzer prize winner. Strange, then, that I found it so unengaging. The prose is certainly written with gusto, but it all seems so very superficial to me. No matter how you dress it up, it’s a chronicle of a bunch of guys—and oh, boy, are they guys—making a commercial computer. Testosterone and solder—not my cup of tea.

Nailspotting

I’m sure you’ve heard the law of the instrument: when all you have is a hammer, everything looks like a nail.

There’s another side to it. If you’re selling hammers, you’ll depict a world full of nails.

Recent hammers include cryptobollocks and virtual reality. It wasn’t enough for blockchains and the metaverse to be potentially useful for some situations; they staked their reputations on being utterly transformative, disrupting absolutely every facet of life.

This kind of hype is a terrible strategy in the long-term. But if you can convince enough people in the short term, you can make a killing on the stock market. In truth, the technology itself is superfluous. It’s the hype that matters. And if the hype is over-inflated enough, you can even get your critics to do your work for you, broadcasting their fears about these supposedly world-changing technologies.

You’d think we’d learn. If an industry cries wolf enough times, surely we’d become less trusting of extraordinary claims. But the tech industry continues to cry wolf—or rather, “hammer!”—at regular intervals.

The latest hammer is machine learning, usually—incorrectly—referred to as Artificial Intelligence. What makes this hype cycle particularly infuriating is that there are genuine use cases. There are some nails for this hammer. They’re just not as plentiful as the breathless hype—both positive and negative—would have you believe.

When I was hosting the DiBi conference last week, there was a little section on generative “AI” tools. Matt Garbutt covered the visual side, demoing tools like Midjourney. Scott Salisbury covered the text side, showing how you can generate code. Afterwards we had a panel discussion.

During the panel I asked some fairly straightforward questions that nobody could answer. Who owns the input (the data used by these generative tools)? Who owns the output?

On the whole, it stayed quite grounded and mercifully free of hyperbole. Both speakers were treating the current crop of technologies as tools. Everyone agreed we were on the hype cycle, probably the peak of inflated expectations, looking forward to reaching the plateau of productivity.

Scott explicitly warned people off using generative tools for production code. His advice was to stick to side projects for now.

Matt took a closer look at where these tools could fit into your day-to-day design work. Mostly it was pretty sensible, except when he suggested that there could be any merit to using these tools as a replacement for user testing. That’s a terrible idea. A classic hammer/nail mismatch.

I think I moderated the panel reasonably well, but I have one regret. I wish I had first read Baldur Bjarnason’s new book, The Intelligence Illusion. I started reading it on the train journey back from Edinburgh but it would have been perfect for the panel.

The Intelligence Illusion is very level-headed. It is neither pro- nor anti-AI. Instead it takes a pragmatic look at both the benefits and the risks of using these tools in your business.

It has excellent advice for spotting genuine nails. For example:

Generative AI has impressive capabilities for converting and modifying seemingly unstructured data, such as prose, images, and audio. Using these tools for this purpose has less copyright risk, fewer legal risks, and is less error prone than using it to generate original output.

Think about transcripts of videos or podcasts—an excellent use of this technology. As Baldur puts it:

The safest and, probably, the most productive way to use generative AI is to not use it as generative AI. Instead, use it to explain, convert, or modify.

He also says:

Prefer internal tools over externally-facing chatbots.

That chimes with what I’ve been seeing. The most interesting uses of this technology that I’ve seen involve a constrained dataset. Like the way Luke trained a language model on his own content to create a useful chat interface.

Anyway, The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.

Disclosure

You know how when you’re on hold to any customer service line you hear a message that thanks you for calling and claims your call is important to them. The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose that its output is the result of large language models, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our head.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
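
Sketching that content-design idea as code—with the huge caveat that everything here is hypothetical, not least the notion that we’d get any usable confidence score out of these systems in the first place:

```javascript
// Entirely hypothetical: prefix a model's output with a caveat that reflects
// how speculative the generation was. The confidence value and the thresholds
// are invented for illustration—real systems don't expose this neatly.
function hedge(output, confidence) {
  if (confidence > 0.9) return output;
  if (confidence > 0.6) return `Perhaps ${output}`;
  if (confidence > 0.3) return `Maybe ${output}`;
  return `It’s possible but unlikely that ${output}`;
}

console.log(hedge('the meeting was rescheduled to Thursday.', 0.5));
// "Maybe the meeting was rescheduled to Thursday."
```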

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Like

We use metaphors all the time. To quote George Lakoff, we live by them.

We use analogies some of the time. They’re particularly useful when we’re wrapping our heads around something new. By comparing something novel to something familiar, we can make a shortcut to comprehension, or at least, categorisation.

But we need a certain amount of vigilance when it comes to analogies. Just because something is like something else doesn’t mean it’s the same.

With that in mind, here are some ways that people are describing generative machine learning tools. Large language models are like…

Brandolini’s blockchain

I’ve already written about how much I enjoyed hosting Leading Design San Francisco last week.

All the speakers were terrific. Lola’s talk was particularly …um, interesting:

In this talk, Lola will share her adventures in the world of blockchain, the hostility she experienced in her first go-round in 2018, and why she’s chosen to head back to a technology that is going through its largest reputational and social crisis to date.

Wait …I was supposed to stand on stage and introduce a talk that was (at least partly) about blockchain? I have opinions.

As it turned out, Lola warned me that I’d be making an appearance in her talk. She was going to quote that blog post. Before the talk, I asked her how obnoxious I could be about blockchain in her intro. She told me to bring it.

So in the introduction, I deployed all the sarcasm I had in me and said:

Listen, we designers have a tendency to be over-critical of things sometimes. There are all these ideas that we dismiss: phrenology, homeopathy, flat-earthism …blockchain. Haters gonna hate.

I remember somebody asking online a while back, “Why the hate for web3?” And someone I know responded by saying “We hate it because we understand it.” I think there’s a lot of truth to that.

But look, just because blockchains are powering crypto ponzi schemes and N F fucking Ts, it’s worth remembering that it’s also simply a technology. It’s a technological solution in search of a problem.

To be fair, it’s still early days. After all, it’s only been over a decade now.

It’s like the law of the instrument says; when all you have is a hammer, everything looks like a nail. Blockchain is like that. Except the hammer is also made of glass.

Anyway, Lola is going to defend the indefensible and talk about blockchain. One thing to keep in mind is this: remember when everyone was talking about “The Cloud”? And then it turned out that you could substitute the phrase “someone else’s server” for “The Cloud?” Well, every time you hear Lola say the word “blockchain”, I’d like you to mentally substitute the phrase “multiple copies of a spreadsheet.”

Please give an open mind and a warm welcome to Lola Oyelayo Pearson!

I got some laughs. I also got lots of gasps and pearl-clutching, as though I were saying something taboo. Welcome to San Francisco.

Lola gave as good as she got. I got a roasting in her talk.

And just to clarify, Lola and I are friends—this was a consensual smackdown.

There was a very serious point to Lola’s talk. Cryptobollocks and other blockchain-powered schemes have historically been very bro-y, and exploitative of non-bro communities. Lola wants to fight that trend.

I get it. But it reminds me a bit of the justifications you hear from people who go to work at Facebook claiming that they can do more good from the inside. Whatever helps you sleep at night.

The crux of Lola’s belief is this: blockchain technology is inevitable, therefore it is incumbent on us as ethical designers to ensure that the technology is deployed in a way that empowers people instead of exploiting them.

But I take issue with the premise. Blockchain technology is not inevitable. That’s the worst kind of technological determinism. It’s defeatist. It’s a depressing view of “progress” driven not by people, but by technological forces beyond our control.

I refuse to accept that anti-humanist deterministic view.

In any case, for technological determinism to have any validity, there needs to be something to it. At least virtual reality and machine learning are based on some actual technologies. In the case of cryptobollocks, there is no there there. There is nothing except the hype, which is why you’ll see blockchain enthusiasts trying to ride the coattails of trending technologies in a logical fallacy that goes something like this:

  1. There are technologies that will be really big in the future,
  2. blockchain is a technology, therefore
  3. blockchain will be really big in the future.

Blockchain is bullshit. It isn’t even very clever bullshit. And it certainly isn’t inevitable.

You can call me AI

I’ve mentioned before that I’m not a fan of initialisms and acronyms. They can be exclusionary.

It bothers me doubly when everyone is talking about AI.

First of all, the term is so vague as to be meaningless. Sometimes—though rarely—AI refers to general artificial intelligence. Sometimes AI refers to machine learning. Sometimes AI refers to large language models. Sometimes AI refers to a series of if/else statements. That’s quite a spectrum of meaning.

Secondly, there’s the assumption that everyone understands the abbreviation. I guess that’s generally a safe assumption, but sometimes AI could refer to something other than artificial intelligence.

In countries with plenty of pastoral agriculture, if someone works in AI, it usually means they’re going from farm to farm either extracting or injecting animal semen. AI stands for artificial insemination.

I think that abbreviation might work better for the kind of things currently described as using AI.

We were discussing this hot topic at work recently. Is AI coming for our jobs? The consensus was maybe, but only the parts of our jobs that we’re more than happy to have automated. Like summarising some findings. Or perhaps as a kind of lorem ipsum generator. Or for just getting the ball rolling with a design direction. As Terence puts it:

Midjourney is great for a first draft. If, like me, you struggle to give shape to your ideas then it is nothing short of magic. It gets you through the first 90% of the hard work. It’s then up to you to refine things.

That’s pretty much the conclusion we came to in our discussion at Clearleft. There’s no way that we’d use this technology to generate outputs for clients, but we certainly might use it to generate inputs. It’s like how we’d do a quick round of sketching to get a bunch of different ideas out into the open. Terence is spot on when he says:

Midjourney lets me quickly be wrong in an interesting direction.

To put it another way, using a large language model could be a way of artificially injecting some seeds of ideas. Artificial insemination.

So now when I hear people talk about using AI to create images or articles, I don’t get frustrated. Instead I think, “Using artificial insemination to create images or articles? Yes, that sounds about right.”

In between

I was chatting with my new colleague Alex yesterday about a link she had shared in Slack. It was the Nielsen Norman Group’s annual State of Mobile User Experience report.

There’s nothing too surprising in there, other than the mention of Apple’s app clips and Google’s instant apps.

Remember those?

Me neither.

Perhaps I lead a sheltered existence, but as an iPhone user, I don’t think I’ve come across a single app clip in the wild.

I remember when they were announced. I was quite worried about them.

See, the one thing that the web can (theoretically) offer that native can’t is instant access to a resource. Go to this URL—that’s it. Whereas for a native app, the flow is: go to this app store, find the app, download the app.

(I say that the benefit is theoretical because the website found at the URL should download quickly—the reality is that the bloat of “modern” web development imperils that advantage.)

App clips—and instant apps—looked like a way to route around the convoluted install process of native apps. That’s why I was nervous when they were announced. They sounded like a threat to the web.

In reality, the potential was never fulfilled (if my own experience is anything to go by). I wonder why people didn’t jump on app clips and instant apps?

Perhaps it’s because what they promise isn’t desirable from a business perspective: “here’s a way for users to accomplish their tasks without downloading your app.” Even though app clips can in theory be a stepping stone to installing the full app, from a user’s perspective, their appeal is the exact opposite.

Or maybe they’re just too confusing to understand. I think there’s another technology that suffers from the same problem: progressive web apps.

Hear me out. Progressive web apps are—if done well—absolutely amazing. You get all of the benefits of native apps in terms of UX—they even work offline!—but you retain the web’s frictionless access model: go to a URL; that’s it.
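
That offline part isn’t magic, by the way. A minimal sketch of the service worker side of things—the file names and cached URLs are just placeholders—looks something like this:

```javascript
// In the page: register a service worker (the file name is a placeholder).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}

// In serviceworker.js: cache a few core URLs at install time, and fall back
// to the cache (or an offline page) whenever the network lets you down.
const cacheName = 'pages-v1';

addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(cacheName).then((cache) => cache.addAll(['/', '/offline']))
  );
});

addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request).catch(() =>
      caches.match(event.request).then((cached) => cached || caches.match('/offline'))
    )
  );
});
```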

So what are they? Are they websites? Yes, sorta. Are they apps? Yes, sorta.

That’s confusing, right? I can see how app clips and instant apps sound equally confusing: “you can use them straight away, like going to a web page, but they’re not web pages; they’re little bits of apps.”

I’m mostly glad that app clips never took off. But I’m sad that progressive web apps haven’t taken off more. I suspect that their fates are intertwined. Neither suffers from technical limitations. The problem they both face is inertia:

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

True of progressive web apps. Equally true of app clips.

But when I was chatting to Alex, she made me look at app clips in a different way. She described a situation where somebody might need to interact with some kind of NFC beacon from their phone. Web NFC isn’t supported in many browsers yet, so you can’t rely on that. But you don’t want to make people download a native app just to have a quick interaction. In theory, an app clip—or instant app—could do the job.

In that situation, app clips aren’t a danger to the web—they’re polyfills for hardware APIs that the web doesn’t yet support!
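
To make that concrete, here’s a rough sketch of what the feature detection might look like. Web NFC’s NDEFReader only exists in some Chromium-based browsers on Android, and the fallback function here is entirely made up:

```javascript
// Sketch: use Web NFC where it exists, fall back everywhere else.
async function readBeacon() {
  if ('NDEFReader' in window) {
    const reader = new NDEFReader();
    await reader.scan(); // prompts for permission, then listens for tags
    reader.addEventListener('reading', ({ message }) => {
      for (const record of message.records) {
        console.log('NFC record type:', record.recordType);
      }
    });
  } else {
    // No Web NFC in this browser: point people at a QR code instead, or—
    // hypothetically—hand off to an app clip or instant app that can do NFC.
    showFallbackInstructions(); // made-up function, for illustration only
  }
}
```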

I love having my perspective shifted like that.

The specific situations that Alex and I were discussing were in the context of museums. Museums offer such interesting opportunities for the physical and the digital to intersect.

Remember the pen from Cooper Hewitt? Aaron spoke about it at dConstruct 2014—a terrific presentation that’s well worth revisiting and absorbing.

The other dConstruct talk that’s very relevant to this liminal space between the web and native apps is the 2012 talk from Scott Jenson. I always thought the physical web initiative had a lot of promise, but it may have been ahead of its time.

I loved the thinking behind the physical web beacons. They were deliberately dumb, much like the internet itself. All they did was broadcast a URL. That’s it. All the smarts were to be found at the URL itself. That meant a service could get smarter over time. It’s a lot easier to update a website than swap out a piece of hardware.

But any kind of technology that uses Bluetooth, NFC, or other wireless technology has to get over the discovery problem. They’re invisible technologies, so by default, people don’t know they’re even there. But if you make them too discoverable—intrusively announcing themselves like one of the commercials in Minority Report—then they’re indistinguishable from spam. There’s a sweet spot of discoverability right in the middle that’s hard to get right.

Over the past couple of years—accelerated by the physical distancing necessitated by The Situation—QR codes stepped up to the plate.

They still suffer from some discoverability issues. They’re not human-readable, so you can’t be entirely sure that the URL you’re going to go to isn’t going to be a Rick Astley video. But they are visible, which gives them an advantage over hidden wireless technologies.

They’re cheaper too. Printing a QR code sticker costs less than getting a plastic beacon shipped from China.

QR codes turned out to be just good enough to bridge the gap between the physical and digital for those one-off interactions like dining outdoors during a pandemic:

I can see why they chose the web over a native app. Online ordering is the only way to place your order at this place. Telling people “You have to go to this website” …that seems reasonable. But telling people “You have to download this app” …that’s too much friction.

Ironically, the nail in the coffin for app clips and instant apps might’ve been hammered in by Apple and Google when they built QR-code recognition into their camera software.

One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.

Mars distracts

A few years ago, I wrote about how much I enjoyed the book Aurora by Kim Stanley Robinson.

Not everyone liked that book. A lot of people were put off by its structure, in which the dream of interstellar colonisation meets the harsh truth of reality and the book follows where that leads. It pours cold water over the very idea of humanity becoming interplanetary.

But our own solar system is doable, right? I mean, Kim Stanley Robinson is the guy who wrote the Mars trilogy and 2312, both of which depict solar system colonisation in just a few centuries.

I wonder if the author might regret the way that some have taken his Mars trilogy as a sort of manual, Torment Nexus style. Kim Stanley Robinson is very much concerned with this planet in this time period, but others use his work to do the opposite.

But the backlash to Mars has begun.

Maciej wrote Why Not Mars:

The goal of this essay is to persuade you that we shouldn’t send human beings to Mars, at least not anytime soon. Landing on Mars with existing technology would be a destructive, wasteful stunt whose only legacy would be to ruin the greatest natural history experiment in the Solar System. It would no more open a new era of spaceflight than a Phoenician sailor crossing the Atlantic in 500 B.C. would have opened up the New World. And it wouldn’t even be that much fun.

Manu Saadia is writing a book about humanity in space, and he has a corresponding newsletter called Against Mars: Space Colonization and its Discontents:

What if space colonization was merely science-fiction, a narrative, or rather a meta-narrative, a myth, an ideology like any other? And therefore, how and why did it catch on? What is so special and so urgent about space colonization that countless scientists, engineers, government officials, billionaire oligarchs and indeed, entire nations, have committed work, ingenuity and treasure to make it a reality.

What if, and hear me out, space colonization was all bullshit?

I mean that quite literally. No hyperbole. Once you peer under the hood, or the nose, of the rocket ship, you encounter a seemingly inexhaustible supply of ghoulish garbage.

Two years ago, Shannon Stirone went into the details of why Mars Is a Hellhole:

The central thing about Mars is that it is not Earth, not even close. In fact, the only things our planet and Mars really have in common is that both are rocky planets with some water ice and both have robots (and Mars doesn’t even have that many).

Perhaps the most damning indictment of the case for Mars colonisation is that its most ardent advocate turns out to be an idiotic small-minded eugenicist who can’t even run a social media company, much less a crewed expedition to another planet.

But let’s be clear: we’re talking here about the proposition of sending humans to Mars—ugly bags of mostly water that probably wouldn’t survive. Robots and other uncrewed missions in our solar system …more of that, please!

JavaScript

A recurring theme in my writing and talks is “lay off the JavaScript, people!” But I have to make a conscious effort to specify that I mean client-side JavaScript.

I thought it would be obvious from the context that I was talking about the copious amounts of JavaScript being shipped to end users to download, parse, and execute. But nothing’s ever really obvious. If I don’t explicitly say JavaScript in the browser, then someone inevitably thinks I’m having a go at JavaScript, the language.

I have absolutely nothing against JavaScript the language. Just like I have nothing against Python or Ruby or any other language that you might write with on your machine or your web server. But as soon as you deliver bytes over the wire, I start having opinions. It just so happens that JavaScript is the universal language for client-side coding so that’s why I call for restraint with JavaScript specifically.

There was a time when JavaScript only existed in web browsers. That changed with Node. Now it’s possible to write code for your web server and code for web browsers using the same language. Very handy!

But just because it’s the same language doesn’t mean you should treat it the same in both circumstances. As Remy puts it:

There are two JavaScripts.

One for the server - where you can go wild.

One for the client - that should be thoughtful and careful.

I was reading something recently that referred to Eleventy as a JavaScript library. It really brought me up short. I mean, on the one hand, yes, it’s a library of code and it’s written in JavaScript. It is absolutely technically correct to call it a JavaScript library.

But in my mind, a JavaScript library is something you ship to web browsers—jQuery, React, Vue, and so on. Whereas Eleventy executes its code in order to generate HTML and that’s what gets sent to end users. Conceptually, it’s like the opposite of a JavaScript library. Eleventy does its work before any user requests a URL—JavaScript libraries do their work after a user requests a URL.
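
A crude sketch of the difference—same language, opposite ends of the wire. These are two separate snippets running in two separate places, and the file names and data are invented:

```javascript
// Build time (Node, running on your machine or server, before anyone visits):
// generate the HTML and write it to disk. None of this code reaches a browser.
const fs = require('fs');

const posts = [{ title: 'A memex in every web browser', url: '/journal/memex/' }];
const list = `<ul>${posts.map((p) => `<li><a href="${p.url}">${p.title}</a></li>`).join('')}</ul>`;
fs.writeFileSync('index.html', `<!DOCTYPE html><title>Journal</title>${list}`);

// Run time (the browser, on every visit): ship the code to the user,
// fetch the data, and build the same markup on their device.
fetch('/posts.json')
  .then((response) => response.json())
  .then((posts) => {
    document.querySelector('ul').innerHTML = posts
      .map((p) => `<li><a href="${p.url}">${p.title}</a></li>`)
      .join('');
  });
```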

To me it seems obvious that there should be an entirely different mindset for writing code intended for a web browser. But nothing’s ever really obvious.

I remember when Node was getting really popular and npm came along as a way to manage all the bundles of code that people were assembling in their Node programmes. Makes total sense. But then I thought I heard about people using npm to do the same thing for client-side code. “That can’t be right!” I thought. I must’ve misunderstood. So I talked to someone from npm and explained how I must be misunderstanding something.

But it turned out that people really were treating client-side JavaScript no different than server-side JavaScript. People really were pulling in megabytes of other people’s code to ship to end users so that they could, I dunno, left pad numbers or something.

Listen, I don’t care what you get up to in the privacy of your own codebase. But don’t poison the well of the web with profligate client-side JavaScript.

Directory enquiries

I was talking to someone recently about a forgotten battle in the history of the early web. It was a battle between search engines and directories.

These days, when the history of the web is told, a whole bunch of services get lumped into the category of “competitors who lost to Google search”: Altavista, Lycos, Ask Jeeves, Yahoo.

But Yahoo wasn’t a search engine, at least not in the same way that Google was. Yahoo was a directory with a search interface on top. You could find what you were looking for by typing or you could zero in on what you were looking for by drilling down through a directory structure.

Yahoo wasn’t the only directory. DMOZ was an open-source competitor. You can still experience it at DMOZlive.com:

The official DMOZ.com site was closed by AOL on February 17th 2017. DMOZ Live is committed to continuing to make the DMOZ Internet Directory available on the Internet.

Search engines put their money on computation, or to use today’s parlance, algorithms (or if you’re really shameless, AI). Directories put their money on humans. Good ol’ information architecture.

It turned out that computation scaled faster than humans. Search won out over directories.

Now an entire generation has been raised in the aftermath of this battle. Monica Chin wrote about how this generation views the world of information:

Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.

Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.

Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.

Dr. Saavik Ford confirms:

We are finding a persistent issue with getting (undergrad, new to research) students to understand that a file/directory structure exists, and how it works. After a debrief meeting today we realized it’s at least partly generational.

We live in a world ordered only by search:

While some are quite adept at using labels, tags, and folders to manage their emails, others will claim that there’s no need to do so because you can easily search for whatever you happen to need. Save it all and search for what you want to find. This is, roughly speaking, the hot mess approach to information management. And it appears to arise both because search makes it a good-enough approach to take and because the scale of information we’re trying to manage makes it feel impossible to do otherwise. Who’s got the time or patience?

There are still hold-outs. You can prise files from Scott Jenson’s cold dead hands.

More recently, Linus Lee points out what we’ve lost by giving up on directory structures:

Humans are much better at choosing between a few options than conjuring an answer from scratch. We’re also much better at incrementally approaching the right answer by pointing towards the right direction than nailing the right search term from the beginning. When it’s possible to take a “type in a query” kind of interface and make it more incrementally explorable, I think it’s almost always going to produce a more intuitive and powerful interface.

Directory structures still make sense to me (because I’m old) but I don’t have a problem with search. I do have a problem with systems that try to force me to search when I want to drill down into folders.

I have no idea what Google Drive and Dropbox are doing but I don’t like it. They make me feel like the opposite of a power user. Trying to find a file using their interfaces makes me feel like I’m trying to get a printer to work. Randomly press things until something happens.

Anyway. Enough fist-shaking from me. I’m going to ponder Linus’s closing words. Maybe defaulting to a search interface is a cop-out:

Text search boxes are easy to design and easy to add to apps. But I think their ease on developers may be leading us to ignore potential interface ideas that could let us discover better ideas, faster.

On reading

On Tyranny by Timothy Snyder is a very short book. Most of the time, this is a feature, not a bug.

There are plenty of non-fiction books I’ve read that definitely could’ve been much, much shorter. Books that have a good sensible idea, but one that could’ve been written on the back of a napkin instead of being expanded into an arbitrarily long form.

In the world of fiction, there’s the short story. I guess the equivalent in the non-fiction world is the essay. But On Tyranny isn’t an essay. It’s got chapters. They’re just really, really short.

Sometimes that brevity means that nuance goes out the window. What might’ve been a subtle argument that required paragraphs of pros and cons in another book gets reduced to a single sentence here. Mostly that’s okay.

The premise of the book is that Trump’s America is comparable to Europe in the 1930s:

We are no wiser than the Europeans who saw democracy yield to fascism, Nazism, or communism. Our one advantage is that we might learn from their experience.

But in making the comparison, Snyder goes all in. There’s very little accounting for the differences between the world of the early 20th century and the world of the early 21st century.

This becomes really apparent when it comes to technology. One piece of advice offered is:

Make an effort to separate yourself from the internet. Read books.

Wait. He’s not actually saying that words on screens are in some way inherently worse than words on paper, is he? Surely that’s just the nuance getting lost in the brevity, right?

Alas, no:

Staring at screens is perhaps unavoidable but the two-dimensional world makes little sense unless we can draw upon a mental armory that we have developed somewhere else. … So get screens out of your room and surround yourself with books.

I mean, I’m all for reading books. But books are about what’s in them, not what they’re made of. To value words on a page more than the same words on a screen is like judging a book by its cover; it’s judging a book by its atoms.

For a book that’s about defending liberty and progress, On Tyranny is puzzlingly conservative at times.

Re-evaluating technology

There’s a lot of emphasis put on decision-making: making sure you’re making the right decision; evaluating all the right factors before making a decision. But we rarely talk about revisiting decisions.

I think perhaps there’s a human tendency to treat past decisions as fixed. That’s certainly true when it comes to evaluating technology.

I’ve been guilty of this. I remember once chatting with Mark about something written in PHP—probably something I had written—and I made some remark to the effect of “I know PHP isn’t a great language…” Mark rightly called me on that. The language wasn’t great in the past but it has come on in leaps and bounds. My perception of the language, however, had not updated accordingly.

I try to keep that lesson in mind whenever I’m thinking about languages, tools and frameworks that I’ve investigated in the past but haven’t revisited in a while.

Andy talks about this as the tech tool carousel:

The carousel is like one of those on a game show that shows the prizes that can be won. The tool will sit on there until I think it’s gone through enough maturing to actually be a viable tool for me, the team I’m working with and the clients I’m working for.

Crucially a carousel is circular: tools and technologies come back around for re-evaluation. It’s all too easy to treat technologies as being on a one-way conveyor belt—once they’ve passed in front of your eyes and you’ve weighed them up, that’s it; you never return to re-evaluate your decision.

This doesn’t need to be a never-ending process. At some point it becomes clear that some technologies really aren’t worth returning to:

It’s a really useful strategy because some tools stay on the carousel and then I take them off because they did in fact, turn out to be useless after all.

See, for example, anything related to cryptobollocks. It’s been well over a decade and blockchains remain a solution in search of problems. As Molly White put it, it’s not still the early days:

How long can it possibly be “early days”? How long do we need to wait before someone comes up with an actual application of blockchain technologies that isn’t a transparent attempt to retroactively justify a technology that is inefficient in every sense of the word? How much pollution must we justify pumping into our atmosphere while we wait to get out of the “early days” of proof-of-work blockchains?

Back to the web (the actual un-numbered World Wide Web)…

Nolan Lawson wrote an insightful article recently about how he senses that the balance has shifted away from single page apps. I’ve been sensing the same shift in the zeitgeist. That said, both Nolan and I keep an eye on how browsers are evolving and getting better all the time. If you weren’t aware of changes over the past few years, it would be easy to still think that single page apps offer some unique advantages that in fact no longer hold true. As Nolan wrote in a follow-up post:

My main point was: if the only reason you’re using an SPA is because “it makes navigations faster,” then maybe it’s time to re-evaluate that.

For another example, see this recent XKCD cartoon:

“You look around one day and realize the things you assumed were immutable constants of the universe have changed. The foundations of our reality are shifting beneath our feet. We live in a house built on sand.”

The day I discovered that Apple Maps is kind of good now

Perhaps the best example of a technology that warrants regular re-evaluation is the World Wide Web itself. Over the course of its existence it has been seemingly bettered by other more proprietary technologies.

Flash was better than the web. It had vector graphics, smooth animations, and streaming video when the web had nothing like it. But over time, the web caught up. Flash was the hare. The World Wide Web was the tortoise.

In more recent memory, the role of the hare has been played by native apps.

I remember talking to someone on the Twitter design team who was designing and building for multiple platforms. They were frustrated by the web. It just didn’t feel as fully-featured as iOS or Android. Their frustration was entirely justified …at the time. I wonder if they’ve revisited their judgement since then though.

In recent years in particular it feels like the web has come on in leaps and bounds: service workers, native JavaScript APIs, and an astonishing boost in what you can do with CSS. Most important of all, the interoperability between browsers is getting better and better. Universal support for new web standards arrives at a faster rate than ever before.

But developers remain suspicious, still preferring to trust third-party libraries over native browser features. They made a decision about those libraries in the past. They evaluated the state of browser support in the past. I wish they would re-evaluate those decisions.
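
To pick one small, hypothetical example of the kind of re-evaluation I mean: a sticky page header used to justify pulling in a scroll-watching JavaScript library, but these days a couple of native CSS declarations will do the job. (The selector name here is made up for illustration.)

.site-header {
  /* Stays pinned to the top once the page scrolls past it,
     with no scroll-event listeners or third-party code. */
  position: sticky;
  top: 0;
}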

Alas, inertia is a very powerful force. Sticking with a past decision—even if it’s no longer the best choice—is easier than putting in the effort to re-evaluate everything again.

What’s the phrase? “Strong opinions, weakly held.” We’re very good at the first part and pretty bad at the second.

Just the other day I was chatting with one of my colleagues about an online service that’s available on the web and also as a native app. He was showing me the native app on his phone and said it’s not a great app.

“Why don’t you add the website to your phone?” I asked.

“You know,” he said. “The website’s going to be slow.”

He hadn’t tested this. But years of dealing with crappy websites on his phone in the past had trained him to think of the web as being inherently worse than native apps (even though there was nothing this particular service was doing that required any native functionality).

It has become a truism now. Native apps are better than the web.

And you know what? Once upon a time, that would’ve been true. But it hasn’t been true for quite some time …at least from a technical perspective.

But even if the technologies in browsers have reached parity with native apps, that won’t matter unless we can convince people to revisit their previously-formed beliefs.

The technologies are the easy bit. Getting people to re-evaluate their opinions about technologies? That’s the hard part.

TEDxBrighton 2022

I went to TEDxBrighton on Friday. I didn’t actually realise it was happening until just a couple of days beforehand, but once I knew, I figured I should take advantage of it being right here in my own town.

All in all, it was a terrific day. The MCing by Adam Pearson was great—just the right mix of enthusiasm and tongue-in-cheek humour. The curation of the line-up worked well too. The day was broken up into four loosely-themed sections. As I’m currently in the process of curating an event myself, I can appreciate how challenging it is.

Each section opened with a musical act. Again, having been involved behind the scenes with many events myself, I was impressed by the audaciousness, just from a logistical perspective. It all went relatively smoothly.

The talks at a TED or TEDx event can be a mixed bag. You can have a scientist on stage distilling years of research into a succinct message followed by someone talking nonsense about some pseudo-psychological self-help scheme. But at TEDxBrighton, we lucked out.

A highlight for me was Dr James Mannion talking about implementation science—something that felt directly applicable to design work. Victoria Jenkins was also terrific, and again, her points about inclusive design felt very relevant. And of course I really enjoyed the space-based talks by Melissa Thorpe and Bianca Cefalo. Now that I think about it, just about everyone was great: Katie Vincent, Lewis Wedlock, Dina Nayeri—they all wowed me.

With one exception. There was a talk that was supposed to be about the future of democracy. In reality it quickly veered into DAOs before descending into a pitch for crypto and NFTs. The call to action was literally for everyone in the audience to go out and get a crypto wallet and buy an NFT …using ethereum no less! We were exhorted to use an unbelievably wasteful and energy-intensive proof-of-work technology to get our hands on a receipt for a JPG …from the same stage that would later highlight the work of climate activists like Tommie Eaton. It was really quite disgusting. The fear-based message of the talk was literally about getting in on the scheme before it’s too late. At one point we were told to “do the research.” I’m surprised we weren’t all told that we’re “not going to make it.”

A disgraceful shill for a ponzi scheme would’ve ruined any other event. Fortunately the line-up at TEDxBrighton was so strong that one scam artist couldn’t torpedo the day. Just like crypto itself—and associated bollocks like NFTs and web3—it was infuriating to have to sit through it in the short term, but then it just faded away into insignificance. One desperate peddler of snake oil couldn’t make a dent in an otherwise great day.

Declarative design

I feel like in the past few years there’s been a number of web design approaches that share a similar mindset. Intrinsic web design by Jen; Every Layout by Andy and Heydon; Utopia by Trys and James.

To some extent, their strengths lie in technological advances in CSS: flexbox, grid, calc, and so on. But more importantly, they share an approach. They all focus on creating the right inputs rather than trying to control every possible output. Leave the final calculations for those outputs to the browser—that’s what computers are good at.

As Andy puts it:

Be the browser’s mentor, not its micromanager.
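
Here’s a minimal sketch of what that mentoring might look like, assuming an illustrative class name and sizes of my own choosing: rather than dictating a column count at each breakpoint, you describe the smallest acceptable column and let the browser work out how many fit.

.grid {
  display: grid;
  /* Input: each column should be at least 15em wide.
     Output: the browser decides how many columns fit at any width. */
  grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));
  gap: 1em;
}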

Reflecting on Utopia’s approach, Jim Nielsen wrote:

We say CSS is “declarative”, but the more and more I write breakpoints to accommodate all the different ways a design can change across the viewport spectrum, the more I feel like I’m writing imperative code. At what quantity does a set of declarative rules begin to look like imperative instructions?

In contrast, one of the principles of Utopia is to be declarative and “describe what is to be done rather than command how to do it”. This approach declares a set of rules such that you could pick any viewport width and, using a formula, derive what the type size and spacing would be at that size.

Declarative! Maybe that’s the word I’ve been looking for to describe the commonalities between Utopia, Every Layout, and intrinsic web design.
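
As a rough sketch of that declarative idea (the numbers here are invented for illustration, not Utopia’s own output), it’s a single rule that holds at every viewport width, rather than a stack of breakpoint overrides:

:root {
  /* Fluid type: roughly 1rem at a 320px-wide viewport, 1.5rem at 1120px,
     interpolated smoothly at every width in between. */
  --step-0: clamp(1rem, 0.8rem + 1vw, 1.5rem);
}

body {
  font-size: var(--step-0);
}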

So if declarative design is a thing, does that mean imperative design is also a thing? And what might the tools and technologies for imperative design look like?

I think that Tailwind might be a good example of an imperative design tool. It’s only about the specific outputs. Systematic thinking is actively discouraged; instead you say exactly what you want the final pixels on the screen to be.
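
The resulting CSS reflects that mindset: each utility class maps to one fixed declaration, with the responsive variations spelled out breakpoint by breakpoint. (This is a hand-written approximation for illustration, not Tailwind’s actual output or scale.)

.text-lg { font-size: 1.125rem; }

@media (min-width: 768px) {
  .md\:text-xl { font-size: 1.25rem; }
}

@media (min-width: 1024px) {
  .lg\:text-2xl { font-size: 1.5rem; }
}

Each of those classes then gets applied directly in the markup, which is where the “say exactly what you want the final pixels to be” feeling comes from.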

I’m not saying that declarative tools—like Utopia—are right and that imperative tools—like Tailwind—are wrong. As always, it depends. In this case, it depends on the mindset you have.

If you agree with this statement, you should probably use an imperative design tool:

CSS is broken and I want my tools to work around the way CSS has been designed.

But if you agree with this statement, you should probably use a declarative design tool:

CSS is awesome and I want my tools to amplify the way that CSS has been designed.

If you agree with the first statement but you then try using a declarative tool like Utopia or Every Layout, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

Likewise if you agree with the second statement but you then try using an imperative tool like Tailwind, you will probably have a bad time. You’ll probably hate it. You may declare the tool to be “bad”.

It all depends on whether the philosophy behind the tool matches your own philosophy. If those philosophies match up, then using the tool will be productive and that tool will act as an amplifier—a bicycle for the mind. But if the philosophy of the tool doesn’t match your own philosophy, then you will be fighting the tool at every step—it will slow you down.

Knowing that this spectrum exists between declarative tools and imperative tools can help you when you’re evaluating technology. You can assess whether a web design tool is being marketed on the premise that CSS is broken or on the premise that CSS is awesome.

I wonder whether your path into web design and development might also factor into which end of the spectrum you’d identify with. Like, if your background is in declarative languages like HTML and CSS, maybe intrinsic web design really resonates. But if your background is in imperative languages like JavaScript, perhaps Tailwind makes more sense to you.

Again, there’s no right or wrong here. This is about matching the right tool to the right mindset.

Personally, the declarative design approach fits me like a glove. It feels like it’s in the tradition of John’s A Dao Of Web Design or Ethan’s Responsive Web Design—ways of working with the grain of the web.