Journal tags: learning

55

Teaching and learning

Looking back on ten years of codebar Brighton, I’m remembering how much I got out of being a coach.

Something that I realised very quickly is that there is no one-size-fits-all approach to coaching. Every student is different so every session should adapt to that.

Broadly speaking I saw two kinds of students: those that wanted to get results on screen as soon as possible without worrying about the specifics, and those who wanted to know why something was happening and how it worked. In the first instance, you get to a result as quickly as possible and then try to work backwards to figure out what’s going on. In the second instance, you build up the groundwork of knowledge and then apply it to get results.

Both are equally valid approaches. The only “wrong” approach as a coach is to try to apply one method to someone who’d rather learn the other way.

Personally, I always enjoyed the groundwork-laying of the second approach. But it comes with challenges. Because the results aren’t yet visible, you have to do extra work to convey why the theory matters. As a coach, you need to express infectious enthusiasm.

Think about the best teachers you had in school. I’m betting they displayed infectious enthusiasm for the subject matter.

The other evergreen piece of advice is to show, don’t tell. Or at the very least, intersperse your telling with plenty of showing.

Bret Viktor demonstrates this when he demonstrates scientific communication as sequential art:

This page presents a scientific paper that has been redesigned as a sequence of illustrations with captions. This comic-like format, with tightly-coupled pictures and prose, allows the author to depict and describe simultaneously — show and tell.

It works remarkably well. I remember how well it worked when Google first launched their Chrome web browser. They released a 40 page comic book illustrated by Scott McCloud. There is no way I would’ve read a document that long about how browser engines work, but I read that comic cover to cover.

This visual introduction to machine learning is another great example of simultaneous showing and telling.

So showing augments telling. But interactivity can augment showing.

Here are some great examples of interactive explainers:

Lea describes what can happen when too much theory comes before practice:

Observing my daughter’s second ever piano lesson today made me realize how this principle extends to education and most other kinds of knowledge transfer (writing, presentations, etc.). Her (generally wonderful) teacher spent 40 minutes teaching her notation, longer and shorter notes, practicing drawing clefs, etc. Despite his playful demeanor and her general interest in the subject, she was clearly distracted by the end of it.

It’s easy to dismiss this as a 5 year old’s short attention span, but I could tell what was going on: she did not understand why these were useful, nor how they connect to her end goal, which is to play music.

The codebar website has some excellent advice for coaches, like:

  • Do not take over the keyboard! This can be off-putting and scary.
  • Encourage the students to type and not copy paste.
  • Explain that there are no bad questions.
  • Explain to students that it’s OK to make mistakes.
  • Assume that anyone you’re teaching has no knowledge but infinite intelligence.

Notice how so much of the advice focuses on getting the students to do things, rather than have them passively sit and absorb what the coach has to say.

Lea also gives some great advice:

  1. Always explain why something is useful. Yes, even when it’s obvious to you.
  2. Minimize the amount of knowledge you convey before the next opportunity to practice it. For non-interactive forms of knowledge transfer (e.g. a book), this may mean showing an example, whereas for interactive ones it could mean giving the student a small exercise or task.
  3. Prefer explaining in context rather than explaining upfront.

It’s interesting that Lea highlights the advantage of interactive media like websites over inert media like books. The canonical fictional example of an interactive explainer is the Young Lady’s Illustrated Primer in Neal Stephenson’s novel The Diamond Age. Andy Matuschak describes its appeal:

When it wants to introduce a conceptual topic, it begins with concrete hands-on projects: Turing machines, microeconomics, and mitosis are presented through binary-coding iron chains, the cipher’s market, and Nell’s carrot garden. Then the Primer introduces extra explanation just-in-time, as necessary.

That’s not how learning usually works in these domains. Abstract topics often demand that we start with some necessary theoretical background; only then can we deeply engage with examples and applications. With the Primer, though, Nell consistently begins each concept by exploring concrete instances with real meaning to her. Then, once she’s built a personal connection and some intuition, she moves into abstraction, developing a fuller theoretical grasp through the Primer’s embedded books.

(Andy goes on to warn of the dangers of copying the Primer too closely. Its tricks verge on gamification, and its ultimate purpose isn’t purely to educate. There’s a cautionary tale there about the power dynamics in any teacher/student relationship.)

There’s kind of a priority of constituencies when it comes to teaching:

Consider interactivity over showing over telling.

Thinking back on all the talks I’ve given, I start to wonder if I’ve been doing too much telling and showing, but not nearly enough interacting.

Then again, I think that talks aren’t quite the same as hands-on workshops. I think of giving a talk as being more like a documentarian. You need to craft a compelling narrative, and illustrate what you’re saying as much as possible, but it’s not necessarily the right arena for interactivity.

That’s partly a matter of scale. It’s hard to be interactive with every person in a large audience. Marcin managed to do it but that’s very much the exception.

Workshops are a different matter though. When I’m recruiting hosts for UX London workshops I always encourage them to be as hands-on as possible. A workshop should not be an extended talk. There should be more exercises than talking. And wherever possible those exercises should be tactile, ideally not sitting in front of a computer.

My own approach to workshops has changed over the years. I used to prepare a book’s worth of material to have on hand, either as one giant slide deck or multiple decks. But I began to realise that the best workshops are the ones where the attendees guide the flow, not me.

So now I show up to a full-day workshop with no slides. But I’m not unprepared. I’ve got decades of experience (and links) to apply during the course of the day. It’s just that instead of trying to anticipate which bits of knowledge I’m going to need to convey, I apply them in a just-in-time manner as and when they’re needed. It’s kind of scary, but as long as there’s a whiteboard to hand, or some other way to illustrate what I’m telling, it works out great.

Codebar Brighton

I went to codebar Brighton yesterday evening. I hadn’t been in quite a while, but this was a special occasion: a celebration of codebar Brighton’s tenth anniversary!

The Brighton chapter of codebar was the second one ever, founded six months after the initial London chapter. There are now 33 chapters all around the world.

Clearleft played host to that first ever codebar in Brighton. We had already been hosting local meetups like Async in our downstairs event space, so we were up for it when Rosa, Dot, and Ryan asked about having codebar happen there.

In fact, the first three Brighton codebars were all at 68 Middle Street. Then other places agreed to play host and it moved to a rota system, with the Clearleft HQ as just one of the many Brighton venues.

With ten years of perspective, it’s quite amazing to see how many people went from learning to code in the evenings, to getting jobs in web development, and becoming codebar coaches themselves. It’s a really wonderful community.

Over the years the baton of organising codebar has been passed on to a succession of fantastic people. These people are my heroes.

It worked out well for Clearleft too. Thanks to codebar, we hired Charlotte. Later we hired Cassie. And it was thanks to codebar that I first met Amber.

Codebar Brighton has been very, very good to me. Here’s to the next ten years!

Wallfacing

The Dark Forest idea comes from the Remembrance of Earth’s Past books by Liu Cixin. It’s an elegant but dispiriting solution to the Fermi paradox. Maggie sums it up:

Dark forest theory suggests that the universe is like a dark forest at night - a place that appears quiet and lifeless because if you make noise, the predators will come eat you.

This theory proposes that all other intelligent civilizations were either killed or learned to shut up. We don’t yet know which category we fall into.

Maggie has described The Expanding Dark Forest and Generative AI:

The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk.

It’s like a dark forest that seems eerily devoid of human life – all the living creatures are hidden beneath the ground or up in trees. If they reveal themselves, they risk being attacked by automated predators.

Those of us in the cozy web try to keep our heads down, attempting to block the bots plundering our work.

I advocate for taking this further. We should fight back. Let’s exploit the security hole of prompt injections. Here are some people taking action:

I’ve taken steps here on my site. I’d like to tell you exactly what I’ve done. But if I do that, I’m also telling the makers of these bots how to circumvent my attempts at prompt injection.

This feels like another concept from Liu Cixin’s books. Wallfacers:

The sophons can overhear any conversation and intercept any written or digital communication but cannot read human thoughts, so the UN devises a countermeasure by initiating the “Wallfacer” Program. Four individuals are granted vast resources and tasked with generating and fulfilling strategies that must never leave their own heads.

So while I’d normally share my code, I feel like in this case I need to exercise some discretion. But let me give you the broad brushstrokes:

  • Every page of my online journal has three pieces of text that attempt prompt injections.
  • Each of these is hidden from view and hidden from screen readers.
  • Each piece of text is constructed on-the-fly on the server and they’re all different every time the page is loaded.

You can view source to see some examples.

I plan to keep updating my pool of potential prompt injections. I’ll add to it whenever I hear of a phrase that might potentially throw a spanner in the works of a scraping bot.

By the way, I should add that I’m doing this as well as using a robots.txt file. So any bot that injests a prompt injection deserves it.

I could not disagree with Manton more when he says:

I get the distrust of AI bots but I think discussions to sabotage crawled data go too far, potentially making a mess of the open web. There has never been a system like AI before, and old assumptions about what is fair use don’t really fit.

Bollocks. This is exactly the kind of techno-determinism that boils my blood:

AI companies are not going to go away, but we need to push them in the right directions.

“It’s inevitable!” they cry as though this was a force of nature, not something created by people.

There is nothing inevitable about any technology. The actions we take today are what determine our future. So let’s take steps now to prevent our web being turned into a dark, dark forest.

Filters

My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and Mozilla Developer network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at some pictures on an AirBnB listing recently and found herself examining some photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.

Every email that lands in my inbox is like a little mini Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether’s it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloging blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.

But we can always walk away.

The machine stops

Large language models have reaped our words and plundered our books. Bryan Vandyke:

Turns out, everything on the internet—every blessed word, no matter how dumb or benighted—has utility as a learning model. Words are the food that large language algorithms feed upon, the scraps they rely on to grow, to learn, to approximate life. The LLNs that came online in recent years were all trained by reading the internet.

We can shut the barn door—now that the horse has pillaged—by updating our robots.txt files or editing .htaccess. That might protect us from the next wave, ’though it can’t undo what’s already been taken without permission. And that’s assuming that these organisations—who have demonstrated a contempt for ethical thinking—will even respect robots.txt requests.

I want to do more. I don’t just want to prevent my words being sucked up. I want to throw a spanner in the works. If my words are going to be snatched away, I want them to be poison pills.

The weakness of large language models is that their data and their logic come from the same source. That’s what makes prompt injection such a thorny problem (and a well-named neologism—the comparison to SQL injection is spot-on).

Smarter people than me are coming up with ways to protect content through sabotage: hidden pixels in images; hidden words on web pages. I’d like to implement this on my own website. If anyone has some suggestions for ways to do this, I’m all ears.

If enough people do this we’ll probably end up in an arms race with the bots. It’ll be like reverse SEO. Instead of trying to trick crawlers into liking us, let’s collectively kill ’em.

Who’s with me?

CSS Day 2024

My stint as one of the hosts of CSS Day went very well indeed. I enjoyed myself and people seemed to like the cut of my jib.

During the event there was a real buzz on Mastodon, which was heartening to see. I was beginning to worry that hashtagging events was going to be collatoral damage from Elongate, but there was plenty of conference-induced FOMO to be experienced on the fediverse.

The event itself was, as always, excellent. Both in terms of content and organisation.

Some themes emerged during CSS Day, which I always love to see. These emergent properties are partly down to curation and partly down to serendipity.

The last few years of CSS Day have felt like getting a firehose of astonishing new features being added to the language. There was still plenty of cutting-edge stuff this year—masonry! anchor positioning!—but there was also a feeling of consolidation, asking how to get all this amazing new stuff into our workflows.

Matthias’s opening talk on day one and Stephen’s closing talk on the same day complemented one another perfectly. Both managed to inspire while looking into the nitty-gritty practicalities of the web design process.

It was, astoundingly, Matthias’s first ever conference talk. I have no doubt it won’t be the last—it was great!

I gave Stephen a good-natured roast in my introduction, partly because it was his birthday, partly because we’re old friends, but mostly because it was enjoyable for me to watch him squirm. Of course his talk was, as always, superb. Don’t tell him, but he might be one of my favourite speakers.

The topic of graphic design tools came up more than once. It’s interesting to see how the issues with them have changed. It used to be that design tools—Photoshop, Sketch, Figma—were frustrating because they were writing cheques that CSS couldn’t cash. Now the frustration is the exact opposite. Our graphic design tools aren’t capable of the kind of fluid declarative design we can now accomplish in web browsers.

But the biggest rift remains not with tools or technologies, but with people and mindsets. Our tools can reinforce mindsets but the real divide happens in how different people approach CSS.

Both Josh and Kevin get to the heart of this in their tremendous tutorials, and that was reflected in their talks. They showed the difference between having the bare minimum understanding of CSS in order to get something done as quickly as possible, and truly understanding how CSS works in order to open up a world of possibilities.

For people in the first category, Sarah Dayan was there to sing the praises of utility-first CSS AKA atomic CSS. I commend her bravery!

During the Q&A, I restrained myself from being too Paxmanish. But I did have l’esprit d’escalier afterwards when I realised that the entire talk—and all the answers afterwards—depended on two mutually-incompatiable claims:

  1. The great thing about atomic CSS is that it’s a constrained vocabulary so your team has to conform, and
  2. The other great thing about it is that it’s utility-first, not utility-only so you can break out of it and use regular CSS if you want.

Insert .gif of character from The Office looking to camera.

Most of the questions coming in during the Q&A reflected my own take: how about we use utility classes for some things, but not all things. Seems sensible.

Anyway, regardless of what I or anyone else thinks about the substance of what Sarah was saying, there was no denying that it was a great presentation. They were all great presentations. That’s unusual, and I say that as a conference organiser as well as an attendee. Everyone brings their A-game to CSS Day.

Mind you, it is exhausting. I say it every year, but it always feels like one talk too many. Not that any individual talk wasn’t good, but the sheer onslaught of deep dives into the innards of CSS has my brain exploding before the day is done.

A highlight for me was getting to introduce Fantasai’s talk on the design principles of CSS, which was right up my alley. I don’t think most people realise just how much we owe her for her years of work on standards. The web would be in a worse place without the Herculean work she’s done behind the scenes.

Another highlight was getting to see some of the students I met back in March. They were showing some of their excellent work during the breaks. I find what they’re doing just as inspiring as the speakers on stage.

In fact, when I was filling in the post-conference feedback form, there was a question: “Who would you like to see speak at CSS Day next year?” I was racking my brains because everyone I could immediately think of has already spoken at some point. So I wrote, “It would be great to see some of those students speaking about their work.”

I think it would be genuinely fascinating to get their perspective on what we consider modern CSS, which to them is just CSS.

Either way I’ll back next year for sure.

It’s funny, but usually when a conference is described as “inspiring” it’s because it’s tackling big galaxy-brain questions. But CSS Day is as nitty-gritty as it gets and I found it truly inspiring. Like, I couldn’t wait to open up my laptop and start writing some CSS. That kind of inspiring.

Trust

In their rush to cram in “AI” “features”, it seems to me that many companies don’t actually understand why people use their products.

Google is acting as though its greatest asset is its search engine. Same with Bing.

Mozilla Developer Network is acting as though its greatest asset is its documentation. Same with Stack Overflow.

But their greatest asset is actually trust.

If I use a search engine I need to be able to trust that the filtering is good. If I look up documentation I need to trust that the information is good. I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

“But”, the apologists will respond, “the results are mostly correct! The documentation is mostly true!”

Sure, but as Terence puts it:

The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as “that klutzy intern we had to fire.”

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

I am honestly astonished that so many companies don’t seem to realise what they’re destroying.

InstAI

If you use Instagram, there may be a message buried in your notifications. It begins:

We’re getting ready to expand our AI at Meta experiences to your region.

Fuck that. Here’s the important bit:

To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied going forwards.

Follow that link and fill in the form. For the field labelled “Please tell us how this processing impacts you” I wrote:

It’s fucking rude.

That did the trick. I got an email saying:

We’ve reviewed your request and will honor your objection.

Mind you, there’s still this:

We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our products and services.

Headsongs

When I play music, it’s almost always instrumental. If you look at my YouTube channel almost all the videos are of me playing tunes—jigs, reels, and so on.

Most of those videos were recorded during The Situation when I posted a new tune every day for 200 consecutive days. Every so often though, I’d record a song.

I go through periods of getting obsessed with a particular song. During The Situation I remember two songs that were calling to me. New York was playing in my head as I watched my friends there suffering in March 2020. And Time (The Revelator) resonated in lockdown:

And every day is getting straighter, time’s a revelator.

Time (The Revelator) on mandolin

The song I’m obsessed with right now is called Foreign Lander. I first came across it in a beautiful version by Sarah Jarosz (I watch lots of mandolin videos on YouTube so the algorithm hardly broke a sweat showing this to me).

Time (The Revelator) on mandolin

There’s a great version by Tatiana Hargreaves too. And Tim O’Brien.

I wanted to know more about the song. I thought it might be relatively recent. The imagery of the lyrics makes it sound like something straight from a songwriter like Nick Cave:

If ever I prove false love
The elements would moan
The fire would turn to ice love
The seas would rage and burn

But the song is old. Jean Ritchie collected it, though she didn’t have to go far. She said:

Foreign Lander was my Dad’s proposal song to Mom

I found that out when I came across this thread from 2002 on mudcat.org where Jean Ritchie herself was a regular contributor!

That gave me a bit of vertiginous feeling of The Great Span, thinking about the technology that she used when she was out in the field.

In the foreground, Séamus Ennis sits with his pipes. In the background, Jean Ritchie is leaning intently over her recording equipment.

I’ve been practicing Foreign Lander and probably driving Jessica crazy as I repeat over and over and over. It’s got some tricky parts to sing and play together which is why it’s taking me a while. Once I get it down, maybe I’ll record a video.

I spent most of Saturday either singing the song or thinking about it. When I went to bed that night, tucking into a book, Foreign Lander was going ‘round in my head.

Coco—the cat who is not our cat—came in and made herself comfortable for a while.

I felt very content.

A childish little rhyme popped into my head:

With a song in my head
And a cat on my bed
I read until I sleep

I almost got up to post it as a note here on my website. Instead I told myself to do it the morning, hoping I wouldn’t forget.

That night I dreamt about Irish music sessions. Don’t worry, I’m not going to describe my dream to you—I know how boring that is for everyone but the person who had the dream.

But I was glad I hadn’t posted my little rhyme before sleeping. The dream gave me a nice little conclusion:

With a song in my head
And a cat on my bed
I read until I sleep
And dream of rooms
Filled with tunes.

Who knows?

I love it when I come across some bit of CSS I’ve never heard of before.

Take this article on the text-emphasis property.

“The what property?”, I hear you ask. That was my reaction too. But look, it’s totally a thing.

Or take this article by David Bushell called CSS Button Styles You Might Not Know.

Sure enough, halfway through the article David starts talking about styling the button in an input type="file” using the ::file-selector-button pseudo-element:

All modern browsers support it. I had no idea myself until recently.

He’s right!

Then I remembered that I’ve got a file upload input in the form I use for posting my notes here on adactio.com (in case I want to add a photo). I immediately opened up my style sheet, eager to use this new-to-me bit of CSS.

I found the bit where I style buttons and this is the selector I saw:

button,
input[type="submit"],
::file-selector-button

Huh. I guess I did know about that pseudo-element after all. Clearly the knowledge exited my brain shortly afterwards.

There’s that tautological cryptic saying, “You don’t know what you don’t know.” But I don’t even know what I do know!

What the world needs

I was having a discussion with some people recently about writing. It was quite cathartic. Everyone was sharing the kinds of things that their inner critic tells them. We were all encouraging each other to ignore that voice.

I mentioned that the two reasons for not writing that I hear most often from people are variations on “I’ve got nothing to say.”

The first version is when someone says they’ve got nothing to say because they’re not qualified to write on a particualar topic. “After all, there are real experts out there who know far more than me. So I’ve got nothing to say.”

But then once you do actually understand a topic, the second version appears. “If I know about this, then everyone knows about this. It’s obvious. So I’ve got nothing to say.”

In both cases, you absolutely should be writing and sharing! In the first instance, you’ve got the beginner’s mind—a valuable perspective. In the second instance, you’ve got personal experience—another valuable perspective.

In other words, while it seems like there’s never a good time to write about something, the truth is that there’s never a bad time to write about something.

So write! Share! Publish!

Then someone in the discussion said something I always find a bit deflating. They said they had no problem writing, but they’re not so keen on publishing.

“After all”, they said, “the world doesn’t need yet another opinion.”

This gets me down because it’s hard to argue with. It’s true that the world doesn’t need another think piece. The world doesn’t need to hear your thoughts on some topic. The world doesn’t need to hear what you’ve been up to recently.

But you know what? Screw what the world needs.

If we’re going to be hardnosed about this, then the world doesn’t need any more books. The world doesn’t need any more music. The world doesn’t need art. Heck, the world doesn’t need us at all.

So don’t publish for the world.

When I write something here on my website, I’m not thinking about the world reading it. That would be paralyzing. I do sometimes imagine that one person is reading it; someone just like me who hasn’t yet had this particular thought, or come up with that particular idea.

I’m writing for myself. I write to figure out what I think. I also publish mostly for myself—a public archive for future me. But if what I publish just happens to connect with one other person, I’m glad.

So, yeah, it’s true that the world doesn’t need you to write and share and publish. Isn’t that liberating? You’re free to write and share and publish for yourself.

Schooltijd

I was in Amsterdam last week. Usually I’m in that city for an event like the excellent CSS Day. Not this time. I was there as a guest of Vasilis. He invited me over to bother his students at the CMD (Communications and Multimedia Design) school.

There’s a specific module his students are partaking in that’s right up my alley. They’re given a PDF inheritance-tax form and told to convert it for the web.

Yes, all the excitement of taxes combined with the thrilling world of web forms.

Seriously though, I genuinely get excited by the potential for progressive enhancement here. Sure, there’s the obvious approach of building in layers; HTML first, then CSS, then a sprinkling of JavaScript. But there’s also so much potential for enhancement within each layer.

Got your form fields marked up with the right input types? Great! Now what about autocomplete, inputmode, or pattern attributes?

Got your styles all looking good on the screen? Great! Now what about print styles?

Got form validation working? Great! Now how might you use local storage to save data locally?

As well as taking this practical module, most of the students were also taking a different module looking at creative uses of CSS, like making digital fireworks, or creating works of art with a single div. It was fascinating to see how the different students responded to the different tasks. Some people loved the creative coding and dreaded the progressive enhancement. For others it was exactly the opposite.

Having to switch gears between modules reminded me of switching between prototypes and production:

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

Here’s something I noticed: the students love using :has() in CSS. That’s so great to see! Whereas I might think about how to do something for a few minutes before I think of reaching for :has(), they’ve got front of mind. I’m jealous!

In general, their challenges weren’t with the vocabulary or syntax of HTML, CSS, and JavaScript. The more universal problem was project management. Where to start? What order to do things in? How long to spend on different tasks?

If you can get good at dealing with those questions and not getting overwhelmed, then the specifics of the actual coding will be easier to handle.

This was particularly apparent when it came to JavaScript, the layer of the web stack that was scariest for many of the students.

I encouraged them to break their JavaScript enhancements into two tasks: what you want to do, and how you then execute that.

Start by writing out the logic of your script not in JavaScript, but in whatever language you’re most comfortable with: English, Dutch, whatever. In the course of writing this down, you’ll discover and solve some logical issues. You can also run your plain-language plan past a peer to sense-check it.

It’s only then that you move on to translating your logic into JavaScript. Under each line of English or Dutch, write the corresponding JavaScript. You might as well put // in front of the plain-language sentence while you’re at it to make it a comment—now you’ve got documentation baked in.

You’ll still run into problems at this point, but they’ll be the manageable problems of syntax and typos.

So in the end, it wasn’t my knowledge of specific HTML, CSS, or JavaScript APIs that proved most useful to pass on to the students. It was advice like that around how to approach HTML, CSS, or JavaScript.

I also learned a lot during my time at the school. I had some very inspiring conversations with the web developers of tomorrow. And I was really impressed by how much the students got done just in the three days I was hanging around.

I’d love to do it again sometime.

Continuous partial ick

The output of generative tools based on large language models gives me the ick.

This isn’t a measured logical response. It’s more of an involuntary emotional reaction.

I could try to justify my reaction by saying I’m concerned about the exploitation involved in the training data, or the huge energy costs involved, or the disenfranchisement of people who create art. But those would be post-facto rationalisations.

I just find myself wrinkling my nose and mentally going “Ew!” whenever somebody posts the output of some prompt they gave to ChatGPT or Midjourney.

Again, I’m not saying this is rational. It’s more instinctual.

You could well say that this is my problem. You may be right. But I wonder what it is that’s so unheimlich about these outputs that triggers my response.

Just to clarify, I am talking about direct outputs, shared verbatim. If someone were to use one of these tools in the process of creating something I’d be none the wiser. I probably couldn’t even tell that a large language model was involved at some point. I’m fine with that. It’s when someone takes something directly from one of these tools and then shares it online, that’s what raises my bile.

I was at a conference a few months back where your badge featured a hallucinated picture of you. Now, this probably sounded like a fun idea. It probably is a fun idea. I can’t tell. All I know is that it made me feel a little queasy.

Perhaps it’s a question of taste. In which case, I’m being a snob. I’m literally turning my nose up at something I deem to be tacky.

But isn’t it tacky, though? It’s not something I can describe, but there’s just something about the vibe of these images—and words—that feels off. It’s sort of creepy, but it’s mostly just the mediocrity that sits so uneasily with me.

These tools do an amazing job of solving the quantity problem—how to produce an image or piece of text quickly. And by most measurements, you could say that they also solve the quality problem. These outputs are good enough to pass for “the real thing.” The outputs are, like, 90% to 95% there. And the gap is closing.

And yet. There’s something in that gap. Something that I feel in my gut. Something that makes me go “nope.”

Creativity

It’s like a little mini conference season here in Brighton. Tomorrow is ffconf, which I’m really looking forward to. Last week was UX Brighton, which was thoroughly enjoyable.

Maybe it’s because the theme this year was all around creativity, but all of the UX Brighton speakers gave entertaining presentations. The topics of innovation and creativity were tackled from all kinds of different angles. I was having flashbacks to the Clearleft podcast episode on innovation—have a listen if you haven’t already.

As the day went on though, something was tickling at the back of my brain. Yes, it’s great to hear about ways to be more creative and unlock more innovation. But maybe there was something being left unsaid: finding novel ways of solving problems and meeting user needs should absolutely be done …once you’ve got your basics sorted out.

If your current offering is slow, hard to use, or inaccessible, that’s the place to prioritise time and investment. It doesn’t have to be at the expense of new initiatives: this can happen in parallel. But there’s no point spending all your efforts coming up with the most innovate lipstick for a pig.

On that note, I see that more and more companies are issuing breathless announcements about their new “innovative” “AI” offerings. All the veneer of creativity without any of the substance.

Crawlers

A few months back, I wrote about how Google is breaking its social contract with the web, harvesting our content not in order to send search traffic to relevant results, but to feed a large language model that will spew auto-completed sentences instead.

I still think Chris put it best:

I just think it’s fuckin’ rude.

When it comes to the crawlers that are ingesting our words to feed large language models, Neil Clarke describes the situtation:

It should be strictly opt-in. No one should be required to provide their work for free to any person or organization. The online community is under no responsibility to help them create their products. Some will declare that I am “Anti-AI” for saying such things, but that would be a misrepresentation. I am not declaring that these systems should be torn down, simply that their developers aren’t entitled to our work. They can still build those systems with purchased or donated data.

Alas, the current situation is opt-out. The onus is on us to update our robots.txt file.

Neil handily provides the current list to add to your file. Pass it on:

User-agent: CCBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Omgilibot
Disallow: /

User-agent: FacebookBot
Disallow: /

In theory you should be able to group those user agents together, but citation needed on whether that’s honoured everywhere:

User-agent: CCBot
User-agent: ChatGPT-User
User-agent: GPTBot
User-agent: Google-Extended
User-agent: Omgilibot
User-agent: FacebookBot
Disallow: /

There’s a bigger issue with robots.txt though. It too is a social contract. And as we’ve seen, when it comes to large language models, social contracts are being ripped up by the companies looking to feed their beasts.

As Jim says:

I realized why I hadn’t yet added any rules to my robots.txt: I have zero faith in it.

That realisation was prompted in part by Manuel Moreale’s experiment with blocking crawlers:

So, what’s the takeaway here? I guess that the vast majority of crawlers don’t give a shit about your robots.txt.

Time to up the ante. Neil’s post offers an option if you’re running Apache. Either in .htaccess or in a .conf file, you can block user agents using mod_rewrite:

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (CCBot|ChatGPT|GPTBot|Omgilibot| FacebookBot) [NC]
RewriteRule ^ – [F]

You’ll see that Google-Extended isn’t that list. It isn’t a crawler. Rather it’s the permissions model that Google have implemented for using your site’s content to train large language models: unless you opt out via robots.txt, it’s assumed that you’re totally fine with your content being used to feed their stochastic parrots.

Simon’s rule

I got a nice email from someone regarding my recent posts about performance on The Session. They said:

I hope this message finds you well. First and foremost, I want to express how impressed I am with the overall performance of https://thesession.org/. It’s a fantastic resource for music enthusiasts like me.

How nice! I responded, thanking them for the kind words.

They sent a follow-up clarification:

Awesome, anyway there was an issue in my message.

The line ‘It’s a fantastic resource for music enthusiasts like me.’ added by chatGPT and I didn’t notice.

I imagine this is what it feels like when you’re on a phone call with someone and towards the end of the call you hear a distinct flushing sound.

I wrote back and told them about Simon’s rule:

I will not publish anything that takes someone else longer to read than it took me to write.

That just feels so rude!

I think that’s a good rule.

Automation

I just described prototype code as code to be thrown away. On that topic…

I’ve been observing how people are programming with large language models and I’ve seen a few trends.

The first thing that just about everyone agrees on is that the code produced by a generative tool is not fit for public consumption. At least not straight away. It definitely needs to be checked and tested. If you enjoy debugging and doing code reviews, this might be right up your street.

The other option is to not use these tools for production code at all. Instead use them for throwaway code. That could be prototyping. But it could also be the code for those annoying admin tasks that you don’t do very often.

Take content migration. Say you need to grab a data dump, do some operations on the data to transform it in some way, and then pipe the results into a new content management system.

That’s almost certainly something you’d want to automate with bespoke code. Once the content migration is done, the code can be thrown away.

Read Matt’s account of coding up his Braggoscope. The code needed to spider a thousand web pages, extract data from those pages, find similarities, and output the newly-structured data in a different format.

I’ve noticed that these are just the kind of tasks that large language models are pretty good at. In effect you’re training the tool on your own very specific data and getting it to do your drudge work for you.

To me, it feels right that the usefulness happens on your own machine. You don’t put the machine-generated code in front of other humans.

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I coulnd’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Talking about “web3” and “AI”

When I was hosting the DIBI conference in Edinburgh back in May, I moderated an impromptu panel on AI:

On the whole, it stayed quite grounded and mercifully free of hyperbole. Both speakers were treating the current crop of technologies as tools. Everyone agreed we were on the hype cycle, probably the peak of inflated expectations, looking forward to reaching the plateau of productivity.

Something else that happened at that event was that I met Deborah Dawton from the Design Business Association. She must’ve liked the cut of my jib because she invited me to come and speak at their get-together in Brighton on the topic of “AI, Web3 and design.”

The representative from the DBA who contacted me knew what they were letting themselves in for. They wrote:

I’ve read a few of your posts on the subject and it would be great if you could join us to share your perspectives.

How could I say no?

I’ve published a transcript of the short talk I gave.

Nailspotting

I’m sure you’ve heard the law of the instrument: when all you have is a hammer, everything looks like a nail.

There’s another side to it. If you’re selling hammers, you’ll depict a world full of nails.

Recent hammers include cryptobollocks and virtual reality. It wasn’t enough for blockchains and the metaverse to be potentially useful for some situations; they staked their reputations on being utterly transformative, disrupting absolutely every facet of life.

This kind of hype is a terrible strategy in the long-term. But if you can convince enough people in the short term, you can make a killing on the stock market. In truth, the technology itself is superfluous. It’s the hype that matters. And if the hype is over-inflated enough, you can even get your critics to do your work for you, broadcasting their fears about these supposedly world-changing technologies.

You’d think we’d learn. If an industry cries wolf enough times, surely we’d become less trusting of extraordinary claims. But the tech industry continues to cry wolf—or rather, “hammer!”—at regular intervals.

The latest hammer is machine learning, usually—incorrectly—referred to as Artificial Intelligence. What makes this hype cycle particularly infuriating is that there are genuine use cases. There are some nails for this hammer. They’re just not as plentiful as the breathless hype—both positive and negative—would have you believe.

When I was hosting the DiBi conference last week, there was a little section on generative “AI” tools. Matt Garbutt covered the visual side, demoing tools like Midjourney. Scott Salisbury covered the text side, showing how you can generate code. Afterwards we had a panel discussion.

During the panel I asked some fairly straightforward questions that nobody could answer. Who owns the input (the data used by these generative tools)? Who owns the output?

On the whole, it stayed quite grounded and mercifully free of hyperbole. Both speakers were treating the current crop of technologies as tools. Everyone agreed we were on the hype cycle, probably the peak of inflated expectations, looking forward to reaching the plateau of productivity.

Scott explicitly warned people off using generative tools for production code. His advice was to stick to side projects for now.

Matt took a closer look at where these tools could fit into your day-to-day design work. Mostly it was pretty sensible, except when he suggested that there could be any merit to using these tools as a replacement for user testing. That’s a terrible idea. A classic hammer/nail mismatch.

I think I moderated the panel reasonably well, but I have one regret. I wish I had first read Baldur Bjarnason’s new book, The Intelligence Illusion. I started reading it on the train journey back from Edinburgh but it would have been perfect for the panel.

The Intelligence Illusion is very level-headed. It is neither pro- nor anti-AI. Instead it takes a pragmatic look at both the benefits and the risks of using these tools in your business.

It has excellent advice for spotting genuine nails. For example:

Generative AI has impressive capabilities for converting and modifying seemingly unstructured data, such as prose, images, and audio. Using these tools for this purpose has less copyright risk, fewer legal risks, and is less error prone than using it to generate original output.

Think about transcripts of videos or podcasts—an excellent use of this technology. As Baldur puts it:

The safest and, probably, the most productive way to use generative AI is to not use it as generative AI. Instead, use it to explain, convert, or modify.

He also says:

Prefer internal tools over externally-facing chatbots.

That chimes with what I’ve been seeing. The most interesting uses of this technology that I’ve seen involve a constrained dataset. Like the way Luke trained a language model on his own content to create a useful chat interface.

Anyway, The Intelligence Illusion is full of practical down-to-earth advice based on plenty of research backed up with copious citations. I’m only halfway through it and it’s already helped me separate the hype from the reality.