Journal tags: ads


Ad revenue

It’s been dispiriting but unsurprising to see American commentators weigh in on the EU’s Digital Markets Act. I really wish they’d read Baldur’s excellent explainer first.

John has been doing his predictable “leave Britney alone!” schtick with regard to Apple (and in this case, Google and Facebook too). Ian Betteridge does an excellent job of setting him straight:

A lot of commentators seem to have the same issue as John: that it’s weird that a governmental body can or should define how products should be designed.

But governments mandate how products are designed all the time, and not just in the EU. Take another market which is pretty big: cars. All cars have to feature safety equipment, which varies from region to region but will broadly include everything from seatbelts to crumple zones. Cars have rules for emissions, for fuel efficiency, all of which are designing how a car should work.

But there’s one assumption in John’s post that Ian didn’t push back on. John said:

It’s certainly possible that Meta can devise ways to serve non-personalized contextual ads that generate sufficient revenue per user.

That comes with a footnote:

One obvious solution would be to show more ads — a lot more ads — to make up for the difference in revenue. So if contextual ads generate, say, one-tenth the revenue of targeted ads, Meta could show 10 times as many ads to users who opt out of targeting. I don’t think 10× is an outlandish multiplier there — given how remarkably profitable Meta’s advertising business is, it might even need to be higher than that.

It’s almost like an article of faith that behavioural advertising is more effective than contextual advertising. But there’s no data to support this. Quite the opposite. I wrote about this four years ago.

Once again, I urge you to read the excellent analysis by Jesse Frederik and Maurits Martijn.

There’s also Tim Hwang’s book, Subprime Attention Crisis:

From the unreliability of advertising numbers and the unregulated automation of advertising bidding wars, to the simple fact that online ads mostly fail to work, Hwang demonstrates that while consumers’ attention has never been more prized, the true value of that attention itself—much like subprime mortgages—is wildly misrepresented.

More recently Dave Karpf said what we’re all thinking:

The thing I want to stress about microtargeted ads is that the current version is perpetually trash, and we’re always just a few years away from the bugs getting worked out.

The EFF are calling for a ban. Should that happen, the sky would not fall. Contrary to what John thinks, revenue would not plummet. Contextual advertising works just fine …without the need for invasive surveillance and tracking.

Like I said:

Tracker-driven behavioural advertising is bad for users. The advertisements are irrelevant most of the time, and on the few occasions where the advertising hits the mark, it just feels creepy.

Tracker-driven behavioural advertising is bad for advertisers. They spend their hard-earned money on invasive ad tech that results in no more sales or brand recognition than if they had relied on good ol’ contextual advertising.

Tracker-driven behavioural advertising is very bad for the web. Megabytes of third-party JavaScript are injected at exactly the wrong moment to make for the worst possible performance. And if that doesn’t ruin the user experience enough, there are still invasive overlays and consent forms to click through (which, ironically, gets people mad at the legislation—like GDPR—instead of the underlying reason for these annoying overlays: unnecessary surveillance and tracking by the site you’re visiting).

Alt writing

I made the website for this year’s UX London by hand.

Well, that’s not entirely true. There’s exactly one build tool involved. I’m using Sergey to include global elements—the header and footer—something that’s still not possible in HTML.

So it’s minimum viable static site generation rather than actual static files. It’s still very hands-on though, and I enjoy that a lot: editing HTML and CSS directly without intermediary tools.
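For anyone curious, Sergey’s includes are written as custom elements right in the HTML. Here’s a minimal sketch, assuming a shared header and footer partial (the src values are placeholders; check the Sergey documentation for the exact folder conventions):

<!-- a sketch of pulling in shared partials with Sergey;
     the src values here are placeholders -->
<body>
  <sergey-import src="header" />
  <main>
    <!-- page-specific HTML, written by hand -->
  </main>
  <sergey-import src="footer" />
</body>

At build time those elements get swapped out for the contents of the corresponding partials, leaving plain static HTML.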

When I update the site, it’s usually to add a new speaker to the line-up (well, not any more now that the line-up is complete). That involves marking up their bio and talk description. I also create a couple of different sized versions of their headshot to use with srcset. And of course I write an alt attribute to accompany that image.
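To give a rough idea of what that markup ends up looking like, here’s a sketch. The speaker name, file paths, widths, and sizes value are all made up for illustration:

<img src="/speakers/jane-doe-300.jpg"
     srcset="/speakers/jane-doe-300.jpg 300w,
             /speakers/jane-doe-600.jpg 600w"
     sizes="(min-width: 50em) 300px, 50vw"
     alt="Jane smiling at the camera, wearing glasses.">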

By the way, Jake has an excellent article on writing alt text that uses the specific example of a conference site. It raises some very thought-provoking questions.

I enjoy writing alt text. I recently described how I updated my posting interface here on my own site to put a textarea for alt text front and centre for my notes with photos. Since then I’ve been enjoying the creative challenge of writing useful—but also evocative—alt text.

Some recent examples:

But when I was writing the alt text for the headshots on the UX London site, I started to feel a little disheartened. The more speakers were added to the line-up, the more I felt like I was repeating myself with the alt text. After a while they all seemed to be some variation on “This person looking at the camera, smiling” with maybe some detail on their hair or clothing.

  • Videha Sharma
    The beaming bearded face of Videha standing in front of the beautiful landscape of a riverbank.
  • Candi Williams
    Candi working on her laptop, looking at the camera with a smile.
  • Emma Parnell
    Emma smiling against a yellow background. She’s wearing glasses and has long straight hair.
  • John Bevan
    A monochrome portrait of John with a wry smile on his face, wearing a black turtleneck in the clichéd design tradition.
  • Laura Yarrow
    Laura smiling, wearing a chartreuse coloured top.
  • Adekunle Oduye
    A profile shot of Adekunle wearing a jacket and baseball cap standing outside.

The more speakers were added to the line-up, the harder I found it not to repeat myself. I wondered if this was all going to sound very same-y to anyone hearing them read aloud.

But then I realised, “Wait …these are kind of same-y images.”

By the very nature of the images—headshots of speakers—there wasn’t ever going to be that much visual variation. The experience of a sighted person looking at a page full of speakers is that after a while the images kind of blend together. So if the alt text also starts to sound a bit repetitive after a while, maybe that’s not such a bad thing. A screen reader user would be getting an equivalent experience.

That doesn’t mean it’s okay to have the same alt text for each image—they are all still different. But after I had that realisation I stopped being too hard on myself if I couldn’t come up with a completely new and original way to write the alt text.

And, I remind myself, writing alt text is like any other kind of writing. The more you do it, the better you get.

Upgrades and polyfills

I started getting some emails recently from people having issues using The Session. The issues sounded similar—an interactive component that wasn’t, well …interacting.

When I asked what device or browser they were using, the answer came back the same: Safari on iPad. But not a new iPad. These were older iPads running older operating systems.

Now, remember, even if I wanted to recommend that they use a different browser, that’s not an option:

Safari is the only browser on iOS devices.

I don’t mean it’s the only browser that ships with iOS devices. I mean it’s the only browser that can be installed on iOS devices.

You can install something called Chrome. You can install something called Firefox. Those aren’t different web browsers. Under the hood they’re using Safari’s rendering engine. They have to.

It gets worse. Not only is there no choice when it comes to rendering engines on iOS, but the rendering engine is also tied to the operating system.

If you’re on an old Apple laptop, you can at least install an up-to-date version of Firefox or Chrome. But you can’t install an up-to-date version of Safari. An up-to-date version of Safari requires an up-to-date version of the operating system.

It’s the same on iOS devices—you can’t install a newer version of Safari without installing a newer version of iOS. But unlike the laptop scenario, you can’t install any version of Firefox or Chrome.

It’s disgraceful.

It’s particularly frustrating when an older device can’t upgrade its operating system. Operating system upgrades generally have some hardware requirements. If your device doesn’t meet those requirements, you can’t upgrade your operating system. That wouldn’t matter so much except for the Safari issue. Without an upgraded operating system, your web browsing experience stagnates unnecessarily.

For want of a nail

  • A website feature isn’t working so
  • you need to upgrade your browser which means
  • you need to upgrade your operating system but
  • you can’t upgrade your operating system so
  • you need to buy a new device.

Apple doesn’t allow other browsers to be installed on iOS devices so people have to buy new devices if they want to use the web. Handy for Apple. Bad for users. Really bad for the planet.

It’s particularly galling when it comes to iPads. Those are exactly the kind of casual-use devices that shouldn’t need to be caught in the wasteful cycle of being used for a while before getting thrown away. I mean, I get why you might want to have a relatively modern phone—a device that’s constantly with you that you use all the time—but an iPad is the perfect device to just have lying around. You shouldn’t feel pressured to have the latest model if the older version still does the job:

An older tablet makes a great tableside companion in your living room, an effective e-book reader, or a light-duty device for reading mail or checking your favorite websites.

Hang on, though. There’s another angle to this. Why should a website demand an up-to-date browser? If the website has been built using the tried and tested approach of progressive enhancement, then everyone should be able to achieve their goals regardless of what browser or device or operating system they’re using.

On The Session, I’m using progressive enhancement and feature detection everywhere I can. If, for example, I’ve got some JavaScript that’s going to use querySelectorAll and addEventListener, I’ll first test that those methods are available.

if (!document.querySelectorAll || !window.addEventListener) {
  // doesn't cut the mustard.
  return;
}

I try not to assume that anything is supported. So why was I getting emails from people with older iPads describing an interaction that wasn’t working? A JavaScript error was being thrown somewhere and—because of JavaScript’s brittle error-handling—that was causing all the subsequent JavaScript to fail.

I tracked the problem down to a function that was using some DOM methods—matches and closest—as well as the relatively recent JavaScript forEach method. But I had polyfills in place for all of those. Here’s the polyfill I’m using for matches and closest. And here’s the polyfill I’m using for forEach.

Then I spotted the problem. I was using forEach to loop through the results of querySelectorAll. But the polyfill works on arrays. Technically, the output of querySelectorAll isn’t an array. It looks like an array, it quacks like an array, but it’s actually a node list.
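So on one of those older iPads, a line like this hypothetical one would throw a TypeError, and because that error goes uncaught, none of the JavaScript after it gets to run:

// On a browser without NodeList.prototype.forEach, calling forEach on the
// result of querySelectorAll throws a TypeError and execution stops here.
// (The selector and addFancyBehaviour are made up for illustration.)
document.querySelectorAll('.interactive-thing').forEach(addFancyBehaviour);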

So I added this polyfill from Chris Ferdinandi.
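The gist of it is tiny: node lists can simply borrow the existing array method. This is a sketch of the idea rather than a copy of Chris’s exact code:

// If the browser has NodeList but not NodeList.prototype.forEach,
// reuse the (possibly polyfilled) Array.prototype.forEach implementation.
if (window.NodeList && !NodeList.prototype.forEach) {
  NodeList.prototype.forEach = Array.prototype.forEach;
}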

That did the trick. I checked with the people with those older iPads and everything is now working just fine.

For the record, here’s the small collection of polyfills I’m using. Polyfills are supposed to be temporary. At some stage, as everyone upgrades their browsers, I should be able to remove them. But as long as some people are stuck with using an older browser, I have to keep those polyfills around.

I wish that Apple would allow other rendering engines to be installed on iOS devices. But if that’s a hell-freezing-over prospect, I wish that Safari updates weren’t tied to operating system updates.

Apple may argue that their browser rendering engine and their operating system are deeply intertwingled. That line of defence worked out great for Microsoft in the ’90s.

Netlify redirects and downloads

Making the Clearleft podcast is a lot of fun. Making the website for the Clearleft podcast was also fun.

Design-wise, it’s a riff on the main Clearleft site in terms of typography and general layout. On the development side, it was an opportunity to try out an exciting tech stack. The workflow goes something like this:

  • Open a text editor and type out HTML and CSS.

Comparing this to other workflows I’ve used in the past, this is definitely the most productive way of working. Some stats:

  • Time spent setting up build tools: 00:00
  • Time spent wrangling the pipeline to do exactly what you want: 00:00
  • Time spent trying to get the damn build tools to work again when you return to the project after leaving it alone for more than a few months: 00:00:00

I have some files. Some images, three font files, a few pages of HTML, one RSS feed, one style sheet, and one minimal service worker script. I don’t need a web server to do anything more than serve up those files. No need for any dynamic server-side processing.

I guess this is JAMstack. Though, given that the J stands for JavaScript, the A stands for APIs, and I’m not using either, technically it’s Mstack.

Netlify suits my hosting needs nicely. It also provides the added benefit that, should I need to update my CSS, I don’t need to add a query string or anything to the link elements in the HTML that point to the style sheet: Netlify does cache invalidation for you!

The mp3 files of the actual podcast episodes are stored on S3. I link to those mp3 files from enclosure elements in the RSS feed, which is what makes it a podcast. I also point to the mp3 files from audio elements on the individual episode pages—just above the transcript of each episode. Here’s the page for the most recent episode.

I also want people to be able to download the mp3 file directly if they want (or if they want to huffduff an episode). So I provide a link to the mp3 file with a good ol’-fashioned a element with an href attribute.

I throw in one more attribute on that link. The download attribute tells the browser that the URL in the href attribute should be downloaded instead of visited. If you give a value for the download attribute, it will override the file name:

<a href="/files/ugly-file-name.xyz" download="nice-file-name.xyz">download</a>

Or you can use it as a Boolean attribute without any value if you’re happy with the file name:

<a href="/files/nice-file-name.xyz" download>download</a>

There’s one catch though. The download attribute only works for files on the same origin. That’s an issue for me. My site is podcast.clearleft.com but my audio files are hosted on clearleft-audio.s3.amazonaws.com—the download attribute will be ignored and the mp3 files will play in the browser instead of downloading.

Trys pointed me to the solution. It turns out that Netlify can do some server-side processing. It can do redirects.

I added a file called _redirects to the root of my project. It contains one line:

/download/*  https://clearleft-audio.s3.amazonaws.com/podcast/:splat  200

That says that any URLs beginning with /download/ should redirect to clearleft-audio.s3.amazonaws.com/podcast/. Everything after the closing slash is captured with that wild card asterisk. That’s then passed along to the redirect URL as :splat. That’s a new one on me. I hadn’t come across that terminology, but as someone who can never remember the syntax of regular expressions, it works for me.

Oh, and the 200 at the end is the status code: okay.

Now I can use this /download/ path in my link:

<a href="/download/season01episode06.mp3" download>Download mp3</a>

Because this URL is on the same origin, the download attribute works just fine.

Navigation preloads in service workers

There’s a feature in service workers called navigation preloads. It’s relatively recent, so it isn’t supported in every browser, but it’s still well worth using.

Here’s the problem it solves…

If someone makes a return visit to your site, and the service worker you installed on their machine isn’t active yet, the service worker boots up, and then executes its instructions. If those instructions say “fetch the page from the network”, then you’re basically telling the browser to do what it would’ve done anyway if there were no service worker installed. The only difference is that there’s been a slight delay because the service worker had to boot up first.

  1. The service worker activates.
  2. The service worker fetches the file.
  3. The service worker does something with the response.

It’s not a massive performance hit, but it’s still a bit annoying. It would be better if the service worker could boot up and still be requesting the page at the same time, like it would do if no service worker were present. That’s where navigation preloads come in.

  1. The service worker activates while simultaneously requesting the file.
  2. The service worker does something with the response.

Navigation preloads—like the name suggests—are only initiated when someone navigates to a URL on your site, either by following a link, or a bookmark, or by typing a URL directly into a browser. Navigation preloads don’t apply to requests made by a web page for things like images, style sheets, and scripts. By the time a request is made for one of those, the service worker is already up and running.

To enable navigation preloads, call the enable() method on registration.navigationPreload during the activate event in your service worker script. But first do a little feature detection to make sure registration.navigationPreload exists in this browser:

if (registration.navigationPreload) {
  addEventListener('activate', activateEvent => {
    activateEvent.waitUntil(
      registration.navigationPreload.enable()
    );
  });
}

If you’ve already got event listeners on the activate event, that’s absolutely fine: addEventListener isn’t exclusive—you can use it to assign multiple tasks to the same event.
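For example, a second activate listener like this hypothetical one can sit happily alongside the navigation preload one above; both callbacks will run when the event fires:

// A hypothetical second listener on the same activate event.
// ('old-cache-name' is a made-up cache name for illustration.)
addEventListener('activate', activateEvent => {
  activateEvent.waitUntil(
    caches.delete('old-cache-name')
  );
});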

Now you need to make use of navigation preloads when you’re responding to fetch events. So if your strategy is to look in the cache first, there’s probably no point enabling navigation preloads. But if your default strategy is to fetch a page from the network, this will help.

Let’s say your current strategy for handling page requests looks like this:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)
      .then( responseFromFetch => {
        // maybe cache this response for later here.
        return responseFromFetch;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});

That’s a fairly standard strategy: try the network first; if that doesn’t work, try the cache; as a last resort, show an offline page.

It’s that first step (“try the network first”) that can benefit from navigation preloads. If a preload request is already in flight, you’ll want to use that instead of firing off a new fetch request. Otherwise you’re making two requests for the same file.

To find out if a preload request is underway, you can check for the existence of the preloadResponse promise, which will be made available as a property of the fetch event you’re handling:

fetchEvent.preloadResponse

If that exists, you’ll want to use it instead of fetch(request).

if (fetchEvent.preloadResponse) {
  // do something with fetchEvent.preloadResponse
} else {
  // do something with fetch(request)
}

You could structure your code like this:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    if (fetchEvent.preloadResponse) {
      fetchEvent.respondWith(
        fetchEvent.preloadResponse
        .then( responseFromPreload => {
          // maybe cache this response for later here.
          return responseFromPreload;
        })
        .catch( preloadError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    } else {
      fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
          // maybe cache this response for later here.
          return responseFromFetch;
        })
        .catch( fetchError => {
          return caches.match(request)
          .then( responseFromCache => {
            return responseFromCache || caches.match('/offline');
          });
        })
      );
    }
  }
});

But that’s not very DRY. Your logic is identical, regardless of whether the response is coming from fetch(request) or from fetchEvent.preloadResponse. It would be better if you could minimise the amount of duplication.

One way of doing that is to abstract away the promise you’re going to use into a variable. Let’s call it retrieve. If a preload is underway, we’ll assign it to that variable:

let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
}

If there is no preload happening (or this browser doesn’t support it), assign a regular fetch request to the retrieve variable:

let retrieve;
if (fetchEvent.preloadResponse) {
  retrieve = fetchEvent.preloadResponse;
} else {
  retrieve = fetch(request);
}

If you like, you can squash that into a ternary operator:

const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);

Use whichever syntax you find more readable.

Now you can apply the same logic, regardless of whether retrieve is a preload navigation or a fetch request:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    const retrieve = fetchEvent.preloadResponse ? fetchEvent.preloadResponse : fetch(request);
    fetchEvent.respondWith(
      retrieve
      .then( responseFromRetrieve => {
        // maybe cache this response for later here.
        return responseFromRetrieve;
      })
      .catch( fetchError => {
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline');
        });
      })
    );
  }
});

I think that’s the least invasive way to update your existing service worker script to take advantage of navigation preloads.

Like I said, navigation preloads can give a bit of a performance boost if you’re using a network-first strategy. That’s what I’m doing here on adactio.com and on thesession.org, so I’ve updated their service workers to take advantage of navigation preloads. But on Resilient Web Design, which uses a cache-first strategy, there wouldn’t be much point enabling navigation preloads.
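For comparison, here’s a minimal sketch of a cache-first handler (not the actual Resilient Web Design service worker). Because the response usually comes straight from the cache, a preload request would mostly be wasted bandwidth:

// Cache-first sketch: look in the cache, only hit the network as a fallback.
// A navigation preload here would fire off requests that usually get discarded.
addEventListener('fetch', fetchEvent => {
  fetchEvent.respondWith(
    caches.match(fetchEvent.request)
    .then( responseFromCache => {
      return responseFromCache || fetch(fetchEvent.request);
    })
  );
});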

Jeff Posnick made this point in his write-up of bringing service workers to Google search:

Adding a service worker to your web app means inserting an additional piece of JavaScript that needs to be loaded and executed before your web app gets responses to its requests. If those responses end up coming from a local cache rather than from the network, then the overhead of running the service worker is usually negligible in comparison to the performance win from going cache-first. But if you know that your service worker always has to consult the network when handling navigation requests, using navigation preload is a crucial performance win.

Oh, and those browsers that don’t yet support navigation preloads? No problem. It’s a progressive enhancement. Everything still works just like it did before. And having a service worker on your site in the first place is itself a progressive enhancement. So enabling navigation preloads is like a progressive enhancement within a progressive enhancement. It’s progressive enhancements all the way down!

By the way, if all of this service worker stuff sounds like gibberish, but you wish you understood it, I think my book, Going Offline, will prove quite valuable.

Singapore

I was in Singapore last week. It was most relaxing. Sure, it’s Disneyland With The Death Penalty but the food is wonderful.

Photos: chicken rice, fishball noodles, laksa, grilled pork.

But I wasn’t just there to sample the delights of the hawker centres. I had been invited by Mozilla to join them on the opening leg of their Developer Roadshow. We assembled in the PayPal offices one evening for a rapid-fire round of talks on emerging technologies.

We got an introduction to Quantum, the new rendering engine in Firefox. It’s looking good. And fast. Oh, and we finally get support for input type="date".

But this wasn’t a product pitch. Most of the talks were by non-Mozillians working on the cutting edge of technologies. I kicked things off with a slimmed-down version of my talk on evaluating technology. Then we heard from experts in everything from CSS to VR.

The highlight for me was meeting Hui Jing and watching her presentation on CSS layout. It was fantastic! Entertaining and informative, it was presented with gusto. I think it got everyone in the room very excited about CSS Grid.

The Singapore stop was the only one I was able to make, but Hui Jing has been chronicling the whole trip. Sounds like quite a whirlwind tour. I’m so glad I was able to join in even for a portion. Thanks to Sandra and Ali for inviting me along—much appreciated.

I’ll also be speaking at Mozilla’s View Source in London in a few weeks, where I’ll be talking about building blocks of the Indie Web:

In these times of centralised services like Facebook, Twitter, and Medium, having your own website is downright disruptive. If you care about the longevity of your online presence, independent publishing is the way to go. But how can you get all the benefits of those third-party services while still owning your own data? By using the building blocks of the Indie Web, that’s how!

‘Twould be lovely to see you there.

On The Verge

Quite a few people have been linking to an article on The Verge with the inflammatory title The mobile web sucks. In it, Nilay Patel heaps blame upon mobile browsers, Safari in particular:

But man, the web browsers on phones are terrible. They are an abomination of bad user experience, poor performance, and overall disdain for the open web that kicked off the modern tech revolution.

Les Orchard says what we’re all thinking in his detailed response The Verge’s web sucks:

Calling out browser makers for the performance of sites like his? That’s a bit much.

Nilay does acknowledge that the Verge could do better:

Now, I happen to work at a media company, and I happen to run a website that can be bloated and slow. Some of this is our fault: The Verge is ultra-complicated, we have huge images, and we serve ads from our own direct sales and a variety of programmatic networks.

But still, it sounds like the buck is being passed along. The performance issues are being treated as Somebody Else’s Problem …ad networks, trackers, etc.

The developers at Vox Media take a different, and in my opinion, more correct view. They’re declaring performance bankruptcy:

I mean, let’s cut to the chase here… our sites are friggin’ slow, okay!

But I worry about how they can possibly reconcile their desire for a faster website with a culture that accepts enormously bloated ads and trackers as the inevitable price of doing business on the web:

I’m hearing an awful lot of false dichotomies here: either you can have a performant website or you have a business model based on advertising. Here’s another false dichotomy:

If the message coming down from above is that performance concerns and business concerns are fundamentally at odds, then I just don’t know how the developers are ever going to create a culture of performance (which is a real shame, because they sound like a great bunch). It’s a particularly bizarre false dichotomy to be foisting when you consider that all the evidence points to performance as being a key differentiator when it comes to making moolah.

It’s funny, but I take almost the opposite view that Nilay puts forth in his original article. Instead of thinking “Oh, why won’t these awful browsers improve to be better at delivering our websites?”, I tend to think “Oh, why won’t these awful websites improve to be better at taking advantage of our browsers?” After all, it doesn’t seem like that long ago that web browsers on mobile really were awful; incapable of rendering the “real” web, instead only able to deal with WAP.

As Maciej says in his magnificent presentation Web Design: The First 100 Years:

As soon as a system shows signs of performance, developers will add enough abstraction to make it borderline unusable. Software forever remains at the limits of what people will put up with. Developers and designers together create overweight systems in hopes that the hardware will catch up in time and cover their mistakes.

We complained for years that browsers couldn’t do layout and javascript consistently. As soon as that got fixed, we got busy writing libraries that reimplemented the browser within itself, only slower.

I fear that if Nilay got his wish and mobile browsers made a quantum leap in performance tomorrow, the result would be even more bloated JavaScript for even more ads and trackers on websites like The Verge.

If anything, browser makers might have to take more drastic steps to route around the damage of bloated websites with invasive tracking.

We’ve been here before. When JavaScript first landed in web browsers, it was quickly adopted for three primary use cases:

  1. swapping out images when the user moused over a link,
  2. doing really bad client-side form validation, and
  3. spawning pop-up windows.

The first use case was so popular, it was moved from a procedural language (JavaScript) to a declarative language (CSS). The second use case is still with us today. The third use case was solved by browsers. They added a preference to block unwanted pop-ups.

Tracking and advertising scripts are today’s equivalent of pop-up windows. There are already plenty of tools out there to route around their damage: Ghostery, Adblock Plus, etc., along with tools like Instapaper, Readability, and Pocket.

I’m sure that business owners felt the same way about pop-up ads back in the late ’90s. Just the price of doing business. Shrug shoulders. Just the way things are. Nothing we can do to change that.

For such a young, supposedly-innovative industry, I’m often amazed at what people choose to treat as immovable, unchangeable, carved-in-stone issues. Bloated, invasive ad tracking isn’t a law of nature. It’s a choice. We can choose to change.

Every bloated advertising and tracking script on a website was added by a person. What if that person refused? I guess that person would be fired and another person would be told to add the script. What if that person refused? What if we had a web developer picket line that we collectively refused to cross?

That’s an unrealistic, drastic suggestion. But the way that the web is being destroyed by our collective culpability calls for drastic measures.

By the way, the pop-up ad was first created by Ethan Zuckerman. He has since apologised. What will you be apologising for in decades to come?

Reckoning

When I gave a talk with Andy at South by Southwest I jokingly made reference to the Greater Internet Fuckwad Theory. After reading Kathy Sierra’s last post, that joke isn’t funny anymore.

I just want to add my voice to the chorus of right-minded people who find this situation unacceptable. I wish there was some way I could channel my anger into something that would help. In the meantime, I can only echo Tim Bray’s call:

Anyone who is remotely connected with the people doing this needs to dig that story up and tell the community.

There will be a reckoning (and I don’t mean that in some wishy-washy karmic afterlife way).

Gillian McKeith is not a doctor

I don’t like contributing something as simple as “me too!” but I just had to +1 Tom’s post on Ben Goldacre on Gillian McKeith. As he puts it:

There are times when I feel that Ben Goldacre—author of the Guardian’s Bad Science column—should be knighted.

I couldn’t agree more. Be sure to visit his website, Bad Science. As a fan of popular science—by which I mean fascinating subjects made accessible to plebs like me—I applaud Ben Goldacre’s Sisyphean work in calling the British press on their over-reliance on pseudo-science. His tireless work on exposing the junk science behind the anti-MMR stories alone deserves everyone’s respect and gratitude.

His latest column, A menace to science, quite rightly exposes Gillian McKeith—the TV presenter with a surname worryingly similar to my own—as the crackpot that she is. The article concentrates on her ludicrous “scientific” claims rather than focusing on the side-issue that she is completely unqualified, but I’ve decided to title this post Gillian McKeith is not a doctor for the benefit of future Googlers. It’s official:

A regular from my website badscience.net — I can barely contain my pride — took McKeith to the Advertising Standards Authority, complaining about her using the title “doctor” on the basis of a qualification gained by correspondence course from a non-accredited American college. He won.

With any luck, I’ll receive one of McKeith’s famous cease-and-desist threats.

In other news from Tom, he’s feeling mightily jetlagged, the poor bastard. Having just flown back from Vancouver—a time zone difference of eight hours—I should be in a position to commiserate. But, touch wood, I seem to have mercifully escaped the ravages of jetlag.

And debate goes on

The RSS vine is humming with point and counterpoint this week.

Adobe revealed their new range of icons, based on mashing up a colour wheel with the periodic table of the elements. Lots of people don’t like ’em: Stan doesn’t; Dave doesn’t. Some people do like ’em: Veerle does. I can’t say I’m all that keen on them but I honestly can’t muster up much strength of conviction either way.

Let us leave the designers for a moment and cast our gaze upon the hot topic amongst the techy crowd…

Dave Winer looked at JSON and didn’t like what he saw:

Gotta love em, because there’s no way they’re going to stop breaking what works, and fixing what don’t need no fixing.

James Bennett wrote an excellent response:

Of course, this ignores the fact that the Lisp folks have been making the same argument for years, wondering why there was this great pressing need to go out and invent XML when s-expressions were just dandy.

The debate continues over on Scripting.com, where the best comment comes from Douglas Crockford:

The good thing about reinventing the wheel is that you can get a round one.

The discussion continues. Be it icons or data formats, the discourse remains remarkably civil. Perhaps it’s the seasonal spirit of goodwill. Whatever happened to the good ol’ “Mac vs. Windows”-style flame wars?

In contrast, Roger has posted a refreshingly curmudgeonesque list entitled Six things that suck about the Web in 2006. He had me nodding my head in vigorous agreement with point number six:

Over-wide, fixed width layouts. Go wide if you must. Use a fixed width if you don’t know how to make a flexible layout. But don’t do both. Horizontal scrolling, no thanks.

Perhaps I should post my own list of things about the Web that suck, but I fear it would be a never-ending roster. Instead I’ll restrict myself to one single thing, specifically related to blogs:

Ads on blogs. They suck. I find them disrespectful; like going into somebody’s house for a nice cup of tea only to have them try to flog you a nice set of encyclopedias.

Just to be clear: ads on commercial sites (magazines, resources, whatever) I understand. But on a personal site, they bring down the tone far more than any use of typography, colour or layout could ever offset.

I used to wonder why people put those “Digg this” or “Delicious this” links on their blog posts. I couldn’t see the point. But combined with Google ads, I guess they make sense. They’re a way of driving traffic, eyeballs, click-through and by extension, filthy lucre. That’s fine… as long as you don’t mind being a whore.

Remember the term “Cam whore”?

A Cam whore is a term for people who expose themselves on the Internet with webcam software in exchange for goods, usually via enticing viewers to purchase items on their wishlists or add to their online accounts.

I think it’s high time we coined the term “Blog whore” to describe people who slap Google ads all over a medium intended for personal expression.

Alas, most of my friends, colleagues and co-workers are Blog whores. Scrivs manages to be Blog whore, Digg whore and pimp all at the same time with his 9 Rules bitches. In his recent round-up of blog designs, he says of Shaun’s site:

In a perfect world there are no ads, but we don’t live in that kind of world yet for the time being we can escape to the land of make believe when visiting Inman’s site.

Well, I see no reason why we can’t all live in that perfect world. In the style of Robert’s ludicrously provocative hyperbole, I hereby declare that a blog with ads isn’t really a blog. So there.

Ah, that’s better. There’s nothing like a good rant to counteract all that civilised discourse.

Happy holidays, Blog whores!