Journal tags: trust

Filters

My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and Mozilla Developer Network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at some pictures on an AirBnB listing recently and found herself examining some photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.

Every email that lands in my inbox is like a little mini Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloging blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.

But we can always walk away.

Trust

In their rush to cram in “AI” “features”, it seems to me that many companies don’t actually understand why people use their products.

Google is acting as though its greatest asset is its search engine. Same with Bing.

Mozilla Developer Network is acting as though its greatest asset is its documentation. Same with Stack Overflow.

But their greatest asset is actually trust.

If I use a search engine I need to be able to trust that the filtering is good. If I look up documentation I need to trust that the information is good. I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”

“But”, the apologists will respond, “the results are mostly correct! The documentation is mostly true!”

Sure, but as Terence puts it:

The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as “that klutzy intern we had to fire.”

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

I am honestly astonished that so many companies don’t seem to realise what they’re destroying.

border:none 2023

In 2013, I spoke at the border:none event in Nuremberg. I gave a talk called The Power of Simplicity.

It was a great little event. Most of the talks were, like mine, on technical topics: design, development, the usual conference material.

This year Joschi and Marc decided to have another border:none event ten years on from the first one. They invited back all the original speakers, as well as some new folks. They kept the ticket price the same as ten years ago—just thirty euros.

For us speakers from the previous event, the only brief they gave us was to consider what’s happened in the past decade. I played it pretty safe and talked about the web. I’ll post a transcript of my talk soon.

Some of the other speakers were far more ambitious. They spoke about themselves, the world, the meaning of life …my presentation was very tame in comparison.

I really, really admire the honesty and vulnerability that those people displayed. Tobias Baldauf in particular took my breath away. He delivered an intensely personal talk on generational trauma that was meticulously researched and took incredible bravery to deliver. It was worth going to Nuremberg just for the privilege of being present for that talk.

Other talks were refreshingly tech-free. There was a talk on cold-water swimming. There was a talk on paragliding. And I don’t mean they were saying “what designers can learn from cold-water swimming” or “how I became a better developer through paragliding.” The talks were literally about swimming and paragliding.

There was a great variety of speakers this time around, including ages ranging from puberty to menopause (quite literally—that was the topic of one of the talks). I had the great pleasure of providing some coaching before the event to fifteen-year-old Maya, who was delivering her first talk in English. She did a fantastic job! And the talk she gave—about how teachers in her school aren’t always trusting of the technology they provide to students—was directly relevant to what we’re seeing in the world of work. Give people autonomy, agency and trust.

There was a lot of trust at border:none. Everyone who bought a ticket did so on trust—they had no idea what to expect. Likewise, Marc and Joschi put their trust in the speakers. They gave the speakers the freedom to talk about whatever they wanted. That trust was repaid.

Florian took some superb pictures of the event. Matthias wrote up his experience. So did Tom. Vasilis shared the gist of his excellent talk.

At the end of the event there was some joking about returning in 2033. I love the idea of a conference that happens once every ten years. Count me in!

Control

In two of my recent talks—In And Out Of Style and Design Principles For The Web—I finish by looking at three different components:

  1. a button,
  2. a dropdown, and
  3. a datepicker.

In each case you could use native HTML elements:

  1. button,
  2. select, and
  3. input type="date".

Or you could use divs with a whole bunch of JavaScript and ARIA.

In the case of a datepicker, I totally understand why you’d go for writing your own JavaScript and ARIA. The native HTML element is quite restricted, especially when it comes to styling.

In the case of a dropdown, it’s less clear-cut. Personally, I’d use a select element. While it’s currently impossible to style the open state of a select element, you can style the closed state with relative ease. That’s good enough for me.
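
Something along these lines is usually all it takes for the closed state—a minimal sketch, where the class name and the arrow graphic are just placeholders for illustration:

```html
<style>
  .fancy-select {
    font: inherit;
    padding: 0.5em 2.5em 0.5em 1em;
    border: 2px solid currentColor;
    border-radius: 0.5em;
    /* Remove the default arrow and supply your own */
    appearance: none;
    background: white url("arrow.svg") no-repeat right 0.75em center;
  }
</style>

<select class="fancy-select">
  <option>Reel</option>
  <option>Jig</option>
  <option>Hornpipe</option>
</select>
```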

Still, I can understand why that wouldn’t be good enough for some cases. If pixel-perfect consistency across platforms is a priority, then you’re going to have to break out the JavaScript and ARIA.

Personally, I think chasing pixel-perfect consistency across platforms isn’t even desirable, but I get it. I too would like to have more control over styling select elements. That’s one of the reasons why the work being done by the Open UI group is so important.

But there’s one more component: a button.

Again, you could use the native button element, or you could use a div or a span and add your own JavaScript and ARIA.

Now, in this case, I must admit that I just don’t get it. Why wouldn’t you just use the native button element? It has no styling issues and the browser gives you all the interactivity and accessibility out of the box.
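
Here’s roughly what the comparison looks like—`save()` is a stand-in for whatever the button is actually supposed to do:

```html
<!-- The native element: focusable, keyboard-operable, and announced
     as a button by assistive technology, all for free -->
<button type="button" onclick="save()">Save</button>

<!-- The DIY version: every one of those behaviours has to be bolted
     back on by hand, and it's easy to miss one -->
<div class="button" role="button" tabindex="0"
     onclick="save()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') {
       event.preventDefault();
       save();
     }">
  Save
</div>
```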

I’ve been trying to understand the mindset of a developer who wouldn’t use a native button element. The easy answer would be to dismiss them as bad people. But that would probably be lazy and inaccurate. Nobody sets out to make a website with poor performance or poor accessibility. And yet, by choosing not to use the native HTML element, that’s what’s likely to happen.

I think I might have finally figured out what might be going on in the mind of such a developer. I think the issue is one of control.

When I hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, I think “Great! That’s less work for me. I can just let the browser deal with it.” In other words, I relinquish control to the browser (though not entirely—I still want the styling to be under my control as much as possible).

But I now understand that someone else might hear that there’s a native HTML element—like button or select—that comes with built-in behaviours around interaction and accessibility, and think “Uh-oh! What if there are unexpected side-effects of these built-in behaviours that might bite me on the ass?” In other words, they don’t trust the browsers enough to relinquish control.

I get it. I don’t agree. But I get it.

If your background is in computer science, then the ability to precisely predict how a program will behave is a virtue. Any potential side-effects that aren’t within your control are undesirable. The only way to ensure that an interface will behave exactly as you want is to write it entirely from scratch, even if that means using more JavaScript and ARIA than is necessary.

But I don’t think it’s a great mindset for the web. The web is filled with uncertainties—browsers, devices, networks. You can’t possibly account for all of the possible variations. On the web, you have to relinquish some control.

Still, I’m glad that I now have a bit more insight into why someone would choose to attempt to retain control by using div, JavaScript and ARIA. It’s not what I would do, but I think I understand the motivation a bit better now.

Suspicion

I’ve already had some thoughtful responses to yesterday’s post about trust. I wrapped up my thoughts with a request:

I would love it if someone could explain why they avoid native browser features but use third-party code.

Chris obliged:

I can’t speak for the industry, but I have a guess. Third-party code (like the referenced Bootstrap and React) have a history of smoothing over significant cross-browser issues and providing better-than-browser ergonomic APIs. jQuery was created to smooth over cross-browser JavaScript problems. That’s trust.

Very true! jQuery is the canonical example of a library smoothing over the bumpy landscape of browser compatibilities. But jQuery is also the canonical example of a library we no longer need because the browsers have caught up …and those browsers support standards directly influenced by jQuery. That’s a library success story!
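
To pick one small example of that influence (the class names here are invented): what once needed jQuery’s `$()` and `addClass()` is now a couple of native DOM calls that every modern browser supports:

```html
<script>
  // Then: jQuery papering over the cracks
  // $('.warning').addClass('is-visible');

  // Now: the equivalent native APIs, standardised in the DOM
  document.querySelectorAll('.warning').forEach(function (element) {
    element.classList.add('is-visible');
  });
</script>
```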

Charles Harries takes on my question in his post Libraries over browser features:

I think this perspective of trust has been hammered into developers over the past maybe like 5 years of JavaScript development based almost exclusively on inequality of browser feature support. Things are looking good in 2022; but as recently as 2019, 4 of the 5 top web developer needs had to do with browser compatibility.

Browser compatibility is one of the underlying promises that libraries—especially the big ones that Jeremy references, like React and Bootstrap—make to developers.

So again, it’s browser incompatibilities that made libraries attractive.

Jim Nielsen responds with the same message in his post Trusting Browsers:

We distrust the browser because we’ve been trained to. Years of fighting browser deficiencies where libraries filled the gaps. Browser enemy; library friend.

For example: jQuery did wonders to normalize working across browsers. Write code once, run it in any browser — confidently.

Three for three. My question has been answered: people gravitated towards libraries because browsers had inconsistent implementations.

I’m deliberately using the past tense there. I think Jim is onto something when he says that we’ve been trained not to trust browsers to have parity when it comes to supporting standards. But that has changed.

Charles again:

This approach isn’t a sustainable practice, and I’m trying to do as little of it as I can. Jeremy is right to be suspicious of third-party code. Cross-browser compatibility has gotten a lot better, and campaigns like Interop 2022 are doing a lot to reduce the burden. It’s getting better, but the exasperated I-just-want-it-to-work mindset is tough to uninstall.

I agree. Inertia is a powerful force. No matter how good cross-browser compatibility gets, it’s going to take a long time for developers to shed their suspicion.

Jim is a glass-half-full kind of guy:

I’m optimistic that trust in browser-native features and APIs is being restored.

He also points to a very sensible mindset when it comes to third-party libraries and frameworks:

In this sense, third-party code and abstractions can be wonderful polyfills for the web platform. The idea being that the default posture should be: leverage as much of the web platform as possible, then where there are gaps to creating great user experiences, fill them in with exploratory library or framework features (features which, conceivably, could one day become native in browsers).

Yes! A kind of progressive enhancement approach to using third-party code makes a lot of sense. I’ve always maintained that you should treat libraries and frameworks like cattle, not pets. Don’t get too attached. If the library is solving a genuine need, it will be replaced by stable web standards in browsers (again, see jQuery).

I think that third-party libraries and frameworks work best as polyfills. But the whole point of polyfills is that you only use them when the browsers don’t supply features natively (and you also go back and remove the polyfill later when browsers do support the feature). But that’s not how people are using libraries and frameworks today. Developers are reaching for them by default instead of treating them as a last resort.
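
In practice, the polyfill posture can be as simple as a feature check before loading anything third-party at all—a sketch, with a made-up script path:

```html
<script>
  // Only reach for the third-party code if the browser
  // lacks the native feature
  if (!('IntersectionObserver' in window)) {
    const script = document.createElement('script');
    script.src = '/js/intersection-observer-polyfill.js'; // hypothetical path
    document.head.append(script);
  }
</script>
```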

I like Jim’s proposed design principle:

Where available, default to browser-native features over third party code, abstractions, or idioms.

(P.S. It’s kind of lovely to see this kind of thoughtful blog-to-blog conversation happening. Right at a time when Twitter is about to go down the tubes, this is a demonstration of an actual public square with more nuanced discussion. Make your own website and join the conversation!)

Trust

I’ve noticed a strange mindset amongst front-end/full-stack developers. At least it seems strange to me. But maybe I’m the one with the strange mindset and everyone else knows something I don’t.

It’s to do with trust and suspicion.

I’ve made no secret of the fact that I’m suspicious of third-party code and dependencies in general. Every dependency you add to a project is one more potential single point of failure. You have to trust that the strangers who wrote that code knew what they were doing. I’m still somewhat flabbergasted that developers regularly add dependencies—via npm or yarn or whatever—that then pull in even more dependencies, all while assuming good faith and competence on the part of every person involved.

It’s a touching expression of faith in your fellow humans, but I’m not keen on the idea of faith-based development.

I’m much more trusting of native browser features—HTML elements, CSS features, and JavaScript APIs. They’re not always perfect, but a lot of thought goes into their development. By the time they land in browsers, a whole lot of smart people have kicked the tyres and considered many different angles. As a bonus, I don’t need to install them. Even better, end users don’t need to install them.

And yet, the mindset I’ve noticed is that many developers are suspicious of browser features but trusting of third-party libraries.

When I write and talk about using service workers, I often come across scepticism from developers about writing the service worker code. “Is there a library I can use?” they ask. “Well, yes,” I reply, “but then you’ve got to understand the library, and the time it takes you to do that could be spent understanding the native code.” So even though a library might not offer any new functionality—just a different idiom—many developers are more likely to trust the third-party library than they are to trust the underlying code that the third-party library is abstracting!
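
For what it’s worth, the native code really isn’t much to understand. Registering a service worker takes a couple of lines—the file name here is just an example:

```html
<script>
  // Feature-detect, then register—the caching logic lives in the
  // service worker file itself ("/serviceworker.js" is an example name)
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/serviceworker.js');
  }
</script>
```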

Developers are more likely to trust, say, Bootstrap than they are to trust CSS grid or custom properties. Developers are more likely to trust React than they are to trust web components.

On the one hand, I get it. Bootstrap and React are very popular. That popularity speaks volumes. If lots of people use a technology, it must be a safe bet, right?

But if we’re talking about popularity, every single browser today ships with support for features like grid, custom properties, service workers and web components. No third-party framework can even come close to that install base.

And the fact that these technologies have shipped in stable browsers means they’re vetted. They’ve been through a rigorous testing phase. They’ve effectively got a seal of approval from each individual browser maker. To me, that seems like a much bigger signal of trustworthiness than the popularity of a third-party library or framework.

So I’m kind of confused by this prevalent mindset of trusting third-party code more than built-in browser features.

Is it because of the job market? When recruiters are looking for developers, their laundry list is usually third-party technologies: React, Vue, Bootstrap, etc. It’s rare to find a job ad that lists native browser technologies: flexbox, grid, service workers, web components.

I would love it if someone could explain why they avoid native browser features but use third-party code.

Until then, I shall remain perplexed.

Facebook Container for Firefox

Firefox has a nifty extension—made by Mozilla—called Facebook Container. It does two things.

First of all, it sandboxes any of your activity while you’re on the facebook.com domain. The tab you’re in is isolated from all others.

Secondly, when you visit a site that loads a tracker from Facebook, the extension alerts you to its presence. For example, if a page has a share widget that would post to Facebook, a little fence icon appears over the widget warning you that Facebook will be able to track that activity.

It’s a nifty extension that I’ve been using for quite a while. Except now it’s gone completely haywire. That little fence icon is appearing all over the web wherever there’s a form with an email input. See, for example, the newsletter sign-up form in the footer of the Clearleft site. It’s happening on forms over on The Session too, despite the rigorous-bordering-on-paranoid security restrictions in place there.

Hovering over the fence icon displays this text:

If you use your real email address here, Facebook may be able to track you.

That is, of course, false. It’s also really damaging. One of the worst things that you can do in the security space is to cry wolf. If a concerned user is told that they can ignore that warning, you’re lessening the impact of all warnings, even serious legitimate ones.

Sometimes false positives are an acceptable price to pay for overall increased security, but in this case, the rate of false positives can only decrease trust.

I tried to find out how to submit a bug report about this but I couldn’t work it out (and I certainly don’t want to file a bug report in a review) so I’m writing this in the hopes that somebody at Mozilla sees it.

What’s really worrying is that this might not be considered a bug. The release notes for the version of the extension that came out last week say:

Email fields will now show a prompt, alerting users about how Facebook can track users by their email address.

Like …all email fields? That’s ridiculous!

I thought the issue might’ve been fixed in the latest release that came out yesterday. The release notes say:

This release addresses fixes a issue from our last release – the email field prompt now only displays on sites where Facebook resources have been blocked.

But the behaviour is unfortunately still there, even on sites like The Session or Clearleft that wouldn’t touch Facebook resources with a barge pole. The fence icon continues to pop up all over the web.

I hope this gets sorted soon. I like the Facebook Container extension and I’d like to be able to recommend it to other people. Right now I’d recommend the opposite—don’t install this extension while it’s behaving so overzealously. If the current behaviour continues, I’ll be uninstalling this extension myself.

Update: It looks like a fix is being rolled out. Fingers crossed!

Owning Clearleft

Clearleft turned fifteen this year. We didn’t make a big deal of it. What with The Situation and all, it didn’t seem fitting to be self-congratulatory. Still, any agency that can survive for a decade and a half deserves some recognition.

Cassie marked the anniversary by designing and building a beautiful timeline of Clearleft’s history.

Here’s a post I wrote 15 years ago:

Most of you probably know this already, but I’ve joined forces with Andy and Richard. Collectively, we are known as Clearleft.

I didn’t make too much of a big deal of it back then. I think I was afraid I’d jinx it. I still kind of feel that way. Fifteen years of success? Beginner’s luck.

Despite being one of the three founders, I was never an owner of Clearleft. I let Andy and Rich take the risks and rewards on their shoulders while I took a salary, the same as any other employee.

But now, after fifteen years, I am also an owner of Clearleft.

So is Trys. And Cassie. And Benjamin. And everyone else at Clearleft.

Clearleft is now owned by an employee ownership trust. This isn’t like owning shares in a company—a common Silicon Valley honeypot. This is literally owning the company. Shares are transferable—this isn’t. As long as I’m an employee at Clearleft, I’m a part owner.

On a day-to-day basis, none of this makes much difference. Everyone continues to do great work, the same as before. The difference is in what happens to any profit produced as a result of that work. The owners decide what to do with that profit. The owners are us.

In most companies you’ve got a tension between a board representing the stakeholders and a union representing the workers. In the case of an employee ownership trust, the interests are one and the same. The stakeholders are the workers.

It’ll be fascinating to see how this plays out. Check back again in fifteen years.

Handing back control

An Event Apart Seattle was most excellent. This year, the AEA team are trying something different and making each event three days long. That’s a lot of mindblowing content!

What always fascinates me at events like these is the way that some themes seem to emerge, without any prior collusion between the speakers. This time, I felt that there was a strong thread of giving control directly to users:

Sarah and Margot both touched on this when talking about authenticity in brand messaging.

Margot described this in terms of vulnerability for the brand, but the kind of vulnerability that leads to trust.

Sarah talked about it in terms of respect—respecting the privacy of users, and respecting the way that they want to use your services. Call it compassion, call it empathy, or call it just good business sense, but providing these kind of controls in an interface is an excellent long-term strategy.

In Val’s animation talk, she did a deep dive into prefers-reduced-motion, a media query that deliberately hands control back to the user.
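
The pattern itself is beautifully simple—here’s a sketch, with a made-up animation:

```html
<style>
  .banner {
    animation: slide-in 0.5s ease-out;
  }

  /* If the user has asked their operating system for less motion,
     respect that choice */
  @media (prefers-reduced-motion: reduce) {
    .banner {
      animation: none;
    }
  }
</style>
```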

Even in a CSS-heavy talk like Jen’s, she took the time to explain why starting with meaningful markup is so important—it’s because you can’t control how the user will access your content. They may use tools like reader modes, or Pocket, or have web pages read aloud to them. The user has the final say, and rightly so.

In his CSS talk, Eric reminded us that a style sheet is a list of strong suggestions, not instructions.

Beth’s talk was probably the most explicit on the theme of returning control to users. She drew on examples from beyond the world of the web—from architecture, urban planning, and more—to show that the most successful systems are not imposed from the top down, but involve everyone, especially those most marginalised.

And even in my own talk on service workers, I raved about the design pattern of allowing users to save pages offline to read later. Instead of trying to guess what the user wants, give them the means to take control.

I was really encouraged to see this theme emerge. Mind you, when I look at the reality of most web products, it’s easy to get discouraged. Far from providing their users with controls over their own content, Instagram won’t even let their customers have a chronological feed. And Matt recently wrote about how both Twitter and Quora are heading further and further away from giving control to their users in his piece called Optimizing for outrage.

Still, I came away from An Event Apart Seattle with a renewed determination to do my part in giving people more control over the products and services we design and develop.

I spent the first two days of the conference trying to liveblog as much as I could. I find it really focuses my attention, although it’s also quite knackering. I didn’t do too badly; I managed to cover eleven of the talks (out of the conference’s total of seventeen):

  1. Slow Design for an Anxious World by Jeffrey Zeldman
  2. Designing for Trust in an Uncertain World by Margot Bloomstein
  3. Designing for Personalities by Sarah Parmenter
  4. Generation Style by Eric Meyer
  5. Making Things Better: Redefining the Technical Possibilities of CSS by Rachel Andrew
  6. Designing Intrinsic Layouts by Jen Simmons
  7. How to Think Like a Front-End Developer by Chris Coyier
  8. From Ideation to Iteration: Design Thinking for Work and for Life by Una Kravets
  9. Move Fast and Don’t Break Things by Scott Jehl
  10. Mobile Planet by Luke Wroblewski
  11. Unsolved Problems by Beth Dean

Trust

My debit card is due to expire so my bank has sent me a new card to replace it. I’ve spent most of the day updating my billing details on various online services that I pay for with my card.

I’m sure I’ll forget about one or two. There’s the obvious stuff like Netflix and iTunes, but there are also the many services that I use to help keep my websites running smoothly.

But there’s one company that will not be receiving my new debit card details: Adobe. That’s not because of any high-and-mighty concerns I might have about monopolies on the design software market—their software is, mostly, pretty darn good (‘though I’m not keen on their Mafia-style pricing policy). No, the reason why I won’t give Adobe my financial details is that they have proven that they cannot be trusted:

We also believe the attackers removed from our systems certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders.

The story broke two months ago. Everyone has mostly forgotten about it, like it’s no big deal. It is a big deal. It is a very big deal indeed.

I probably won’t be able to avoid using Adobe products completely; I might have to use some of their software at work. But I’ll be damned if they’re ever getting another penny out of me.

Identity and authority

When Richard talks, I listen. That’s a lesson I learned even before Clearleft existed. Right now Richard is talking about civility online, mentioning the specific example of Digg—something I’ve touched on in the past.

If there’s any truth to the Greater Internet Fuckwad Theory then anonymity online can exacerbate the lack of civility. A key issue here is identity: you’re more likely to be rude or aggressive when posting an anonymous comment on a blog post than when you’re posting to your own blog—a place that’s associated with you and your online identity.

Just to be clear, when I talk about identity here I’m not talking about the issue of consolidating scattered online identities (a job for OpenID and, to a certain extent, microformats). I’m talking about identity as a basis for trust.

In order for an opinion to carry any weight online, the person posting needs to establish trust. A lot of the time this simply involves providing background material: “this is me, here are my photos, here are my bookmarks, etc.”

If you can’t provide a backstory, it becomes very hard to establish trust. Take for example the recent discourse on Flickr when some asshats ripped off Dan’s logo. To begin with, everyone was quite rightly joining the fray in support of Dan—with the exception of the Chief Executive Asshat from the rip-off company. But then some people showed up and started taking the side of the asshat. The other commentators did some quick’n’dirty background checks by simply clicking on the usernames and found empty photo pages. This lack of history pointed pretty strongly to these people simply being sock puppets.

But if your history establishes your identity and consequently your trustworthiness, then how can you instil trust if you’re just showing up to the party? As Kaliya was at pains to point out in her talk at the Web 2.0 Expo:

Trust is not an algorithm.

It’s important to realise that there’s a big difference between trust and authority. Trust is a personal judgement, different for everyone. Authority is a top-down value. There may well be an algorithm for authority—based on past achievements—but on the Web, authority isn’t nearly as important as trust.

Richard’s musings were prompted by an article in The Times that falls victim to the usual trap of mistaking a lack of authority for a lack of merit, citing the usual examples of Wikipedia and political blogs. The argument is based on the idea that someone who is paid to write (encyclopedias, newspapers, whatever) is likely to be more authoritative—and therefore trustworthy—than someone who writes merely because they have a passion for the subject. In my experience, the opposite is true.

Take some recent articles in The Independent.

These articles were written by journalists and so they have authority. Yet they are entirely without merit because the stories are sloppily researched, hastily written and downright untrue. Authority, in this case, does not equate to merit. I am far more likely to trust a blog post by Ian Betteridge debunking the articles precisely because he wasn’t paid to write it.

The word “amateur” has come to mean “unprofessional and sloppy” in common parlance. But it wasn’t always that way. The word can also be used to refer to someone who does something out of passion and enthusiasm.

The problem with those articles in The Independent is not that they are amateurish: the problem is that they are professional.