Why do people focus on AI's failures?


I saw a prominent AI proponent asking why people always focus on the things that AI gets wrong. AI works so well, he asserted, that it is churlish and childish to focus on a few minor mistakes.

Which reminds me of an experience I had a few years ago. I was in a rural pub and got chatting to one of the locals. We were getting on great, so I asked him what his name was. "You know," he said, "I've built houses for everyone in this village, but do they call me John the Builder? No! I repaired everyone's cars when they broke down, but do they call me John the Mechanic? No! I was the one who helped design the new school, but do they call me John the Architect? No!"

He took a sip of beer, looked rueful, and sighed. "But you shag one sheep..."


What else is there to say? The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as "that klutzy intern we had to fire."

Should we forgive and rehabilitate people? Sure, some of them. But if someone has repeatedly failed - sometimes in catastrophic ways - it's OK to discard them.

In my experience with various LLMs, they're getting better at imitating human writing, but show no signs of improving at reasoning. Their accuracy is demonstrably just as poor as it has ever been. Even my Alexa gets things wrong as often as right.

Anyway, I asked ChatGPT what it thought of the joke:

The punchline relies on the juxtaposition between the man's numerous, significant positive contributions to his community and the singular negative action that tarnishes his reputation. It illustrates how a single indiscretion can disproportionately impact how a person is perceived, despite their otherwise commendable actions.

Even a stopped clock is right twice a day.



11 thoughts on “Why do people focus on AI's failures?”

  1. Dragon Cotterill says:

    Is it the AI that's failing, or the human who created the prompt to generate the output from the AI that's failing?

    I think the biggest issue is that at present there isn't enough experience in the AI field to actually give it a satisfactory result. AI images still have issues with fingers and legs, so until that is fully resolved there will always be the issue of whether it's at fault.

    Textual output also can't be relied on without some oversight by a human. I think there are a few lawyers in the US who can attest to this. 🙂

    1. @edent says:

      That's the "you're holding it wrong" argument. Didn't work for iPhones, won't work for AI.

      1. Matt Terenzio says:

        Yes, true. But for a college math test where calculators are allowed, the calculator isn't at fault for the student getting the wrong answer. I use ChatGPT like I use search. It often gives wrong answers to my programming questions, but I recognize them, and I still often end up at the solution quicker than through other means of research. It shouldn't be called AI.

  2. said on infosec.exchange:

    @simon @Edent

    It's right, but it's only talking about the second half of the word.

    Try phrasing the question as, "How many R's total are there in the entire word raspberry?"

    Edit: nope, it isn't enough. It's too hard-coded into its language that 2 R's and raspberry go together. It would take some creative language to break it out of that.

    Edit Edit:
    This is fascinating. I have used AI quite a bit and I have never seen an error like this. You can't even talk it out of the error, which I'm normally able to do. It simply cannot grasp that there are 3 R's in raspberry, even if you talk it through. This is its own class of error that I have never seen before.


What are your reckons?