Journal tags: lint

A minority report on artificial intelligence

Want to feel old? Steven Spielberg’s Minority Report was released fifteen years ago.

It casts a long shadow. For a decade after the film’s release, it was referenced at least once at every conference relating to human-computer interaction. Unsurprisingly, most of the focus has been on the technology in the film. The hardware and interfaces in Minority Report came out of a think tank assembled in pre-production. It provided plenty of fodder for technologists to mock and praise in subsequent years: gestural interfaces, autonomous cars, miniature drones, airpods, ubiquitous advertising and surveillance.

At the time of the film’s release, a lot of the discussion centred on picking apart the plot. Those discussions had the same tone as arguments about time-travel paradoxes, the kind thrown up by films like Looper and Interstellar. But Minority Report isn’t a film about time travel, it’s a film about prediction.

Or rather, the plot is about prediction. The film—like so many great works of cinema—is about seeing. It’s packed with images of eyes, visions, fragments, and reflections.

The theme of prediction was rarely referenced by technologists in the subsequent years. After all, that aspect of the story—as opposed to the gadgets, gizmos, and interfaces—was one rooted in a fantastical conceit; the idea of people with precognitive abilities.

But if you replace that human element with machines, the central conceit starts to look all too plausible. It’s suggested right there in the film:

It helps not to think of them as human.

To which the response is:

No, they’re so much more than that.

Suppose that Agatha, Arthur, and Dashiell weren’t people in a flotation tank, but banks of servers packed with neural nets: the kinds of machines that are already making predictions about stocks and shares, traffic flows, mortgage applications …and, yes, crime.

Precogs are pattern recognition filters, that’s all.

Rewatching Minority Report now, it holds up very well indeed. Apart from the misstep of the final ten minutes, it’s a fast-paced twisty noir thriller. For all the attention to detail in its world-building and technology, the idea that may yet prove to be most prescient is the concept of Precrime, introduced in the original Philip K. Dick short story, The Minority Report.

Minority Report works today as a commentary on Artificial Intelligence …which is ironic given that Spielberg directed a film one year earlier ostensibly about A.I. In truth, that film has little to say about technology …but much to say about humanity.

Like Minority Report, A.I. was very loosely based on an existing short story: Super-Toys Last All Summer Long by Brian Aldiss. It’s a perfectly-crafted short story that is deeply, almost unbearably, sad.

When I had the great privilege of interviewing Brian Aldiss, I tried to convey how much the story affected me.

Jeremy: …the short story is so sad, there’s such an incredible sadness to it that…

Brian: Well it’s psychological, that’s why. But I didn’t think it works as a movie; sadly, I have to say.

At the time of its release, the general consensus was that A.I. was a mess. It’s true. The film is a mess, but I think that, like Minority Report, it’s worth revisiting.

Watching now, A.I. feels like a horror film to me. The horror comes not—as we first suspect—from the artificial intelligence. The horror comes from the humans. I don’t mean the cruelty of the flesh fairs. I’m talking about the cruelty of Monica, who activates David’s unconditional love only to reject it (watching now, both scenes—the activation and the rejection—are equally horrific). Then there’s the cruelty of the people who created an artificial person capable of deep, never-ending love, without considering the implications.

There is no robot uprising in the film. The machines want only to fulfil their purpose. But by the end of the film, the human race is gone and the descendants of the machines remain. Based on the conduct of humanity that we’re shown, it’s hard to mourn our species’ extinction. For a film that was panned for being overly sentimental, it is a thoroughly bleak assessment of what makes us human.

The question of what makes us human underpins A.I., Minority Report, and the short stories that spawned them. With distance, it gets easier to brush aside the technological trappings and see the bigger questions beneath. As Al Robertson writes, it’s about leaving the future behind:

SF’s most enduring works don’t live on because they accurately predict tomorrow. In fact, technologically speaking they’re very often wrong about it. They stay readable because they think about what change does to people and how we cope with it.

BEMphasis

I’m working on a project with a team of developers who are trying out the BEM syntax for their class names. I’ve tried BEM before, but I’m not a huge fan of underscores (for no particularly good reason) so I tend to use a modified version that avoids those characters. Still, when it comes to coding style—tabs vs. spaces, camelCasing, underscores, hyphens, or whatever—my personal opinion takes a back seat to the group consensus. And on this project, the group has opted for proper BEM all the way, and I’m more than happy to go along with that.

When it comes to naming a modified version of a component in BEM, the syntax looks like this:

component--modifier

That raises a question about how you then deploy that class name in your HTML. You could just use the modified name:

<element class="component--modifier">

But then in your CSS you’d have to repeat all the style rules of the .component selector inside your rule block for the .component--modifier selector. Sass could help you out here, especially with its @extend functionality, but the final CSS is still going to contain duplicated rules.

The alternative is to keep your CSS lean and modular, and write your HTML like this:

<element class="component component--modifier">

Now you’ve taken the duplication out of your CSS and put it into your markup. It looks a little weird. But, on balance, it’s probably the lesser of two evils.

It strikes me that this pattern of always having the base component class name appear anywhere you have a component--modifier class name is something that you could programmatically check for. It should be relatively straightforward to write a lint tool that looks in the value of every class attribute and, if it finds any instances of foo--bar, checks to make sure that foo is also in there.

Sounds like it could be a nice little task for Grunt or Gulp. Maybe somebody has already made it.
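
If nobody has, the core check itself is tiny. Here’s a rough sketch in TypeScript; the function name is made up, and wiring it into Grunt or Gulp, or extracting the class attributes from the HTML in the first place, is left as an exercise:

// Given the value of a class attribute, return any BEM modifier
// classes (foo--bar) whose base class (foo) is missing.
function findMissingBases(classValue: string): string[] {
  const classes = classValue.trim().split(/\s+/);
  return classes
    .filter((name) => name.includes('--'))
    .filter((modifier) => !classes.includes(modifier.split('--')[0]));
}

// Flags the lone modifier, but not the correctly paired one.
findMissingBases('component--modifier');           // ["component--modifier"]
findMissingBases('component component--modifier'); // []

A proper tool would parse the HTML to pull out every class attribute rather than relying on regular expressions, but even a crude pass over the markup would catch the pattern.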

Mind you, it seems that most lint tools out there are focused very much on enforcing a coding style for CSS and JavaScript—not so much for HTML. I worry that this reflects the mindset of many front-end developers who view CSS and JavaScript as more important than markup …which is a bit odd considering that CSS and JavaScript are subservient to the HTML document that they’re styling and scripting.

HTML5 watch

Keeping up with HTML5 can seem like a full-time job if you’re subscribed to both the W3C public-html list and the WHATWG mailing list.

If you have to choose just one, the WHATWG list is definitely the red pill. The W3C list has a very high volume of traffic, mostly about politics and procedure. Sam Ruby deserves a medal for keeping the whole thing on an even keel.

The WHATWG list, on the other hand, can get pretty nitty-gritty in its discussions of Web Workers, Offline Storage and other technologies that are completely over my head.

The specification itself is shaping up nicely. My list of bugbears is getting shorter and shorter:

  1. I’m still not convinced that the article element is necessary, given that it is almost indistinguishable from section. Having two very similar elements is potentially very confusing for authors. It’s hard enough deciding between a section and a div.
  2. The time element is still unnecessarily restrictive. I don’t just mean that it’s restrictive in the sense that you can’t mark up a month; the very definition is too narrow. I hoisted the HTML5 spec by its own petard recently, pointing out that a different portion of the spec violates the definition of time.
  3. The cite element is also too restrictively defined, and in a backwards-incompatible way to boot. I’ve written more about that over on 24 Ways.

There are much bigger issues than these still outstanding—mostly related to the accessibility of audio, video and canvas—but I’ll leave it to smarter people than me to tackle those. My issues all revolve around semantics and, let’s face it, they’re kind of piddling little problems in the grand scheme of things.

On the whole, I’d say the spec is looking mighty fine. Most of it is ready for use today.

I think the next big challenge for HTML5 lies with the tools. It’s great that we’ve got a validator, but what we really need is a lint tool—something like JSLint but for checking markup writing style: case sensitivity of tags, quotes around attributes, that kind of thing. Robert Nyman concurs.

Let me be clear: I’m not talking about a validator that checks for polyglot documents, i.e. HTML that can also be parsed as XML. I’m talking purely about writing style and personal preference; a tool that will help enforce an in-house style guide of arbitrary “best” practices.

I’ve impressed this upon Henri in IRC on a few occasions. He has explained to me that it’s not so easy to build a true syntax checker …and no, you can’t just use regular expressions.

Still, I think that there would be enormous value in having even an imperfect tool to help authors who want to write HTML5 right now but also want to enforce a strict syntax on themselves. A working rough’n’ready lint tool that catches 80% of the most common gotchas is better than a theoretical perfect tool that will work 100% of the time but that currently works 0% of the time because it doesn’t exist yet.
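
To make that concrete, here’s the sort of imperfect check I have in mind, sketched in TypeScript. It scans the markup with regular expressions rather than parsing it properly (exactly the shortcut Henri warns against), so it will miss plenty of edge cases, and the two rules here are just my own illustrative examples:

// A deliberately rough markup-style checker. The rules are
// illustrative: uppercase tag names and unquoted attribute values.
const rules = [
  {
    name: 'lowercase-tags',
    pattern: /<\/?[a-z]*[A-Z][a-zA-Z0-9]*/g,
    message: 'Tag names should be lowercase',
  },
  {
    name: 'quoted-attributes',
    pattern: /\s[a-zA-Z-]+=[^"'\s>][^\s>]*/g,
    message: 'Attribute values should be quoted',
  },
];

function lintMarkup(html: string): string[] {
  const warnings: string[] = [];
  for (const rule of rules) {
    for (const match of html.match(rule.pattern) ?? []) {
      warnings.push(`${rule.name}: ${rule.message} (${match.trim()})`);
    }
  }
  return warnings;
}

// Both rules fire on this snippet.
lintMarkup('<DIV class=foo>hello</DIV>');

Wrap that in a few lines of Node to read files from the command line and you’d have something rough but useful today.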

Anybody want to step up to the challenge?