Notes From “You Are Not a Gadget”

Jaron Lanier’s book You Are Not a Gadget was written in 2010, but its preface is a prescient banger for 2024, the year of our AI overlord:

It's early in the 21st century, and that means that these words will mostly be read by nonpersons...[they] will be minced...within industrial cloud computing facilities...They will be scanned, rehashed, and misrepresented...Ultimately these words will contribute to the fortunes of those few who have been able to position themselves as lords of the computing clouds.

Today he might call the book “You Are Not an Input to Artificial Intelligence”.

Lanier concludes the preface to his book by saying the words in it are intended for people, not computers.

Same for my blog! The words in it are meant for people, not computers. And I would hope that any computerized representation of these words exists solely to help humans find them and read them in context.

Anyhow, here are a few of my notes from the book.

So Long to the Individual Point of View

Authorship—the very idea of the individual point of view—is not a priority of the new technology...Instead of people being treated as the sources of their own creativity, commercial aggregation and abstraction sites present anonymized fragments of creativity…obscuring the true sources.

Again, this was 2010, way before “AI”.

Who cares about sources anymore? The perspective of the individual is obsolete. Everyone is flattened into a global mush. A word smoothie. We care more about the abstractions we can create on top of individual expression than about the individuals and their expressions.

The central mistake of recent digital culture is to chop up a network of individuals so finely that you end up with a mush. You then start to care about the abstraction of the network more than the real people who are networked, even though the network by itself is meaningless. Only people were ever meaningful.

While Lanier was talking about “the hive mind” of social networks as we understood it then, AI has a similar problem: we begin to care more about the training data than the individual humans whose outputs constitute the training data, even though the training data by itself is meaningless. Only people are meaningful.[1] As Lanier says in the book:

The bits don't mean anything without a cultured person to interpret them.

Information is alienated experience.

Emphasizing Artificial or Natural Intelligence

Emphasizing the crowd means deemphasizing individual humans.

I like that.

Here’s a corollary: emphasizing artificial intelligence means de-emphasizing natural intelligence.

Therein lies the tradeoff.

In Web 2.0, we emphasized the crowd over the individual, and people behaved accordingly: like a crowd rather than individuals, like a mob rather than persons. The design encouraged, even solicited, that kind of behavior.

Now, with artificial intelligence enshrined, is it possible we’ll begin to act like it? Hallucinating reality and making baseless claims with complete confidence will become normal, because that’s what the robots we interact with all day do.

What is communicated between people eventually becomes their truth. Relationships take on the troubles of software engineering.

What Even Is “Intelligence”?

Before MIDI, a musical note was a bottomless idea that transcended absolute definition.

But the digitization of music required removing options and possibilities, keeping only what was easiest for the computer to represent and process. We strip away “the unfathomable penumbra of meaning that distinguishes” a musical note in the flesh to make a musical note in the computer.
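To make the flattening concrete, here’s a rough sketch of my own (not from the book, and simplified) of what MIDI actually keeps of a note. A MIDI “note on” message is three small integers, and every continuous frequency gets rounded to one of 128 fixed pitches:

```python
# A rough sketch (mine, not Lanier's) of what MIDI keeps of a musical note.
# A real MIDI "note on" message carries three small integers, and that's it.

import math
from dataclasses import dataclass

@dataclass
class MidiNoteOn:
    channel: int   # 0-15: which of 16 channels
    pitch: int     # 0-127: one of 128 fixed, equal-tempered pitches
    velocity: int  # 0-127: 128 steps of "how hard the key was struck"

def frequency_to_midi_pitch(freq_hz: float) -> int:
    """Quantize a continuous frequency to the nearest MIDI pitch.

    Everything between the 128 allowed pitches (the bends, the slides,
    the "penumbra" of a note played in the flesh) is rounded away.
    """
    # MIDI pitch 69 is A4 at 440 Hz; each step is one equal-tempered semitone.
    pitch = round(69 + 12 * math.log2(freq_hz / 440.0))
    return max(0, min(127, pitch))

# A singer drifting from 440 Hz up to 446 Hz is, to MIDI, the same note twice.
print(frequency_to_midi_pitch(440.0))  # 69
print(frequency_to_midi_pitch(446.0))  # 69

note = MidiNoteOn(channel=0, pitch=frequency_to_midi_pitch(440.0), velocity=100)
print(note)  # MidiNoteOn(channel=0, pitch=69, velocity=100)
```

Three integers are the whole vocabulary. Whatever a note is in the flesh, this is all of it that survives the wire, and that narrowing is Lanier’s point in miniature.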

Why? Because computers require abstractions. And abstractions are just that: models that roughly fit the real thing. Too often, though, we let the abstractions become our reality:

Each layer of digital abstraction, no matter how well it is crafted, contributes some degree of error and obfuscation. No abstraction corresponds to reality perfectly. A lot of such layers become a system unto themselves, one that functions apart from the reality that is obscured far below.

Lanier argues it happened with MIDI and it happened with social networks, where people became rows in a database and began living up to that abstraction.

people are becoming like MIDI notes—overly defined, and restricted in practice to what can be represented in a computer...We have narrowed what we expect from the most commonplace forms of musical sound in order to make the technology adequate.

Perhaps similarly, intelligence (dare I say consciousness) was a bottomless idea that transcended definition. But we soon narrowed it down to fit our abstractions in the computer.

We are happy to enshrine into engineering designs mere hypotheses—and vague ones at that—about the hardest and most profound questions faced by science, as if we already possess perfect knowledge.

So we enshrine the idea of intelligence into our computing paradigm when we don’t even know what it means for ourselves. Are we making computers smarter or ourselves dumber?

You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart.

Prescient.