Link tags: computers

Your brain does not process information and it is not a computer | Aeon Essays

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

A holy communion | daverupert.com

You and I are partaking in something magical.

AI isn’t the app, it’s the UI - Stack Overflow Blog

In some ways, the fervor around AI is reminiscent of blockchain hype, which has steadily cooled since its 2021 peak. In almost all cases, blockchain technology serves no purpose but to make software slower, more difficult to fix, and a bigger target for scammers. AI isn’t nearly as frivolous—it has several novel use cases—but many are rightly wary of the resemblance. And there are concerns to be had; AI bears the deceptive appearance of a free lunch and, predictably, has non-obvious downsides that some founders and VCs will insist on learning the hard way.

This is a good level-headed overview of how generative language model tools work.

If something can be reduced to patterns, however elaborate they may be, AI can probably mimic it. That’s what AI does. That’s the whole story.

There’s very practical advice on deciding where and when these tools make sense:

The sweet spot for AI is a context where its choices are limited, transparent, and safe. We should be giving it an API, not an output box.
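To make that idea concrete, here's a minimal sketch in TypeScript of "an API, not an output box": the model can only choose from an enumerated set of actions, and anything outside that menu is discarded before it reaches the rest of the application. Every name here (`Action`, `callModel`, `suggestAction`) is hypothetical, invented for illustration rather than taken from the article.

```typescript
// A sketch of "an API, not an output box": the model can only pick
// from a fixed menu of actions; anything else is rejected.
// Every name here (Action, callModel, suggestAction) is hypothetical.

type Action =
  | { kind: "summarize"; maxWords: number }
  | { kind: "translate"; targetLang: "fr" | "de" | "es" }
  | { kind: "reject"; reason: string };

const ALLOWED_KINDS = new Set(["summarize", "translate", "reject"]);

// Stand-in for a real language-model call; it returns a canned
// response so the sketch runs on its own.
async function callModel(prompt: string): Promise<string> {
  return JSON.stringify({ kind: "summarize", maxWords: 50 });
}

async function suggestAction(userInput: string): Promise<Action> {
  const raw = await callModel(
    `Choose ONE action as JSON for this request: ${userInput}`
  );
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { kind: "reject", reason: "output was not valid JSON" };
  }
  const kind = (parsed as { kind?: string }).kind;
  // Limited, transparent, safe: unknown actions never reach the app.
  if (!kind || !ALLOWED_KINDS.has(kind)) {
    return { kind: "reject", reason: "action outside the allowed API" };
  }
  return parsed as Action;
}

suggestAction("Condense this article").then((action) => console.log(action));
```

The particular actions don't matter; the point is that the model's output is validated against a closed set before anything executes, which is what keeps its choices limited, transparent and safe.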

The Technium: Dreams are the Default for Intelligence

I feel like there’s a connection here between what Kevin Kelly is describing and what I wrote about guessing (though I think he might be conflating consciousness with intelligence).

This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.

Automate Mindfully | Jorge Arango

But a machine for writing isn’t the same as a machine that writes for you. A machine for viewing photos isn’t the same thing as a machine that travels in your stead. A machine for sketching isn’t the same thing as a machine that designs. I love doing these things and doing them more efficiently. But I have no desire to have them done for me. It’s a key distinction: Do not automate the work you are engaged in, only the materials.

Let’s Not Dumb Down the History of Computer Science | Opinion | Communications of the ACM

I don’t think I agree with Don Knuth’s argument here from a 2014 lecture, but I do like how he sets out his table:

Why do I, as a scientist, get so much out of reading the history of science? Let me count the ways:

  1. To understand the process of discovery—not so much what was discovered, but how it was discovered.
  2. To understand the process of failure.
  3. To celebrate the contributions of many cultures.
  4. Telling historical stories is the best way to teach.
  5. To learn how to cope with life.
  6. To become more familiar with the world, and to know how science fits into the overall history of mankind.

Top Secret Rosies

I need to seek out this documentary, Top Secret Rosies: The Female Computers of World War II.

It would pair nicely with another film, The Eniac Programmers Project.

Hyperland, Intermedia, and the Web That Never Was — Are.na

In 1990, the science fiction writer Douglas Adams produced a “fantasy documentary” for the BBC called Hyperland. It’s a magnificent paleo-futuristic artifact, rich in sideways predictions about the technologies of tomorrow.

I remember coming across a repeating loop of this documentary playing in a dusty corner of a Smithsonian museum in Washington DC. Douglas Adams wasn’t credited but I recognised his voice.

Hyperland aired on the BBC a full year before the World Wide Web. It is a prophecy waylaid in time: the technology it predicts is not the Web. It’s what William Gibson might call a “stub,” evidence of a dead node in the timeline, a three-point turn where history took a pause and backed out before heading elsewhere.

Here, Claire L. Evans uses Adams’s documentary as an opening to dive into the history of hypertext, starting with Bush’s Memex, Nelson’s Xanadu and Engelbart’s oNLine System. But then she describes some lesser-known hypertext systems:

In 1985, the students at Brown who encountered Intermedia had never seen anything like it before in their lives. The system laid a world of information at their fingertips, saved them hours at the library, and helped them work through tangles of thought.

Beyond Smart Rocks

Claire L. Evans on computational slime molds and other forms of unconventional computing that look beyond silicon:

In moments of technological frustration, it helps to remember that a computer is basically a rock. That is its fundamental witchcraft, or ours: for all its processing power, the device that runs your life is just a complex arrangement of minerals animated by electricity and language. Smart rocks.

Living in Alan Turing’s Future | The New Yorker

Portrait of the genius as a young man.

It is fortifying to remember that the very idea of artificial intelligence was conceived by one of the more unquantifiably original minds of the twentieth century. It is hard to imagine a computer being able to do what Alan Turing did.

Chaos Design: Before the robots take our jobs, can we please get them to help us do some good work?

This is a great piece! It starts with a look back at some of the great minds of the nineteenth century: Herschel, Darwin, Babbage and Lovelace. Then it brings us, via JCR Licklider, to the present state of the web before looking ahead to what the future might bring.

So what will the life of an interface designer be like in the year 2120? or 2121 even? A nice round 300 years after Babbage first had the idea of calculations being executed by steam.

I think there are some missteps along the way (I certainly don’t think that inline styles—AKA CSS in JS—are necessarily a move forwards) but I love the idea of applying chaos engineering to web design:

Think of every characteristic of an interface you depend on to not ‘fail’ for your design to ‘work.’ Now imagine if these services were randomly ‘failing’ constantly during your design process. How might we design differently? How would our workflows and priorities change?
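As a sketch of how that thought experiment might be automated, here's a hypothetical development-only script in TypeScript that randomly "fails" the resources a design quietly depends on: stylesheets, images and web fonts. The 30 per cent failure rate and the choice of targets are assumptions for illustration, not anything from the original talk.

```typescript
// Development-only chaos: on each page load, randomly "fail" some of
// the resources a design quietly depends on. The 30% probability and
// the choice of targets are arbitrary; this is a sketch, not a method.

const FAILURE_RATE = 0.3;

function maybeFail(): boolean {
  return Math.random() < FAILURE_RATE;
}

function unleashChaos(): void {
  // Simulate a stylesheet that never arrived.
  document
    .querySelectorAll<HTMLLinkElement>('link[rel="stylesheet"]')
    .forEach((link) => {
      if (maybeFail()) link.disabled = true;
    });

  // Simulate broken images.
  document.querySelectorAll<HTMLImageElement>("img").forEach((img) => {
    if (maybeFail()) img.removeAttribute("src");
  });

  // Simulate a web font that failed to load.
  if (maybeFail()) {
    document.documentElement.style.fontFamily = "serif";
  }
}

// Only ever run this against a local development build.
if (location.hostname === "localhost") {
  window.addEventListener("DOMContentLoaded", unleashChaos);
}
```

Reload a few times during development and every design decision gets tested against missing CSS, broken images and fallback fonts.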

Norbert Wiener’s Human Use of Human Beings is more relevant than ever.

What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.

1969 & 70 - Bell Labs

Pictures of computers (of the human and machine varieties).

Two-Bit History

An ongoing timeline of computer technology in the form of blog posts by Sinclair Target (that’s a person, not a timeslipping transatlantic company merger).

Ways to think about machine learning — Benedict Evans

This strikes me as a sensible way of thinking about machine learning: it’s like when we got relational databases—suddenly we could do more, quicker, and easier …but it doesn’t require us to treat the technology like it’s magic.

An important parallel here is that though relational databases had economy of scale effects, there were limited network or ‘winner takes all’ effects. The database being used by company A doesn’t get better if company B buys the same database software from the same vendor: Safeway’s database doesn’t get better if Caterpillar buys the same one. Much the same actually applies to machine learning: machine learning is all about data, but data is highly specific to particular applications. More handwriting data will make a handwriting recognizer better, and more gas turbine data will make a system that predicts failures in gas turbines better, but the one doesn’t help with the other. Data isn’t fungible.

Manual Aspire

If only all documentation were as great as this old manual for the ZX Spectrum that Remy uncovered:

The manual is an instruction book on how to program the Spectrum. It’s a full book, with detailed directions and information on how the machine works, how the programming language works, includes human readable sentences explaining logic and even goes so far as touching on what hex values perform which assembly functions.

When we talk about things being “inspiring”, it’s rarely in regards to computer manuals. But, damn, if this isn’t inspiring!

This book stirs a passion inside of me that tells me that I can make something new from an existing thing. It reminds me of the 80s Lego boxes: unlike today’s Lego, the back of a Lego box would include pictures of creations that you could make with your Lego set. It didn’t include any instructions to do so, but it always made me think to myself: “I can make something more with these bricks”.

the bullet hole misconception

The transcript of a terrific talk on the humane use of technology.

Instead of using technology to replace people, we should use it to augment ourselves to do things that were previously impossible, to help us make our lives better. That is the sweet spot of our technology. We have to accept human behaviour the way it is, not the way we would wish it to be.

The world is not a desktop

This 1993 article by Mark Weiser is relevant to our world today.

Take intelligent agents. The idea, as near as I can tell, is that the ideal computer should be like a human being, only more obedient. Anything so insidiously appealing should immediately give pause. Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstanding, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer I need to talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the center of attention.

The Coming Software Apocalypse - The Atlantic

The title is pure clickbait, and the moral panic early in this article repeats the Toyota myth, but then it settles down into a fascinating examination of abstractions in programming. On the one hand, there’s the problem of not enough abstraction: having to write in code is such a computer-centric way of building things. On the other hand, our world is filled with dangerously abstracted systems:

When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.

So that’s a big problem.

Bret Victor, John Resig and Margaret Hamilton are featured. Doug Engelbart and J.C.R. Licklider aren’t mentioned but their spirits loom large.