Journal tags: prediction


Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It’s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our heads.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data …like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.

Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?
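
As an aside, here’s a minimal sketch of what that dial actually does, written in Python with everything invented for the purposes of illustration rather than lifted from any particular model: the model’s raw scores for each possible next word are divided by the temperature before being turned into probabilities, so cranking the dial up flattens the distribution and gives the riskier guesses a fighting chance.

import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Pick the index of the next token from raw model scores.

    A low temperature (below 1) sharpens the distribution towards the
    safest prediction; a high temperature flattens it, letting less
    likely tokens through. Assumes temperature > 0.
    """
    scaled = [score / temperature for score in logits]
    # Softmax over the scaled scores (subtract the max for stability).
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

With made-up scores of [2.0, 1.0, 0.1], a temperature of 0.5 picks the first option the vast majority of the time, while a temperature of 2.0 spreads the choice across all three.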

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
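
To make that content design idea concrete, here’s a toy sketch along the same lines. It assumes the system could somehow surface a per-statement confidence score between 0 and 1 (which, to be clear, is not something these tools currently offer), and the thresholds are plucked out of the air.

def hedge(statement, confidence):
    """Prefix a generated statement with wording that matches how much
    guessing went into it. The confidence score and the thresholds are
    entirely hypothetical; this is purely illustrative.
    """
    lowered = statement[0].lower() + statement[1:]
    if confidence > 0.9:
        return statement
    if confidence > 0.6:
        return "Perhaps " + lowered
    if confidence > 0.3:
        return "Maybe " + lowered
    return "It's possible but unlikely that " + lowered

So hedge("The answer is 42.", 0.4) would come back as "Maybe the answer is 42." The fact hasn’t changed, but the phrasing signals how much guessing went into it.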

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.

Today, the distant future

It’s a bit of a cliché to talk about living in the future. It’s also a bit pointless. After all, any moment after the big bang is a future when viewed from any point in time before it.

Still, it’s kind of fun when a sci-fi date rolls around. Like in 2015 when we reached the time depicted in Back To The Future 2, or in 2019 when we reached the time of Blade Runner.

In 2022 we are living in the future of web standards. Again, technically, we’re always living in the future of any past discussion of web standards, but this year is significant …in a very insignificant way.

It all goes back to 2008 and an interview with Hixie, editor of the HTML5 spec at the WHATWG at the time. In it, he mentioned the date 2022 as the milestone for having two completely interoperable implementations.

The far more important—and ambitious—date was 2012, when HTML5 was supposed to become a Candidate Recommendation, which is standards-speak for done’n’dusted.

But the mere mention of the year 2022 back in the year 2008 was too much for some people. Jeff Croft, for example, completely lost his shit (Jeff had a habit of posting angry rants and then denying that he was angry or ranty, but merely having a bit of fun).

The whole thing was a big misunderstanding and soon irrelevant: talk of 2022 was dropped from HTML5 discussions. But for a while there, it was fascinating to see web designers and developers contemplate a year that seemed ludicrously futuristic. Jeff wrote:

God knows where I’ll be in 13 years. Quite frankly, I’ll be pretty fucking disappointed in myself (and our entire industry) if I’m writing HTML in 13 years.

That always struck me as odd. If I thought like that, I’d wonder what the point would be in making anything on the web to begin with (bear in mind that both my own personal website and The Session are now entering their third decade of life).

I had a different reaction to Jeff, as I wrote in 2010:

Many web developers were disgusted that such a seemingly far-off date was even being mentioned. My reaction was the opposite. I began to pay attention to HTML5.

But Jeff was far from alone. Scott Gilbertson wrote an angry article on Webmonkey:

If you’re thinking that planning how the web will look and work 13 years from now is a little bit ridiculous, you’re not alone.

Even if your 2022 ronc-o-matic web-enabled toaster (It slices! It dices! It browses! It arouses!) does ship with Firefox v22.3, will HTML still be the dominant language of web? Given that no one can really answer that question, does it make sense to propose a standard so far in the future?

(I’m re-reading that article in the current version of Firefox: 95.0.2.)

Brian Veloso wrote on his site:

Two-thousand-twenty-two. That’s 14 years from now. Can any of us think that far? Wouldn’t our robot overlords, whether you welcome them or not, have taken over by then? Will the internet even matter then?

From the comments on Jeff’s post, there’s Corey Dutson:

2022: God knows what the Internet will look like at that point. Will we even have websites?

Dan Rubin, who has indeed successfully moved from web work to photography, wrote:

I certainly don’t intend to be doing “web work” by that time. I’m very curious to see where the web actually is in 14 years, though I can’t imagine that HTML5 will even get that far; it’ll all be obsolete before 2022.

Joshua Works made a prediction that’s worryingly close to reality:

I’ll be surprised if website-as-HTML is still the preferred method for moving around the tons of data we create, especially in the manner that could have been predicted in 2003 or even today. Hell, iPods will be over 20 years old by then and if everything’s not run as an iPhone App, then something went wrong.

Someone with the moniker Grand Caveman wrote:

In 2022 I’ll be 34, and hopefully the internet will be obsolete by then.

Perhaps the most level-headed observation came from Jonny Axelsson:

The world in 2022 will be pretty much like the world in 2009.

The world in 2009 is pretty much like 1996 which was pretty much like the world in 1983 which was pretty much like the world in 1970. Some changes are fairly sudden, others are slow, some are dramatic, others subtle, but as a whole “pretty much the same” covers it.

The Web in 2022 will not be dramatically different from the Web in 2009. It will be less hot and it will be less cool. The Web is a project, and as it succeeds it will fade out of our attention and into the background. We don’t care about things when they work.

Now that’s a sensible perspective!

So who else is looking forward to seeing what the World Wide Web is like in 2036?

I must remember to write a blog post then and link back to this one. I have no intention of trying to predict the future, but I’m willing to bet that hyperlinks will still be around in 14 years.

Speaking of long bets…

Ten down, one to go

The Long Now Foundation is dedicated to long-term thinking. I’ve been a member for quite a few years now …which, in the grand scheme of things, is not very long at all.

One of their projects is Long Bets. It sets out to tackle the problem that “there’s no tax on bullshit.” Here’s how it works: you make a prediction about something that will (or won’t) happen by a particular date. So far, so typical thought leadery. But then someone else can challenge your prediction. And here’s the crucial bit: you’ve both got to place your monies where your mouths are.

Ten years ago, I made a prediction on the Long Bets website. It’s kind of meta:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

I made the prediction on February 22nd, 2011 when my mind was preoccupied with digital preservation.

One year later I was on stage in Wellington, New Zealand, giving a talk called Of Time And The Network. I mentioned my prediction in the talk and said:

If anybody would like to take me up on that bet, you can put your money down.

Matt was also speaking at Webstock. When he gave his talk, he officially accepted my challenge.

So now it’s a bet. We both put $500 into the pot. If I win, the Bletchley Park Trust gets that money. If Matt wins, the money goes to The Internet Archive.

As I said in my original prediction:

I would love to be proven wrong.

That was ten years ago today. There’s just one more year to go until the pleasingly alliterative date of 2022-02-22 …or as the Long Now Foundation would write it, 02022-02-22 (gotta avoid that Y10K bug).

It is looking more and more likely that I will lose this bet. This pleases me.

Prediction

Arthur C. Clarke once said:

Trying to predict the future is a discouraging and hazardous occupation because the prophet invariably falls between two stools. If his predictions sounded at all reasonable, you can be quite sure that in 20 or at most 50 years, the progress of science and technology has made him seem ridiculously conservative. On the other hand, if by some miracle a prophet could describe the future exactly as it was going to take place, his predictions would sound so absurd, so far-fetched, that everybody would laugh him to scorn.

But I couldn’t resist responding to a recent request for augury. Eric asked An Event Apart speakers for their predictions for the coming year. The responses have been gathered together and published, although it’s in the form of a PDF for some reason.

Here’s what I wrote:

This is probably more of a hope than a prediction, but 2021 could be the year that the ponzi scheme of online tracking and surveillance begins to crumble. People are beginning to realize that it’s far too intrusive, that it just doesn’t work most of the time, and that good ol’-fashioned contextual advertising would be better. Right now, it feels similar to the moment before the sub-prime mortgage bubble collapsed (a comparison made in Tim Hwang’s recent book, Subprime Attention Crisis). Back then people thought “Well, these big banks must know what they’re doing,” just as people have thought, “Well, Facebook and Google must know what they’re doing”…but that confidence is crumbling, exposing the shaky stack of cards that props up behavioral advertising. This doesn’t mean that online advertising is coming to an end—far from it. I think we might see a golden age of relevant, content-driven advertising. Laws like Europe’s GDPR will play a part. Apple’s recent changes to highlight privacy-violating apps will play a part. Most of all, I think that people will play a part. They will be increasingly aware that there’s nothing inevitable about tracking and surveillance and that the web works better when it respects people’s right to privacy. The sea change might not happen in 2021 but it feels like the water is beginning to swell.

Still, predicting the future is a mug’s game with as much scientific rigour as astrology, reading tea leaves, or haruspicy.

Much like behavioural advertising.

2001 + 50

The first ten minutes of my talk at An Event Apart Seattle consisted of me geeking about science fiction. There was a point to it …I think. But I must admit it felt quite self-indulgent to ramble to a captive audience about some of my favourite works of speculative fiction.

The meta-narrative I was driving at was around the perils of prediction (and how that’s not really what science fiction is about). This is something that Arthur C. Clarke pointed out repeatedly, most famously in Hazards of Prophecy. Ironically, I used Clarke’s masterwork of a collaboration with Stanley Kubrick as a rare example of a predictive piece of sci-fi with a good hit rate.

When I introduced 2001: A Space Odyssey in my talk, I mentioned that it was fifty years old (making it even more of a staggering achievement, considering that humans hadn’t even reached the moon at that point). What I didn’t realise at the time was that it was fifty years old to the day. The film was released in American cinemas on April 2nd, 1968; I was giving my talk on April 2nd, 2018.

Over on Wired.com, Stephen Wolfram has written about his own personal relationship with the film. It’s a wide-ranging piece, covering everything from the typography of 2001 (see also: Typeset In The Future) right through to the nature of intelligence and our place in the universe.

When it comes to the technology depicted on-screen, he makes the same point that I was driving at in my talk—that, despite some successful extrapolations, certain real-world advances were not only unpredicted, but perhaps unpredictable. The mobile phone; the collapse of the Soviet Union …these are real-world developments that are conspicuous by their absence in other great works of sci-fi like William Gibson’s brilliant Neuromancer.

But in his Wired piece, Wolfram also points out some acts of prediction that were so accurate that we don’t even notice them.

Also interesting in 2001 is that the Picturephone is a push-button phone, with exactly the same numeric button layout as today (though without the * and # [“octothorp”]). Push-button phones actually already existed in 1968, although they were not yet widely deployed.

To use the Picturephone in 2001, one inserts a credit card. Credit cards had existed for a while even in 1968, though they were not terribly widely used. The idea of automatically reading credit cards (say, using a magnetic stripe) had actually been developed in 1960, but it didn’t become common until the 1980s.

I’ve watched 2001 many, many, many times and I’m always looking out for details of the world-building …but it never occurred to me that push-button numeric keypads or credit cards were examples of predictive extrapolation. As time goes on, more and more of these little touches will become unnoticeable and unremarkable.

On the space shuttle (or, perhaps better, space plane) the cabin looks very much like a modern airplane—which probably isn’t surprising, because things like Boeing 737s already existed in 1968. But in a correct (at least for now) modern touch, the seat backs have TVs—controlled, of course, by a row of buttons.

Now I want to watch 2001: A Space Odyssey again. If I’m really lucky, I might get to see a 70mm print in a cinema near me this year.

A minority report on artificial intelligence

Want to feel old? Steven Spielberg’s Minority Report was released fifteen years ago.

It casts a long shadow. For a decade after the film’s release, it was referenced at least once at every conference relating to human-computer interaction. Unsurprisingly, most of the focus has been on the technology in the film. The hardware and interfaces in Minority Report came out of a think tank assembled in pre-production. It provided plenty of fodder for technologists to mock and praise in subsequent years: gestural interfaces, autonomous cars, miniature drones, airpods, ubiquitous advertising and surveillance.

At the time of the film’s release, a lot of the discussion centred on picking apart the plot. The discussions had the same tone as those about time-travel paradoxes, the kind thrown up by films like Looper and Interstellar. But Minority Report isn’t a film about time travel, it’s a film about prediction.

Or rather, the plot is about prediction. The film—like so many great works of cinema—is about seeing. It’s packed with images of eyes, visions, fragments, and reflections.

The theme of prediction was rarely referenced by technologists in the subsequent years. After all, that aspect of the story—as opposed to the gadgets, gizmos, and interfaces—was one rooted in a fantastical conceit; the idea of people with precognitive abilities.

But if you replace that human element with machines, the central conceit starts to look all too plausible. It’s suggested right there in the film:

It helps not to think of them as human.

To which the response is:

No, they’re so much more than that.

Suppose that Agatha, Arthur, and Dashiell weren’t people in a flotation tank, but banks of servers packed with neural nets: the kinds of machines that are already making predictions on trading stocks and shares, traffic flows, mortgage applications …and, yes, crime.

Precogs are pattern recognition filters, that’s all.

Rewatching Minority Report now, it holds up very well indeed. Apart from the misstep of the final ten minutes, it’s a fast-paced twisty noir thriller. For all the attention to detail in its world-building and technology, the idea that may yet prove to be most prescient is the concept of Precrime, introduced in the original Philip K. Dick short story, The Minority Report.

Minority Report works today as a commentary on Artificial Intelligence …which is ironic given that Spielberg directed a film one year earlier ostensibly about A.I. In truth, that film has little to say about technology …but much to say about humanity.

Like Minority Report, A.I. was very loosely based on an existing short story: Super-Toys Last All Summer Long by Brian Aldiss. It’s a perfectly-crafted short story that is deeply, almost unbearably, sad.

When I had the great privilege of interviewing Brian Aldiss, I tried to convey how much the story affected me.

Jeremy: …the short story is so sad, there’s such an incredible sadness to it that…

Brian: Well it’s psychological, that’s why. But I didn’t think it works as a movie; sadly, I have to say.

At the time of its release, the general consensus was that A.I. was a mess. It’s true. The film is a mess, but I think that, like Minority Report, it’s worth revisiting.

Watching now, A.I. feels like a horror film to me. The horror comes not—as we first suspect—from the artificial intelligence. The horror comes from the humans. I don’t mean the cruelty of the flesh fairs. I’m talking about the cruelty of Monica, who activates David’s unconditional love only to reject it (watching now, both scenes—the activation and the rejection—are equally horrific). Then there’s the cruelty of the people who created an artificial person capable of deep, never-ending love, without considering the implications.

There is no robot uprising in the film. The machines want only to fulfil their purpose. But by the end of the film, the human race is gone and the descendants of the machines remain. Based on the conduct of humanity that we’re shown, it’s hard to mourn our species’ extinction. For a film that was panned for being overly sentimental, it is a thoroughly bleak assessment of what makes us human.

The question of what makes us human underpins A.I., Minority Report, and the short stories that spawned them. With distance, it gets easier to brush aside the technological trappings and see the bigger questions beneath. As Al Robertson writes, it’s about leaving the future behind:

SF’s most enduring works don’t live on because they accurately predict tomorrow. In fact, technologically speaking they’re very often wrong about it. They stay readable because they think about what change does to people and how we cope with it.

Long betting

It has been exactly six years to the day since I instantiated this prediction:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

It is exactly five years to the day until the prediction condition resolves to a Boolean true or false.

If it resolves to true, The Bletchley Park Trust will receive $1000.

If it resolves to false, The Internet Archive will receive $1000.

Much as I would like Bletchley Park to get the cash, I’m hoping to lose this bet. I don’t want my pessimism about URL longevity to be rewarded.

So, to recap, the bet was placed on

02011-02-22

It is currently

02017-02-22

And the bet times out on

02022-02-22.

The long prep

The secret to a good war movie is not in the depiction of battle, but in the depiction of the preparation for battle. Whether the fight will be for Agincourt, Rorke’s Drift, Helm’s Deep or Hoth, it’s the build-up that draws you in and makes you care about the outcome of the upcoming struggle.

That’s what 2011 has felt like for me so far. I’m about to embark on a series of presentations and workshops in far-flung locations, and I’ve spent the first seven weeks of the year donning my armour and sharpening my rhetorical sword (so to speak). I’ll be talking about HTML5, responsive design, cultural preservation and one web; subjects that are firmly connected in my mind.

It all kicks off in Belgium. I’ll be taking a train that will go under the sea to get me to Ghent, location of the Phare conference. There I’ll be giving a talk called All Our Yesterdays.

This will be a non-technical talk, and I’ve been given carte blanche to get as high-falutin’ and pretentious as I like …though I don’t think it’ll be on quite the same level as my magnum opus from dConstruct 2008, The System Of The World.

Having spent the past month researching and preparing this talk, I’m looking forward to delivering it to a captive audience. I submitted the talk for consideration to South by Southwest also, but it was rejected so the presentation in Ghent will be a one-off. The SXSW rejection may have been because I didn’t whore myself out on Twitter asking for votes, or it may have been because I didn’t title the talk All Our Yesterdays: Ten Ways to Market Your Social Media App Through Digital Preservation.

Talking about the digital memory hole and the fragility of URLs is a permanently-relevant topic, but it seems particularly pertinent given the recent moves by the BBC. But I don’t want to just focus on what’s happening right now—I want to offer a long-zoom perspective on the web’s potential as a long-term storage medium.

To that end, I’ve put my money where my mouth is—$50 worth so far—and placed the following prediction on the Long Bets website:

The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.

If you have faith in the Long Now Foundation’s commitment to its URLs, you can challenge my prediction. We shall then agree the terms of the bet. Then, on February 22nd 2022, the charity nominated by the winner will receive the winnings. The minimum bet is $200.

If I win, it will be a pyrrhic victory, confirming my pessimistic assessment.

If I lose, my faith in the potential longevity of URLs will be somewhat restored.

Depending on whether you see the glass as half full or half empty, this means I’m either entering a win/win or lose/lose situation.

Care to place a wager?

Tomorrow and tomorrow and tomorrow

Past visions of the future

This 1969 vision of a future household is remarkably prescient in some ways but unfortunately dated when it comes to gender roles:

What the wife selects on her console will be paid for by the husband on his counterpart console.

The Internet in 1969

By 1993, AT&T were showing a more equally balanced vision of the future.

AT&T 1993 “You Will” Ads

Given that those ads are a mere sixteen years old, it’s hardly surprising that they’re generally pretty accurate (although mobile phones are conspicuous by their absence).

Paul Saffo said we routinely overestimate short-term change and underestimate long-term change, but it’s those long-term predictions that are the most entertaining and fascinating.

Paleo Future is a blog by Matt Novak dedicated to “a look into the future that never was”. The archive is structured by decade, going back to the 1880s. The site is an orgasmofest of steampunk, retro-journalistic soothsaying and zeppelin-inspired musical theatre (calm down, Simon).

Meanwhile, looking in the other direction of the light cone, why has it taken me so long to discover Near Future Laboratory? Julian Bleecker and friends seek out design and cultural trends, often viewed through the lens of science fiction (see, for example, Design Fiction Chronicles: The Stability of Food Futures).

In some ways, science fiction is the safe route to future prediction. Paul Saffo again:

Wild cards sensitize us to surprise, and they push the edges of the cone out further. You can call weird imaginings a wild card and not be ridiculed. Science fiction is brilliant at this, and often predictive, because it plants idea bombs in teenagers which they make real 15 years later.

The expectations set by science fiction result in the hipster chic of wearing a T-shirt emblazoned in Helvetica with where’s my jetpack?

Ray Bradbury takes another tack:

People ask me to predict the future, when all I want to do is prevent it. Better yet, build it. Predicting the future is much too easy, anyway. You look at the people around you, the street you stand on, the visible air you breathe, and predict more of the same. To hell with more. I want better.

Science fiction doesn’t just show us the future we hope for further down the light cone; it also shows us the design and culture we want to prevent.

Brazil (Terry Gilliam, 1985) - Ministry of Information

The Audio of the System of the World

Four months after the curtain went down on dConstruct 2008, the final episode of the podcast of the conference has just been published. It’s the audio recording of my talk The System Of The World.

I’m very happy indeed with how the talk turned out: dense and pretentious …but in a good way, I hope. It’s certainly my favourite from the presentations I have hitherto delivered.

Feel free to:

The whole thing is licensed under a Creative Commons attribution licence. You are free—nay, encouraged—to share, copy, distribute, remix and mash up any of those files as long as you include a little attribution lovin’.

If you’ve got a Huffduffer account, feel free to huffduff it.