• The Future, Present, and Past of News

    Eleventh in the News Commons series.

    all experience is an arch wherethro’
    Gleams that untravell’d world whose margin fades
    For ever and forever when I move.
    Alfred, Lord Tennyson, in Ulysses

    News flows. It starts with what’s coming up, goes through what’s happening, and ends up as what’s kept—if it’s lucky.

    Facts take the same route. But, since lots of facts don’t fit stories about what’s happening, they aren’t always kept, even if they will prove useful in the future. (For more on that, see Stories vs. Facts.)

    But we need to keep both stories and facts, and not just for journalists. Researchers and decision-makers of all kinds need all they can get of both.

    That’s why a news commons needs to take care of everything from what’s coming up through what’s happened, plus all the relevant facts, whether or not they’ve shown up in published stories. We won’t get deep, wide, or whole news if we don’t facilitate the whole flow of news and facts from the future to the past.

    Let’s call this the Tennyson model, after Tennyson’s Ulysses, excerpted above. In this model, the future is a calendar such as the one in DatePress. The present is news reporting. The past is archives.

    Calendars are easy to make. They are also easy to feed into other calendars. For example, take the Basic Government Calendar of Bloomington, Indiana. That one is built from 50+ other calendars (to which it subscribes—and so can you). The Big Big Calendar (be patient: it takes a while to load) covers many other journalistic beats besides government (the beat of the B Square Bulletin, which publishes both).
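    To make the plumbing concrete, here is a minimal sketch of calendar aggregation in Python, assuming the requests and icalendar packages. The feed URLs are hypothetical stand-ins, not the B Square Bulletin’s actual sources.

        # Merge several published iCalendar feeds into one community calendar.
        import requests
        from icalendar import Calendar

        FEEDS = [
            "https://example.org/city-council.ics",   # hypothetical feed
            "https://example.org/school-board.ics",   # hypothetical feed
        ]

        merged = Calendar()
        merged.add("prodid", "-//Community Calendar Aggregator//EN")
        merged.add("version", "2.0")

        for url in FEEDS:
            source = Calendar.from_ical(requests.get(url, timeout=10).content)
            for event in source.walk("VEVENT"):   # copy each event into the merged calendar
                merged.add_component(event)

        with open("community.ics", "wb") as f:
            f.write(merged.to_ical())

    Publish the resulting community.ics file and anyone can subscribe to it, just as it subscribes to its sources.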

    We describe approaches to archives in The Online Local Chronicle and Archives as Commons. Here in Bloomington, we have two examples already with BloomDocs.org and The Bloomington Chronicle. Both are by Dave Askins of the B Square Bulletin using open-source code. And both are new.

    Relatively speaking, Bloomington is a news oasis (see our list of media life forms in Wide News) in a world where news deserts are spreading. So we’ve got a lot to work with. If you want to help with any of it, let me know.


  • The Personal Internet

    —is not this:

    And for too many of them, password reset is the new login.

    By now we take it for granted.

    To live your digital life on the Internet, you need accounts. Lots of them.

    Everything on the Internet that requires an account has a lock on you—for their convenience. They don’t know any other way. That’s because all the services we use in the online world operate inside a model called client-server, which looks like this:

    This is great for the server operator, but not for the rest of us:

    This was not the promise of the Internet described by John Perry Barlow in Death From Above in 1995 and A Declaration of the Independence of Cyberspace in 1996. Nor was it what Chris Locke, Rick Levine, David Weinberger, and I said in The Cluetrain Manifesto’s ‘one clue’—

    —and in the 95 Theses that followed, starting with these:

    Since I’ve often heard that Cluetrain somehow helped cause social media, I asked ChatGPT and Perplexity the same question: Did The Cluetrain Manifesto help start social media? The answers were roughly the same, but Perplexity provided sources, so I’ll quote its answer:

    The Cluetrain Manifesto did not directly start social media, but it did play a significant role in predicting and shaping the philosophy behind social media marketing and online communication. Published in 1999, the manifesto anticipated many of the changes that would occur with the rise of social media and online conversations. (link) (link)

    Key points about The Cluetrain Manifesto’s influence on social media:

    1. Predicted global conversations: The manifesto accurately foresaw that the internet would enable powerful global conversations, allowing people to share knowledge rapidly (link).
    2. Emphasized authentic communication: It stressed the importance of companies engaging in genuine, human conversations with customers, which became a cornerstone of social media marketing (link) (link).
    3. Highlighted networked markets: The manifesto recognized that customers were becoming more connected and influential through online networks, a concept that social media platforms later embodied (link).
    4. Challenged traditional marketing: It criticized the “homogenized voice” of traditional business communication, encouraging a more authentic and conversational approach that social media later facilitated (link).
    5. Empowered employees: The manifesto suggested that companies should empower employees to engage with customers, an idea that became crucial in social media strategies (link).
    6. Recognized the shift in power: It predicted that customers would gain more influence over companies through online conversations, which social media platforms later enabled (link).

    While The Cluetrain Manifesto didn’t create social media technologies, it provided a philosophical foundation that aligned closely with how social media would eventually function. Its ideas about authentic communication, networked conversations, and customer empowerment became central to social media marketing strategies and practices (link) (link) (link).

    Well, fine. But look again at the “one clue” that started it all.

    We are all still seats, eyeballs, end users, and consumers. Our reach still does not exceed the grasp of corporate, government, and other graspy entities, because all of them operate their services on the client-server model. And we have not yet given them something to deal with.

    Also, so long as nearly everyone believes that client-server is the only way to go, the fact that the Internet is end-to-end beneath everything done on it remains irrelevant. Nothing in any of these (and many other) efforts before and since has done anything to change the damned Internet of Accounts:

    I do, however, see hope coming from three angles.

    First is self-sovereign identity, or SSI. I’ve written about SSI in many ways and places over the years, but perhaps the best is New Hope for Digital Identity, published in Linux Journal back in 2017. What SSI wishes to replace is the familiar client-server model, in which you are the account holder and two servers are the “identity provider” and the “relying party.” In that system, your “ID” is what you get from the identity provider and its server. With SSI, you have a collection of verifiable credentials issued by the DMV, your church, your school, a performance venue, whatever. They get verified by an independent party in a trustworthy way. You’re not just a client or an account holder. You disclose no more than what’s required, on an as-needed basis.
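    To make that concrete, here is a minimal sketch of a verifiable credential, modeled on the W3C Verifiable Credentials data model and expressed as a Python dict. The issuer, subject, and proof values are hypothetical placeholders.

        # A driver-license credential you hold yourself and present as needed.
        credential = {
            "@context": ["https://www.w3.org/2018/credentials/v1"],
            "type": ["VerifiableCredential", "DriverLicenseCredential"],
            "issuer": "did:example:dmv",                 # e.g., the DMV
            "issuanceDate": "2024-06-01T00:00:00Z",
            "credentialSubject": {
                "id": "did:example:alice",               # you, via an identifier you control
                "licensedToDrive": True,
            },
            "proof": {
                "type": "Ed25519Signature2020",          # the issuer's signature
                "verificationMethod": "did:example:dmv#key-1",
                "proofValue": "...",                      # elided
            },
        }

    The point is in the shape: the credential lives with you, any verifier can check the issuer’s signature independently, and you can present just this one claim without handing over the rest of your wallet.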

    Second is contract. Specifically, terms we proffer as first parties and the sites and services of the world agree to as second parties. Guiding the deployment of those is IEEE P7012 Standard for Machine Readable Personal Privacy Terms, which I’ve called the most important standard in development today. I’m the chair of the P7012 working group, which has been on the case since 2017. The standard is now drafted and moving through the IEEE’s approval mill. If all goes well, it will be ready early next year. It works like this:

    Possible agreements sit at a public website. Customer Commons was created for this purpose, and to do for personal contracts what Creative Commons does for personal copyrights. The person’s agent, such as a browser, acting as the first party, tells the second party (an entity of any kind) what agreement the person has chosen from a small roster of them (again, on the Creative Commons model). The entity either agrees or declines. If the two agree, the decision is recorded identically by both parties. If the entity declines, that decision is also recorded on the person’s side.
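    Here is one way to picture that handshake, as a minimal Python sketch under my own assumptions about naming and record-keeping; the actual P7012 standard may specify the mechanics differently.

        from dataclasses import dataclass
        from datetime import datetime, timezone
        import hashlib

        @dataclass
        class AgreementRecord:
            terms_id: str        # e.g., "P2B1" (#NoStalking) at a public registry
            first_party: str     # the person, or their agent
            second_party: str    # the site or service
            accepted: bool
            timestamp: str

        def proffer(terms_id: str, person: str, entity: str, entity_agrees: bool) -> AgreementRecord:
            """The person's agent proffers terms; the entity agrees or declines."""
            record = AgreementRecord(
                terms_id=terms_id,
                first_party=person,
                second_party=entity,
                accepted=entity_agrees,
                timestamp=datetime.now(timezone.utc).isoformat(),
            )
            # Both sides keep the same record; a shared digest lets either side
            # later prove the two copies match.
            digest = hashlib.sha256(repr(record).encode()).hexdigest()
            print(f"recorded {record.terms_id} accepted={record.accepted} digest={digest[:12]}")
            return record

        # Usage: a browser (first party) proffers #NoStalking to a news site.
        proffer("P2B1", "did:example:alice", "https://news.example.com", entity_agrees=True)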

    Customer Commons has one such agreement already, called P2B1 (beta), or #NoStalking. As with all contracts, there’s something in it for both parties. With #NoStalking, the person isn’t tracked away from the site or service, and the site or service still gets to advertise to the person. Customer Commons (for which I am a founder and board member) plans to have a full list of agreements ready before the end of this year. If this system works, it will replace the Internet of Accounts with something that works far better for everyone. It will also put the brakes on uninvited surveillance, big time.

    Third is personal AI. This is easy to imagine if you have your own AI working on your side. It can know what kind of agreement you prefer to proffer to different kinds of sites and services. It can also help remember all the agreements that have been made already, and save you time and energy in other ways. AI on the entities’ sides can also be involved. Imagine two robot lawyers shaking hands and you can see where this might go.

    There are a variety of personal (not just personalized) AI efforts out there. The one I favor, because it’s open source and inspired by The Intention Economy, is Kwaai.ai, a nonprofit community of volunteers where I also serve as chief intention officer.

    I welcome your thoughts. Also your work toward replacing the Internet of Accounts with the Internet of People—plus every other entity that welcomes full personal agency.

  • A very local storm

    It was a derecho, or something like one. The gust front you see in the third image here

    Derecho. (2024, June 24). In Wikipedia. https://en.wikipedia.org/wiki/Derecho

    —looks a lot like the storm front in the top image above (via Weatherbug, storm tracker view). I’d experienced one twelve years ago, in Arlington, Mass. It felt like a two-minute hurricane, and when it was over, hundreds of trees were down. This one wasn’t as bad, but TwitteX seemed to agree that it was a derecho. And we did have many broken trees and power outages. Here’s one example of the former:

    That’s half a huge silver maple. Very luckily, it missed the house and only trashed the metal fence. Pretty house, too.

    There was a lot of lightning as well. Dig:

    You can barely see the blue dot in the middle, but that’s where we live. One of those red dots is about a hundred feet from where I’m writing this.

    If you’re into this kind of stuff, I recommend the My Lightning Finder app, which produced the above. Also LightningMaps.org on the Web. That one shows thunder as gray circles expanding and fading at the speed of sound moving outward from a lightning dot. Of course, lots of those lightning dots are lines in clouds, or zig-zags between ground and sky. They aren’t all “strikes.”

    But when lightning does strike, one of my favorite storm sounds is a loud crack, then “Cccchhhheeeeeooooowwwwwww” before a BOOM of thunder slams the ground. What you’re hearing after the crack is sound coming off the length of the lightning strike, starting at the ground and moving up to the cloud above. The volume and pitch of the sound go down as it originates from farther and farther away along the length of the bolt itself. The BOOM comes from the top of the bolt, which typically fans out inside the cloud in a shape something like the letter T, with the crossbar parallel to the ground. Listen for the whole show the next time you’re in a storm and lightning strikes nearby.
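    For the arithmetic behind that stretched-out sound, here is a back-of-the-envelope sketch in Python. The distances are made up, and sound is assumed to travel at roughly 343 meters per second.

        SPEED_OF_SOUND = 343.0  # meters per second, near the ground

        def arrival_delay(distance_m: float) -> float:
            """Seconds after the flash for sound from a point at this distance."""
            return distance_m / SPEED_OF_SOUND

        # A strike 100 m away whose channel rises to a cloud base around 2,000 m:
        for point, d in [("ground (the crack)", 100),
                         ("mid-channel", 1000),
                         ("cloud top (the BOOM)", 2000)]:
            print(f"{point}: {arrival_delay(d):.1f} s after the flash")
        # The crack arrives in about 0.3 s; the boom from the top, nearly 6 s later.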

    [Later, 1:00 am on 27 June…] Much of the city is still without power. Also, this photo on Reddit, shot from the west side Aldi, shows the kind of rain that produces a derecho.

    Also, from the Monroe County Emergency Management Storm Update 1 p.m. on 6/27/24:

    More than 100 damage reports have been collected by Indiana 211 following Tuesday’s storm event in Monroe County. The National Weather Service confirmed straight-line winds of 65-75 mph damaged homes and property in Monroe, Owen, Vigo and Clay counties. All four counties have declared, or plan to declare, local disasters due to the incident. No reports of significant injuries have been received at this point. Impacted residents in these four counties are asked to call 211 to report damage or visit the website at https://in211.org.
    Damage
    Most of these reports involve overturned trees into homes, some with significant damage. Only a handful of residents reported their homes to be uninhabitable. The local Salvation Army reported only two residents utilized the shelter established at their location, 111 N. Rogers St… As of 1 p.m. Thursday, approximately 28,000 power outages were reported across Indiana, with nearly 16,000 of those reported in Monroe County.

    Thanks to Dave Askins of the B Square Bulletin for that one.

  • Does personal AI require Big Compute?

    I don’t think it does. Not for everything.

    We already have personal AI for autocomplete. Do we need Big Compute for a personal AI to tell us which pieces within our Amazon orders are in which line items in our Visa statements? (Different items in a shipment often appear inside different charges on a card.) Do we need Big Compute to tell us who we had lunch with, and where, three Fridays ago? Or to give us an itemized list of all the conferences we attended in the last five years? Or what tunes or podcasts we’ve played (or heard) in the last two months (for purposes such as this one)?
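    To show what a small-compute job the Amazon question is, here is a toy sketch that matches hypothetical order line items to card charges by finding subsets of prices that sum to each charge. A real tool would also match dates, tax, and shipping.

        from itertools import combinations

        order_items = {"USB cable": 9.99, "Headphones": 54.50,
                       "Book": 18.25, "Batteries": 12.99}
        card_charges = [64.49, 31.24]  # one order, billed as two charges

        def match_charge(items: dict, charge: float, tol: float = 0.01):
            """Return the first subset of items whose prices sum to the charge."""
            for r in range(1, len(items) + 1):
                for combo in combinations(items, r):
                    if abs(sum(items[name] for name in combo) - charge) <= tol:
                        return combo
            return None

        for charge in card_charges:
            print(f"${charge:.2f} <- {match_charge(order_items, charge)}")
        # $64.49 <- ('USB cable', 'Headphones')
        # $31.24 <- ('Book', 'Batteries')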

    What if we want a list of all the books on our shelves, using something like OpenCV to detect text in natural scene images with the EAST text detector? Or to use the same kind of advanced pattern recognition to catalog everything we can point a phone camera at in our homes? Even if we need to hire models from elsewhere to help us out, onboard compute should be able to do a lot of it, and to keep our personal data private.
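    For the bookshelf case, here is a compressed sketch of the OpenCV-plus-EAST approach. It assumes the standard pretrained frozen_east_text_detection.pb model file and a hypothetical bookshelf photo, and it leaves out the box decoding, overlap suppression, and OCR steps a real cataloger would need.

        import cv2
        import numpy as np

        net = cv2.dnn.readNet("frozen_east_text_detection.pb")  # pretrained EAST detector
        image = cv2.imread("bookshelf.jpg")                     # hypothetical photo
        W, H = 640, 640                                         # EAST wants multiples of 32
        blob = cv2.dnn.blobFromImage(cv2.resize(image, (W, H)), 1.0, (W, H),
                                     (123.68, 116.78, 103.94), swapRB=True, crop=False)
        net.setInput(blob)
        scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                        "feature_fusion/concat_3"])

        # scores holds a text/no-text confidence for each 4x4-pixel cell;
        # geometry holds the offsets needed to rebuild boxes around the text.
        ys, xs = np.where(scores[0, 0] > 0.5)
        print(f"{len(xs)} candidate text cells found")
        # Feeding the decoded boxes to an OCR engine such as Tesseract would
        # turn spine text into a book list—all on local hardware.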

    Right now your new TV is reporting what you watch back to parties unknown. Your new car is doing the same. Hell, so is your phone. What if you had all that data? Won’t you be able to do more with it than the spies and their corporate customers can?

    It might be handy to know all the movies you’ve seen and series you’ve binged on your TV and other devices—including, say, the ones you’ve watched on a plane. And to remember when it was you drove to that specialty store in some other city, what the name of it was, and what was the good place you stopped for lunch on the way.

    This data should be yours first—and alone—and shared with others at your discretion. You should be able to do a lot more with information gathered about you than those other parties can—and personal AI should be able to help you do it without relying on Big Compute (beyond having its owners give you back whatever got collected about you).

    At this early stage in the evolution of AI, our conceptual frame for AI is almost entirely a Big Compute one. We need much more thinking out loud about what personal AI can do. I’m sure the sum of it will end up being a lot larger than what we’re getting now from Big AI.


  • Jayson Tatić and the Boston Celtićs

    This is the best illustration I could get out of ChatGPT 4o. It’ll do until I have it get a better one.

    Nobody’s talking about this, so I will: Jayson Tatum is playing a decoy. More to the point, he is playing Jokić, Dončić, or a bit of both. Not all the time (such as when he’s doing one of those step-back threes with lots of time on the clock), but enough. So let’s call him Jayson Tatić.

    Because on offense he’s pulling in double and triple teams and passing expertly to open men. Over and over again. And the passes turn into assists because he is connected to those men. That’s the way the Boston Celtićs work under Joe Mazzulla. Connection is everything. They are a team of fully capable all-stars, each willing to give up their own ego and stats for the sake of the team.

    So, while the talking heads and talking ’casts go on about how poor Tatum’s offense seems to be, they miss the misdirection. They assume Jayson Tatum always wants to play hero ball, because he can, and because that’s the guy he is. They don’t get that he’s really Jayson Tatić, and his feint is that he’s always going to shoot, that he’s always going to post up and go one-on-two or one-on-few. Meanwhile, what he’s really doing is pulling in a defense that gives him open men, all of whom he knows, because he’s connected to them psychically, audibly (they talk!), and manually. He is always working to pass, which he does expertly.

    Yeah, he turns it over sometimes. So what. He gets assists because he’s a one-man wrecking crew of misdirection, especially when he gets downhill. And the man can pass.

    When this series is over, and Boston takes it 4 to 3, 2, 1, or 0, and Jaylen Brown or Jrue Holiday gets the MVP (like Andre Iguodala did a few years back), the Celtics’ success will owe in no small way to Jayson’s teamwork.

    There’s a game tonight, so watch for it.

    [Later…June 18, 2024…] Well, the Celtics got crushed almost immediately after I wrote the line above. But that didn’t matter, because in the next game, which happened last night in Boston, the Celtics crushed the Mavs and won the team’s record 18th championship. Tatum had 11 assists. And he might have had more if he and Jaylen Brown weren’t staying in a won game to pad their stats toward a possible MVP award. (That’s how it looked to me, anyway.) Brown had eight assists and won the MVP (by a 4-3 margin, it turned out), deservedly, because the award was for the whole series. What mattered more, though, was that the best scorer on the team dished the ball a lot.

    A final word: as a Knicks fan from way back, it was painful to see Julius Randle on the ABC panel of talkers after the game. The man was obviously not comfortable in that role. But he did get a chance to say one of his jobs next season will be stopping the Celtics. Clearly the Knicks and the Celtics are now the cream of the East. Next season will be good for defense-loving fans of East Coast basketball.

  • Archiving a Way

    These are the men who strung and assembled the cables that hold up the George Washington Bridge roadway. After this was shot, on July 25, 1929, the cable bundles were compressed, sheathed, and draped with suspension cables to the new roadway that would be built below over the next few months. The photo is from the collection of Allen H. Searls, the gap-toothed guy with a big grin at the center of the photo. He was 21 at the time.

    My father, Allen H. Searls, was an archivist. Not a formal one, but good in the vernacular, at least when it came to one of the most consequential things he did in his life: helping build the George Washington Bridge. He did this by photographing his work and fellow workers. He shot with a Kodak camera, and then developed and printed each shot, and assembled them into an album that survives to this day. All the shots in this collection are from that album. I’ve archived them in my Flickr site focused on infrastructure. I have also Creative Commons licensed them to require only attribution. (Though I’d rather people credit Allen H. Searls than his son.)

    Only two of the photos are dated. One is July 25, 1929, when workers and families celebrated the completion of the cable stringing operation. The other is July 25, 1930, presumably when the roadway was completed. I was able to piece together the significance of these dates, and guess at the date ranges of other pictures, by doing a deep dive into the New York Times archive (where I found that these guys were called “bridgemen”), and by reading my father’s copy of the Roebling Cable company’s book about how the bridge’s cables were made and put in place.

    As we know now, almost too well, we live in an Age of AI, when the entire corpus of the Internet, and God only knows what else, has been ingested into large language models that are trained and programmed to answer questions about what they “know” (even though they don’t, really). Meanwhile what do we, as human beings, actually know? Or, better yet, where can we find what we need or want to know? Libraries of the physical kind are necessary but insufficient when our instruments of inquiry are entirely electronic. The World Wide Web has turned into the World Wide Whiteboard.

    We need electronic archives. Simple as that.

    We all know (and, I hope, appreciate) the Internet Archive. I was going to give my father’s copy of the Roebling book to the Archive for scanning, but discovered it’s already there! That made the book easy to read (here on this flat screen), and saved my own fragile copy from further damage.

    A heroic example of one person’s work to save the whole corpus of literature on broadcasting is David Gleason’s World Radio History site, without which much knowledge of broadcasting history would be lost.

    On the contemporary front, Dave Askins’ work toward online local news chronicles may prove essential if we don’t want to see current and future news disappear down the same holes into which too many past publications’ work has fallen (or remains alive but trapped behind paywalls).

    For more on where this might go, see my Archives as Commons post. I’ll be talking about this, I hope, at the upcoming WoW Conference and DWeb Camp.


  • A Fun AI Fail

    Here is me, trying to get ChatGPT (version 4o, which I pay for) to give me an illustration to use in my last post here, titled The People’s AI. But don’t go there yet (if you haven’t already). What I ended up using there is a punchline at the end of the dialog that starts here—

    Me: A vast field on which the ground spells the letters A and I, with people on it, having a good time.

    ChatGPT:

    Here’s the illustration of a whimsical outdoor scene with a vast field shaped into the letters ‘A’ and ‘I’, where people are enjoying various activities, creating a cheerful and lively atmosphere.

    Me: Make the two letters upper case A and upper case I, both with serifs.

    ChatGPT: 

    Here’s the updated illustration of a vast field shaped into the uppercase letters ‘A’ and ‘I’, both with serifs, where people are enjoying a sunny day outdoors. The setting is lively and festive, enhancing the creative landscape.

    Me: Those are not the letters A or I, and lack serifs. Try again, with A on the left, and I on the right.

    ChatGPT: 

    Here’s the updated illustration showing a vast field shaped into the uppercase serif letters ‘A’ and ‘I’, with ‘A’ on the left and ‘I’ on the right. The field is vibrant and bustling with people enjoying various outdoor activities.

    Me: Better, but there is no I on the right.

    ChatGPT:

    Here’s the updated illustration showing a vast field clearly shaped into the uppercase serif letters ‘A’ and ‘I’, with ‘A’ on the left and ‘I’ on the right. The field is lively with people engaging in various activities, set in a sunny and cheerful environment.

    Me: That’s an interesting design, but I need an A and an I next to each other.

    ChatGPT:

    Here’s the updated illustration showing the vast field clearly shaped into the adjacent uppercase serif letters ‘A’ and ‘I’. The setting is vibrant and festive, surrounded by people enjoying various activities.

    Me: That’s one big A, not an A and an I. Give me both.

    ChatGPT:

    Here’s the updated illustration showing a vast field clearly shaped into the uppercase serif letters ‘A’ and ‘I’, side by side. The setting is lively and festive, filled with people enjoying various activities.

    Me: That has no serifs and is still just an A. I want an A and an I. Separate.

    ChatGPT:

    Here’s the updated illustration showing a vast field clearly shaped into the uppercase serif letters ‘A’ and ‘I’, side by side. The setting is vibrant, with people enjoying various outdoor activities.

    Me: Still not separate. That’s one big A. I want an I next to it.

    ChatGPT:

    Here’s the updated illustration, clearly showing the vast field shaped into the separate uppercase serif letters ‘A’ on the left and ‘I’ on the right. The scene is lively with people enjoying various activities under a sunny sky.

    Me: Let’s try this: An “A” and an “I” in an open field full of people.

    ChatGPT: 

    Here’s the illustration showing an open field shaped into the uppercase letters ‘A’ and ‘I’, with people enjoying various activities in the area. The scene is lively and sunny, capturing a cheerful outdoor gathering.

    Me: Better! Now give me the same thing, with a complete “a.”

    ChatGPT:  We are experiencing heavy server load. To ensure the best experience for everyone, we have rate limits in place. Please wait for 2 minutes before generating more images. Let me know if there’s anything else you’d like to discuss in the meantime!

    Me (after a 3 minute pause): Thanks. Now give me the same image, but with a complete “A.”

    ChatGPT: 

    Here’s the updated illustration showing a vast open field creatively landscaped into two distinct uppercase ‘A’ letters, bustling with people enjoying various activities in a sunny, cheerful setting.

    (Yes, it created both images.)

    Me: Give me the same, but with people surrounding the “A” and the “I.” Also, please put serifs on the “I.”

    ChatGPT:

    Here’s the updated illustration showing a vast open field creatively landscaped into the shapes of the uppercase letters ‘A’ and ‘I’, both with serifs. The field is surrounded by people enjoying various activities in a sunny, cheerful setting.

    Me: Try the same, but with “a” and “i” as separate letters.

    ChatGPT: 

    Here’s the updated illustration showing a vast open field creatively landscaped into the shapes of the lowercase letters ‘a’ and ‘i’. The setting is surrounded by people enjoying various activities in a sunny, cheerful atmosphere.

    Me: Closer. Only now take it out of the city. I want only countryside.

    ChatGPT:

    Here’s the updated illustration showing a vast open field in the countryside, creatively landscaped into the shapes of the lowercase letters ‘a’ and ‘i’. The setting is serene and picturesque, surrounded by people enjoying rural activities.

    Me: Please make the “a” and the “i” separate.

    ChatGPT:

    Here’s the updated illustration showing a vast open field in the countryside, creatively landscaped into the distinctly separated lowercase letters ‘a’ and ‘i’. The setting is serene, surrounded by people enjoying rural activities in a picturesque and peaceful outdoor gathering.

    At this point I gave up and decided to give Microsoft Copilot a whack—

    Me: A vast field on which the ground spells the letters A and I, with people on it, having a good time.

    Microsoft Copilot:

    I decided not to push my luck, and used the one on the bottom right.

    It’s fun to know I am not alone in this. Here is a brilliant video of somebody far more patient than I getting ChatGPT to spell HONDA in ASCII art.

    Early days, folks. Early days.


  • The People’s AI

    Prompt: “A vast field on which the ground spells the letters A and I, with people on it, having a good time.” Via Copilot | Designer

    People need their own AIs. Personally and collectively.

    We won’t get them from Anthropic, Google, OpenAI, Meta, or Microsoft. Not even from Apple.

    All those companies will want to provide AIaaS: AI as a Service, rather than AI that’s yours alone. Or ours, collectively.

    The People’s AI can only come from people. Since it will be made of code, it will come from open-source developers working for all of us, and not just for their employers—even if those employers are companies listed above.*

    That’s how we got Linux, Apache, MySQL, Python, and countless other open-source code bases on which the digital world is now built from the ground up. Our common ground is open-source code, standards, and protocols.

    The sum of business that happens atop that common ground is incalculably vast. It also owes to what we first started calling because effects twenty years ago at Bloggercon. That was when people were making a lot more money because of blogging than with blogging.

    Right after that it also became clear that most of the money being made in the whole tech world was because of open-source code, standards, and protocols, rather than with them. (I wrote more about it here, here, and here.)

    So, thanks to because effects, the most leveraged investments anyone can make today will be in developing open source code for The People’s AI.

    That’s the AI each of us will have for our own, and that we can use both by ourselves and together as communities.

    Those because investments will pay off on the with side as lavishly as investments in TCP/IP, HTTP, Linux, and countless other open-source efforts have delivered across the last three decades.

    Only now they’ll pay off a lot faster. For all of us.


    *See what I wrote for Linux Journal in 2006 about how IBM got clueful about paying kernel developers to work for the whole world and not just one company.

  • Blog + Newsletter

    Newsletters are all the rage now. In recognition of that, I blogged here two years ago about the idea of writing a solo newsletter. Since then I’ve been co-producing this one with Katherine Druckman at Reality 2.0. It’s a Substack newsletter, so I know how that game works at both the production and consumption ends.

    Recently I also learned that WordPress makes it easy to turn blogs into newsletters, which is why you see “Get New Posts by Email” in the right column here. So, wanting an image to go with the news that this blog can now be a newsletter if you like, I said to ChatGPT, “Draw me Doc Searls blogging on the left side of the image, and producing a newsletter on the right side.” It gave me this, including the caption:

    Here’s the illustration depicting Doc Searls engaging in two different activities. On the left, he is blogging, and on the right, he is producing a newsletter. This split scene captures his multitasking abilities in digital content creation.

    Except for the slouch, I never looked like that. But what the hell. AI is wacky shit, so there ya go.

    Feel free to subscribe.

  • Personal vs. Personalized AI

    There is a war going on. Humanity and nature are on one side and Big Tech is on the other. The two sides are not opposed. They are orthogonal. The human side is horizontal and the Big Tech side is vertical.*

    The human side is personal, social, self-governed, heterarchical, open, and grounded in the physical world. Its model is nature, and the cooperative contexts in which competition, creation, and destruction happen in the natural world.

    The Big Tech side is corporate, industrial, hierarchical, competitive, mechanistic, extractive, and closed, even though it produces many positive-sum products and services that are good for people and good for nature. It is also, being competitive and rewarding toward winner-take-most outcomes, dominated by giants.

    This war has been fought over many other things in the past, especially in tech. But AI is the big one right now—and perhaps the biggest one of all time.

    Over the long run, both sides will win, because we need the best of what both bring to the world’s big round table. In the past, this has happened in countless markets, countries, polities, societies, and other contexts. In tech it happened with the conflict between PCs and mainframes, between the open and public Internet and closed private networks, and between open operating systems and closed ones. None of those conflicts have final outcomes, but they have arrived at stable and cooperative states. They arrived at those states because everyone recognized that people can do more with their own tools than those big entities can do on theirs, or by using people as extracted resources.

    This conflict is perhaps best represented by the ironies that have been at work within OpenAI and Sam Altman, its prior and current CEO. We can also see it with sparkling clarity in something amazing that Sam said near the close of his keynote at the first OpenAI Developer Conference, in November of last year:

    Let’s unpack that.

    Individual empowerment means exactly what it says. Agency is the ability to take action or to choose what action to take (Cambridge Dictionary). Scale means growth toward ubiquity.

    You have individual empowerment and agency when you ride your bike, drive your car, mow your lawn, or button your shirt. You also have it when you work on your laptop, phone, and other smart rectangles. Scale means your power and agency will grow to the max while the same will happen for nearly everyone else. Scale for you alone won’t cut it. When most of us have it, that’s scale.

    Technologies extend us. They enlarge our capacities in the world. Intelligence is one of those capacities. So is memory. Your rectangles help a lot with both. That’s why those have already scaled to ubiquity, or close enough.

    AI will do the same, but only if it’s personal. Not if it’s just “personalized.”

    As Jamie Smith made clear here a year ago, “your personal AI isn’t really personal.”

    ChatGPT4 doesn’t know that yet. I can tell because I just asked:

    Me:
    What is “personal AI”?

    ChatGPT4:
    “Personal AI” refers to artificial intelligence systems that are designed to serve individual users through personalized interactions and services. These AI systems learn from the specific preferences, behaviors, and needs of their users to offer more tailored assistance, recommendations, or information. The goal of personal AI is to enhance user experience by adapting to unique personal profiles.

    There are various applications of personal AI, including:

    1. Virtual Assistants: Like Siri, Google Assistant, and others, these AIs help with daily tasks such as scheduling, reminders, searching information, and controlling smart home devices, all customized to the user’s habits and preferences.
    2. Recommendation Systems: Used in streaming services like Netflix or shopping platforms like Amazon, these AI systems analyze your past behavior to recommend movies, shows, or products you might like.
    3. Healthcare: Personal AI can track individual health metrics and provide personalized advice, monitor conditions, or even alert medical professionals if needed.
    4. Learning and Development: In education, personal AI can adapt learning materials to the pace and learning style of the student, offering a customized educational experience.
    5. Gaming: AI in gaming can adapt to provide a challenge tailored to the player’s skill level or alter the game narrative based on choices made by the player.

    The development of personal AI focuses on enhancing user engagement by creating more intuitive and responsive interactions that feel uniquely suited to each individual.

    For a second opinion, I asked Google’s Gemini the same question.

    Me:
    What is “personal AI”?

    Gemini:
    Personal AI refers to artificial intelligence designed to assist and interact with a specific individual. It’s essentially your own AI companion that learns about you and tailors its services to your needs. Here are some key aspects of personal AI:

    1. Personalization: Unlike general-purpose AI assistants, personal AI focuses on you. It learns from your interactions, preferences, and habits to become more helpful over time.
    2. Focus on Memory and Knowledge: Some personal AI systems aim to become an extension of your mind. They can store your memories, knowledge, and experiences, making them readily accessible and even usable to help with tasks like writing or communication.
    3. Varied Applications: Personal AI can have a wide range of applications. Imagine an AI that helps you:
        • Organize your life: Manage schedules, appointments, and reminders.
        • Boost productivity: Assist with tasks, research, and communication.
        • Enhance creativity: Help you brainstorm ideas or generate creative content.
        • Improve learning: Personalize learning experiences and provide targeted information.

    Privacy Concerns: Storing personal data on an AI raises concerns about privacy and security.

    Overall, personal AI holds promise for a future where technology seamlessly integrates with our lives, offering personalized assistance and support.

    To both companies, personal AI is a personalized service—from them. It’s not something you own and control. It’s not about individual empowerment and agency. It doesn’t extend you. It’s a hired prosthesis: an extension of them rather than of you.

    Let’s name the category: AIaaS—AI as a Service.

    But hats off. OpenAI and Gemini do amazing jobs. So do lots of other AI services by wannabe giants. I use AI every day to improve what I write, to generate and fix images, to answer questions that old-fashioned search engines can’t answer or suck at. I even pay extra for some of it (such as ChatGPT4 and Adobe’s Creative Suite). And it seems they all get better, all the time, at everything. Won’t personalization be good enough, eventually?

    No. Because they aren’t you. They also aren’t yours, so they can’t extend you. They can seem to. But they don’t. They also don’t have access to most of your private life. Nor should they.

    But your private life could use some AI help. For example—

    • Schedules, including on your calendars, past and future
    • Health data, including all your medical reports, prescriptions, appointments, insurance information, past and present providers, plus what your watch, phone, and other devices record about you
    • Financial records, including bills, receipts, taxes, and anything called an account that involves money
    • Travel, including all the movements your phone (and phone company), watch, and car record about where you go and where you’ve been
    • Work—past and present, including whatever HR systems know or knew about you
    • Contacts—all the people, businesses, and other entities you know
    • Business relationships, with brokers, retailers, service providers, whatever
    • Subscriptions, including all those “just $1 for the first four weeks” offers you’ve accepted, plus other forms of screwage that are stock-in-trade for companies selling subscription systems to businesses.
    • Property, including all the stuff on your shelves, floors, closets, garages, and storage spaces—plus your stocks and real estate.

    It’s not easy to visualize what a personal AI might do for those, but let’s try. Here’s how Microsoft’s Copilot (or whatever it’s called this week) did it for me before I got rid of all its misspellings and added my own hunks of text:

    All that stuff is data. But most of it is scattered between apps and clouds belonging to Apple, Google, Microsoft, Amazon, Meta, phone companies, cable companies, car makers, health care systems, insurance companies, banks, credit card companies, retailers, and other systems that are not yours. And most of them also think that data is theirs and not yours.

    To collect and manage all that stuff, you need tools that don’t yet exist: tools that are yours and not theirs. We could hardly begin to imagine those tools before AI came along. Now we can.

    For example, you should be able to take a picture of the books on your shelves and have a complete record of what those books are and where you got them. You’ll know where you got them because you have a complete history of what you bought, where, and from whom. You should be able to point your camera into your closets, at the rugs on your floors, at your furniture, at the VIN visible under your windshield, at your appliances and plumbing fixtures, and have your AI tell you what those are, or at least make far more educated guesses than you can make on your own.

    Yes, your AI should be able to tap into external databases and AI systems for help, but without divulging identity information or other private data. Those services should be dependent variables, not independent ones. For full individual empowerment and agency, you need to be independent. So does everyone else with personal AI.

    Now imagine having a scanner that you can feed every bill, every receipt, every subscription renewal notice, and have AI software that tells you what’s what with each of them, and sorts records into the places they belong.

    Ever notice that the Amazon line items on your credit card bill not only aren’t itemized, but don’t match Amazon’s online record of what you ordered? Your personal AI can sort that out. It can help say which are business and personal expenses, which are suspicious in some way, what doesn’t add up, and much more.

    Your personal AI should be able to answer questions like, How many times have I had lunch at this place? Who was I with? When was it we drove to see so-and-so in Wisconsin? What route did we take? What was that one car we rented that we actually liked?

    Way back in 1995, when our family first got on the Internet over dial-up, using the first graphical browsers on our PC, and e-commerce began to take off with Amazon, eBay, and other online retailers, my wife asked an essential question: Why can’t I have my own shopping cart that I take from site to site?

    Twenty-nine years later, we still don’t have the answer, because every retailer wants you to use its own. And we’re stuck in that system. It’s the same system that has us separately consenting to what sites ironically call “your privacy choices.” And aren’t.

    There are countless nice things we can’t have in the digital world today because we aren’t people there. We are accounts. And we are reduced to accounts by every entity that requires a login and password.

    This system is a legacy of client-server, a euphemism for slave-master. We might also call it calf-cow, because that’s how we relate to businesses with which we have accounts. And that model is leveraged on the Web like this:

    We go to sites for the milk of content and free cookies, whether we want them or not. We are also just “users.”

    In the client-server world, servers get scale. Clients have no more scale than what each account—each cow—separately allows. Sure, users get lots of benefits, but scale across many cows is not one of them. And no, “login with Google” and “login with Facebook” are just passes that let calves of ruling cows wander into vassal pastures.

    For individual empowerment and scale to happen, we need to be self-sovereign and independent. Personal AI can give that to us. It can do that by solving problems such as the ones listed above, and by working as agents that represent us as human beings—rather than mere users—when we engage with Big Tech’s cows.

    This will be a fight at first, because the cows think they run all of nature and not just their own farms. And $trillions are being invested in the same old cattle industry, with AI painted all over the new barns. Comparatively speaking, close to nothing is going toward giving independent and self-sovereign individuals the kind of power and scale Sam Altman says he wants to give us but can’t because he’s on the big cow side of this thing.

    So where do we start?

    First, with open source code and open standards. We have some already. Llama 3, from Meta AI, is “your own intelligent assistant,” and positions Meta as a more open and user-friendly cow than OpenAI. Meta is still on the top-down Big Tech side of the war we’re in. But hell, we can use what they’ve got. So let’s play with it.

    Here on the ground there are all these (with quotage lifted from their sites or reviews such as this one)—

    • MindsDB: “an open-source AutoML framework”
    • Alt.ai: “It’s an A.I. which aims to digitize users’ intentions and place it on the cloud to let our clones deal with all digital operations.”
    • Keras: “a multi-backend deep learning framework, with support for JAX, TensorFlow, and PyTorch”
    • PyTorch: “Python package that provides two high-level features: Tensor computation (like NumPy) with strong GPU acceleration, and Deep neural networks built on a tape-based autograd system”
    • TensorFlow: “open-source framework for machine learning”
    • CoreNet: a deep neural network toolkit for small and large-scale models, from Apple
    • Haystack: an “open source Python framework by deepset for building custom apps with large language models (LLMs).”
    • Image Super-Resolution (ISR): “(an) open source tool employs a machine learning model that you can train to guess at the details in a low-resolution image”
    • Blender: “A rich interface and numerous plugins make it possible to create complex motion graphics or cinematic vistas”
    • DeepFaceLab: “open source deepfake technology that runs on Python”
    • tflearn: “an advanced deep learning library”
    • PyTensor: “a Python library that allows you to define, optimize/rewrite, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.” (Was Theano)
    • LM Studio: “Discover, download, and run local LLMs”
    • HuggingFace Transformers: “a popular open-source library for Natural Language Processing (NLP) tasks”
    • Fast.ai: “a library for working with deep learning tasks”
    • OpenCV: “a popular Computer Vision and Image Processing library developed by Intel”
    • Detectron2: “a next-generation library that provides advanced detection and segmentation algorithm” and “a PyTorch-based modular object detection library”
    • Ivy.ai: “an open-source deep learning library in Python focusing on research and development”
    • Morpheus: “Decentralized AI—The first peer-to-peer network for general purpose AI, powered by MOR”
    • OpenAssistant: “a project aimed at giving everyone access to a great chat-based large language model”
    • PaddleNLP: “a popular open source NLP library that you can use to glean search sentiment and flag important entities”
    • Delphi.AI: “Clone yourself. Build the digital version of you to scale your expertise and availability, infinitely.”
    • Fauxpilot: “This is an attempt to build a locally hosted alternative to GitHub Copilot. It uses the SalesForce CodeGen models inside NVIDIA’s Triton Inference Server with the FasterTransformer backend.”
    • Ray: “An open source framework to build and scale your ML and Python applications easily”
    • Solid: “Solid is a specification that lets individuals and groups store their data securely in decentralized data stores called Pods. Pods are like secure web servers for data. When data is stored in a Pod, its owners control which people and applications can access it.”
    • Sagen.ai: “Your very own AI Personal Assistant to manage your digital life.”
    • YOLOv7: “is one of the fastest and most accurate open source object detection tools. Just provide the tool with a collection of images full of objects and see what happens next.”

    —and lots of others that readers can tell me about. Do that and I will add links later. This is a work in progress.

    Below all of those we still need something Linux-like that will become the open base on which lots of other stuff runs. The closest I’ve seen so far is pAI-OS, by Kwaai.ai, a nonprofit I now serve as Chief Intention Officer. I got recruited by Reza Rassool, Kwaai’s founder and chair, because he believes personal AI is required to make The Intention Economy finally happen. So that was a hard offer to refuse. Kwaai also has a large, growing, and active community, which I believe is necessary, cool, and very encouraging.

    As with most (maybe all) of the projects listed above, Kwaai is a grass-roots effort by human beings on the natural, human, and horizontal side of a battle with giants who would rather give us personalized AI than have us meet them in a middle to which we will bring personal AI powers of our own. In the long run, we will meet in that middle, because personal AI will be better for everyone than personalized AI alone.

    Watch us prove it. Better yet, join the effort.


    *I am indebted to Lavonne Reimer for introducing and co-thinking the horizontal vs. vertical frame, and look forward eagerly to her own writings and lecturings on the topic.

  • Burning to Write. And Vice Versa.

    I shot this photo in July 2007, while sitting with friends in that same booth at the Eagle and Child in Oxford.

    Among all artists, writers alone suffer the illusion that the world needs to hear what they have to say.

    I thought that line, or something like it, came from Rollo May, probably in The Courage to Create. But a search within that book says no. ChatGPT and Gemini both tell me May didn’t say it anywhere.

    Still, I think it’s true that writers write because they have to. They can’t not write. It’s what they do, how they live. And I’m one of them.

    The need to write is for me a vivid fact this morning, a few days past a catheter ablation on my heart. There is so much I need to say, because I suffer the illusion that the world needs to hear it. Does it?

    I am sure C.S. Lewis and J.R.R. Tolkien had the same affliction. I am also sure that the world is better for having read both of them, even if the world could have lived without their writing.

    As for time, I have had twelve more years to write than Lewis got, so far, and five fewer than Tolkien.

    Time to say what?

    I want to say that personal AI will do far more for all of us than what we will ever get from AI as a corporate service. And to say it better than I just did.

    I want to say that we will do better with rulers who care about people than with rulers who merely rule. And to say that better than I just did.

    I want to complete the work of John McPhee by reporting as best I can what has happened to the great characters that anchored every one of his essays and books. But that project is not on the back burner. It’s in the fridge, where I’ve kept it for decades (while continuing to read the entire McPhee oeuvre, much of it repeatedly).

    Speaking of burning, I am impelled by Dylan Thomas, who wrote “Do not go gentle into that good night” before dying at just thirty-nine. The poem was for his father:

    Old age should burn and rave at close of day,
    Rage, rage against the dying of the light.

    And so I do.

  • World Wide Whiteboard

    We thought the Web was a library. Turns out it’s not. (Image via ChatGPT 4o.)

    Before there were search engines, there were directories. The biggest and best-known was Yahoo. On the first graphical browser (Mosaic), it looked like this:

    The directory idea made sense, because the Web is laid out like the directory in your computer. There is a “domain” with a “location” or a “site,” containing something after the last / in a path of /something/something/something. Geeks call these directories too, and the string of somethings a path. Directories in computing also follow an outline format: heads, subheads, and subheads of subheads.
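    Here is a small illustration of that layout, using Python’s standard library on a made-up URL.

        from urllib.parse import urlparse

        parts = urlparse("https://example.com/essays/web/whiteboard.html")
        print(parts.netloc)  # 'example.com' -- the domain
        print(parts.path)    # '/essays/web/whiteboard.html' -- the path
        print(parts.path.strip("/").split("/"))
        # ['essays', 'web', 'whiteboard.html'] -- nested like directories,
        # with heads, subheads, and a file at the end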

    While this design is boundlessly flexible, it also suggests durability, if not permanence, because it’s good to find stuff where it rightly goes and to find it in the same place over and over again.

    That was what Yahoo assumed in the early days of the Web—as did everyone who bought a domain name. I’ve had searls.com since 1995. Dave Winer (father of outlining and progenitor of much else) has had Scripting.com for even longer (and has a lot more in that domain).

    But we don’t own domain names. We rent them. And the World Wide Web isn’t a library. It’s a whiteboard with stuff written on it. Some of that stuff is located on directory paths. A lot more is coughed up by database systems on an as-needed basis.

    The Yahoo directory failed. In its place search engines appeared. Search engines don’t catalog the Web like a library might. They index it. That means they send crawlers down the Web’s directory paths, recording everything they see into a searchable index. I explain here how that works and where this went:

    The Web is a haystack.

    This isn’t what Tim Berners-Lee had in mind when he invented the Web. Nor is it what Jerry Yang and David Filo had in mind when they invented Jerry and David’s Guide to the World Wide Web, which later became Yahoo. Jerry and David’s model for the Web was a library, and Yahoo was to be the first catalog for it. This made sense, given the prevailing conceptual frames for the Web at the time: real estate and publishing.

    Both of those are still with us today. We frame the Web as real estate when we speak of “sites” with “locations” in “domains” with “addresses” you can “visit” and “browse”—then shift to publishing when we speak of “files” and “pages,” that we “author,” “edit,” “post,” “publish,” “syndicate” and store in “folders” within a “directory.” Both frames suggest durability if not permanence. Again, kind of like a library.

    But once we added personal movement (“surf,” “browse”) and a vehicle for it (the browser), the Web became a World Wide Free-for-all. Literally. Anyone could publish, change and remove whatever they pleased, whenever they pleased. The same went for organizations of every kind, all over the world. And everyone with a browser could find their way to and through all of those spaces and places, and enjoy whatever “content” publishers chose to put there. Thus the Web grew into billions of sites, pages, images, databases, videos, and other stuff, with most of it changing constantly.

    The result was a heaving heap of fuck-all.*
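    As a toy sketch of what indexing means here, under obviously simplified assumptions (two hypothetical pages, whitespace tokenizing, no ranking):

        from collections import defaultdict

        pages = {  # stand-ins for fetched pages
            "https://example.com/a": "the web is a haystack",
            "https://example.com/b": "the web was a library",
        }

        index = defaultdict(set)  # word -> set of URLs: an inverted index
        for url, text in pages.items():
            for word in text.split():
                index[word].add(url)

        def search(word: str):
            return sorted(index.get(word, set()))

        print(search("haystack"))  # ['https://example.com/a']
        print(search("library"))   # ['https://example.com/b']

    A crawler at Google’s scale does the same thing across billions of constantly changing pages, which is why what gets wiped off the whiteboard also falls out of the index.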

    Back in 2005, I wrote in Linux Journal about a split between the “static” Web that was like a library (with its “locations,” “sites,” and “domains” you could “visit” and “browse”), and the “live” Web of blogs and posts. Then social media came along, and the live branch of the Web outgrew the static Web’s trunk.

    Last week came news that a leak revealed lots of interesting poop about how Google actually ranks search results. Here are two things I don’t need those leaked documents to tell me:

    1. Google favors the present over the past, the current over the archival.
    2. Google no longer indexes or ranks very old Web pages.

    I speak from experience here, because I have some old pages that are on the Web but don’t seem to be indexed, meaning searches don’t find them. I also have Easter eggs on a couple of those pages: words that exist in no language but made those pages easy to find when I did keyword searches for them. Now I get “No results found for _____.” (I won’t reveal the word because I want to keep testing Google.)

    Countless publications have also come and gone on the Web without leaving a trace. Upside was a gigantic publication from the Nineties through the dotcom boom. Not a trace of it remains. Far as I know, nothing remains of Fast Company‘s early issues.

    But hey, God bless the Internet Archive. Here’s a piece I wrote for PC Magazine in December 1982 about a PC application that taught card counting in blackjack:

    As the evanescence of “content” increases, so does the importance of archives.

    So maybe stop reading here and start reading here. We have a lot of work to do.

  • Archives as Commons

    Tenth in the News Commons series.

    The Santa Barbara library, viewed from the county courthouse. Is this where the dead local paper’s archives will go? How about future archives of all the local news organs?

    The Santa Barbara News-Press was born in 1868 and died in 2023 at age 155. Its glory years ran from 1932 until 2000, when the New York Times sold it to Wendy McCaw, who rode it to hell.

    That ride began with the Santa Barbara News Press Controversy in 2006 and ended when Ampersand, the company McCaw created to hold the paper’s bag of assets (which did not include its landmark building downtown, which McCaw kept), filed for Chapter 7 bankruptcy in late July of last year. Here are stories about the death of the paper in three local news journals that have done a great job of taking up the slack left when the News-Press began to collapse, plus one in the LA Times:

    I followed those with this in We Need Deep News:

    From what I’ve read so far (and I’d love to be wrong) none of those news reports touch on the subject of the News-Press’ archives, which conceivably reach back across the century and a half it was published. There can’t be a better first draft of history for Santa Barbara than that one. If it’s gone, the loss is incalculable. (August 18 2023)

    Last month brought bad news about that:

    But then, thanks to William Belfiore’s appeal in that last piece, we learned this:

    The only mention of archives was in the closing sentences of that piece:

    The purchase of the website included the Santa Barbara News-Press trademark, which would be important to the groups looking at the physical archive of back issues, photographs, and clippings by topic. Romo, who was once a paper boy for the daily, acknowledged that his group was supportive of the archive remaining local, too.

    I don’t know what that means, and I haven’t checked. But I am sure that the archives ought to be managed by the community as a common pool resource.

    As it happens, my wife and I are visiting scholars at the Ostrom Workshop at Indiana University, which is concerned with this kind of thing, because its namesake, Elinor Ostrom, won a Nobel Prize in Economics for her work on how commons are self-governed. In her landmark book, Governing the Commons: The Evolution of Institutions for Collective Action, she lists eight principles for managing a commons, which are summarized here:

    1. Define clear group boundaries.
    2. Match rules governing use of common goods to local needs and conditions.
    3. Ensure that those affected by the rules can participate in modifying the rules.
    4. Make sure the rule-making rights of community members are respected by outside authorities.
    5. Develop a system, carried out by community members, for monitoring members’ behavior.
    6. Use graduated sanctions for rule violators.
    7. Provide accessible, low-cost means for dispute resolution.
    8. Build responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system.

    Journalists, especially those who report news, are not herding animals. They tend to be competitive and territorial by both nature and purpose. So the collection of news entities I wrote about in We Need Wide News and We Need Whole News will almost certainly not cohere into a commons of the kind Lin (her nickname) Ostrom addresses in that list.

    But they should cohere around archives—not only because that’s the right thing to do, but because they need those archives. We all do.

    So I hope Santa Barbara’s many journals, journalists, friends, supporters, and interested local institutions get together around this challenge. Build a commons around those archives, whatever and wherever they happen to be.

    Meanwhile here in Bloomington, my other hometown, we are pushing forward with The Online Local Chronicle that Dave Askins wrote about in the previous installment in this series. We might call that a commons interest here.


  • This Thing is Bigger Than Journalism

    Journalism as we knew it is washing away. But the story is bigger than journalism alone, and bigger than a story alone can tell. (Image borrowed from the brilliant Despair.com.)

    We who care about journalism are asked to join the Save Journalism Project and its fight against Big Tech. Their pitch begins,

    and adds,

    On the first point, we should note that journalists have been working for magazines, broadcasters, newsletters, and themselves for many decades. So journalism isn’t just about newspapers. Also, because so many journalists have long made livings in those other media, the loss of work is far greater than the 2,400 jobs gone from newspapers. It’s truly massive. I don’t know any field where the loss of paying jobs is larger on a percentage basis. Not taxi driving, not hospitality, not retail, not manufacturing… not anything I can think of. (Well, maybe nuns. I don’t see many of those these days.)

    We should also respect the simple fact that now there is more journalism than ever: in blogs, social media, podcasting, and other places. Most of those kinds of journalism don’t pay, but that doesn’t disqualify the work from the label. Hell, I’m committing journalism here and this doesn’t pay.

    “The story of big tech’s threat to journalism” (what the Project wants us all to tell) is also something of a red herring because it distracts our attention from causes much bigger than Big Tech.

    Every new technology “works us over completely,” Marshall McLuhan says (in The Medium is the Massage). And no new medium or technology has ever worked us over more than the digital kind. The change began with digital tech and integrated circuits and then went absolute with the Internet. Together, digital technologies and the Internet have radically changed our species, our civilization, and our planet.

    Not long ago, in a conversation about this with Joi Ito, I asked him how big he thought the digital transformation was. Bigger than broadcast? Print? Writing? Speech? Stone tools?

    “No,” he replied. “It’s the biggest thing since oxygenation.” In case you don’t remember, that happened about two and a half billion years ago. (Joi also writes about it here.)

    So, while journalism matters enormously, it’s just one casualty of digitalization. And, let’s face it, a beneficiary as well. Either way, we need to understand the whole picture, which is about a lot more than what journalism sees happening in the mirror.

    Here’s one outfit working on that bigger picture. I’m involved with it.

    I also don’t expect most journalists to take much interest in the subject, because it’s too big, and it doesn’t make full sense as a story, which is journalism’s stock in trade. (I explain a bit about journalism’s “story problem” in this TEDx talk.)

    Still, some journalists are on the case, including me. Love to have others join in. But please don’t bother if you think Big Tech is alone to blame. Because the story is bigger than that, and far more than a story.


    I just copied and pasted this post from here in Medium, where I posted it in July 2019. It expands on a post now archived here. It’s kinda sad that not much has changed over all that time.

  • Aviation vs. Eclipse

    Contrails in the stratosphere, smearing sideways into broad cloud cover. This view is toward the place in the sky where a full solar eclipse will happen a few hours later.

    Contrails form behind jet aircraft flying through the stratosphere. Since high-altitude aviation is happening all around the earth more or less constantly, planes are painting the sky everywhere. (Here is one time-lapse. And another. And one of my own.)

    Many contrails don’t last, of course, but many do, and together they account for much of the cloud cover we see every day. The altocumulus, altostratus, and cirrus clouds that contrails produce are now officially recognized as homogenitus and homomutatus, which are anthropogenic: owing to human activity.

    And today, Eclipse Day, Delta is offering to fly you along the path of totality. Others too? I don’t know. I’m taking a few moments to write this before we walk up to our hilltop cemetery to watch the eclipse for over four minutes, thanks to our lucky location near the very center of the path of totality.

    I’m curious to see and hear contrail reports from others now awaiting their few minutes out of the sun.

    1:14pm—The moon’s shadow made landfall in Mexico a short time ago. Here in Bloomington, the sky is well-painted by contrails. Mostly they look like high-altitude haze, but believe me: if it weren’t for commercial aviation, the sky would be solid blue. Today’s contrails are quickly smeared sideways, losing their form but not their color.

    5:00pm—Contrails were plentiful, and a spread-out one did slide in front of the sun and the moon…

    but it was still a spectacular sight:


  • Talking Artificial Intelligence with the Real Don Norman

    Artificial is AI’s first name. And Intelligence is a quality, not a quantity. You can’t measure it with a dipstick, a ruler, or an IQ test. If you could, you’d get the same result every time.*

    But being artificial doesn’t mean AI isn’t dangerous, fun, or both. It is, and will be, what we make of it.

    That’s what Don Norman says, and he’s been publishing in AI journals since 1973. His laboratory produced the first multi-layer neural nets in the 1980s. He wrote Things that Make us Smart in 1993.

    In the opinion of myself and countless others, Don is also the foremost authority on design—of anything and everything. For more on that, check out Don’s Web page, his Wikipedia page, and his books. Or, if you just want to sample some of his thoughts on AI, watch this.

    Or you can skip all that and come to the good stuff: joining us in a talk with Don in the final salon of this semester on the topic of Artificial +/vs. Human Intelligence. It’s next Tuesday, April 9, at Noon Eastern time. (That’s less than 24 hours after the shadow of the Moon passes over the Indiana University campus. Yes, totality will be local here.)

    Also, this won’t be a lecture or a presentation. It will be a lively discussion because Don is especially good at that.

    It’s also free and online, but you have to register first. Do that here.


    *For what it’s worth, my own known IQ test scores have an 80-point range. I’ve written about that, and the myth of “IQ” here, here, here, here, and I suppose in too many other places.

  • Fishing For Free TV Signals

    By expert acclaim, this is the best antenna for receiving hard-to-get over-the-air (OTA) TV signals.

    I think I will be the last person in Bloomington to try getting free over-the-air TV from what’s left of all the major networks. But that’s just my style, so roll with me while I explain how I hope to do it with the antenna above. I’ll need it, because here is what the Search Map at RabbitEars.info says we might get at our location:

    We live next door right now, and the top station above, WTIU from Indiana University (our PBS affiliate), comes from a tower you can walk to from here. We can get that signal by using a straightened paper clip for an antenna. (You jam the clip into the center hole of the coaxial connector in the back of the TV.) Even a real indoor antenna connected to the same jack gets nothing else, not even the two stations above with “Fair” signal strength.

    But this Televes antenna might do the job, because we’re on the slope of a hill that faces the Indianapolis stations carrying CBS (WTTV/4 on 27), ABC (WRTV/6 on 25), NBC (WTHR/13 on 13), and Fox (WXIN/59 on 32)*. These range from 27 to 54 miles away, in roughly the same direction. VHF and UHF signals tend to gain strength when they hit the face of a hill, much as surf builds when it approaches a sandbar or a shore. Also, the Televes DAT BOSS antenna gets great reviews:

    I was going to put it in our new attic before the drywall goes up. However, the attic space is low and full of close cross-braces. Worse, the antenna is not small, and it would be complicated to fit into a space that’s a web of short 2x4s. Dig:

    So it will go on a pole in the backyard and feed a coaxial line that will tunnel through conduit under the yard and inside to the new living room.

    But I would like to test it first, preferably with a tuner gizmo I can plug into my laptop. I had one of those for years: the Elgato EyeTV Hybrid TV Tuner stick, which looked like a fat thumb drive, with USB-A at one end and a coax connector for an antenna at the other. It was sold in the ’00s, picked up both analog and digital TV on every North American channel (the Digital Transition was happening then), and came with good software that ran on Macs and on operating systems that have long been abandoned. As far as I can tell, there are no replacements that run on current hardware or operating systems, other than this one sold in Europe, which appears to work only on TV bands over there. But I could be wrong. If anybody knows of a gizmo/software combo I can use, please tell me. My only other option is to buy or find a cheap TV and try that out. Any advice is welcome. Thanks!


    *After the digital transition in 2009, and again with the “repack” after 2016, most TV stations moved onto channels other than their original ones, using less spectrum overall. All the TV channels above 36 were auctioned off, first in 2008 and again in 2018. Most buyers were cellular and other short-range wireless carriers, which have been repurposing the old TV spectrum for 5G and other modern uses. The only station in Indianapolis that didn’t move its channel position was WTHR/13. That one is listed in the chart above as one of the “bad” signals for this location. The Televes antenna is designed specifically for “high band” VHF (channels 7-13) and the remaining UHF (channels 14 to 36). It also filters out any 5G signals the antenna might pick up on what used to be the higher UHF channels. By the way, the old “low band” VHF channels (2 to 6) are still in use in some places, but by very few TV stations, so it’s not worth it for Televes to design an antenna to pick those channels up. Such an antenna would also be a lot bigger, because its low-band elements would have to be much longer.
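    If all that channel-number shuffling is hard to follow, the arithmetic underneath is simple: each RF channel is a 6 MHz slice of a band, and the bands stay put even as station numbers move around. Here is a minimal sketch in Python; the band boundaries are the standard US ones, and the station-to-RF pairings are just the ones mentioned in this post, not a current channel guide:

    # Minimal sketch: map US RF (physical) channel numbers to broadcast bands.
    # RF channels are not the "virtual" numbers stations brand themselves with;
    # WTTV brands as channel 4 but transmits on RF 27, for example.

    def band_for_rf_channel(ch: int) -> str:
        """Return the broadcast band for a US RF channel number."""
        if 2 <= ch <= 6:
            return "low-VHF, 54-88 MHz (mostly vacated; needs very long elements)"
        if 7 <= ch <= 13:
            return "high-VHF, 174-216 MHz"
        if 14 <= ch <= 36:
            return "UHF, 470-608 MHz (what remains for TV after the repack)"
        if ch == 37:
            return "reserved for radio astronomy"
        return "former TV spectrum, auctioned off to wireless carriers"

    # The Indianapolis stations mentioned above, by RF channel:
    for station, ch in [("WTTV", 27), ("WRTV", 25), ("WTHR", 13), ("WXIN", 32)]:
        print(f"{station} (RF {ch}): {band_for_rf_channel(ch)}")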

  • Feed Time

    I asked ChatGPT to give me “people eating blogs” and got this after it suggested some details.

    Two things worth blogging about that happened this morning.

    One was getting down and dirty trying to make DALL-E 3 work. That turned into giving up trying to find DALL-E (in any version) on the open Web and biting the $20/month bullet for a Plus account with ChatGPT, which for some reason maintains its DALL-E 3 Web page while having “Try in ChatGPT↗︎” on that page link to the ChatGPT home page rather than a DALL-E one. I gather that the free version of DALL-E is now the one you get at Microsoft’s Copilot | Designer, while the direct form of DALL-E is what you get when you prompt ChatGPT (now GPT-4 for Plus customers… or so I gather) to give you an image that credits nothing to DALL-E.

    The other thing was getting some great help from Dave Winer in putting the new Feedroll category of my FeedLand feeds on this blog, in a style similar to old-fashioned blogrolls (such as the one here). You’ll find it in the right column of this blog now. One difference from old blogrolls is that the feedroll is live. Very cool. I’m gradually expanding it.
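    Under the hood, a live feedroll is just periodic feed fetching and fresh rendering, where an old-fashioned blogroll was a static list of links. Here is a minimal sketch of the idea in Python, assuming the feedparser package; the two feed URLs are examples, not my actual feedroll:

    # Minimal sketch of a "live" feedroll: fetch each feed, show its newest item.
    # Requires: pip install feedparser
    import feedparser

    FEEDS = [
        "http://scripting.com/rss.xml",  # Dave Winer's blog
        "https://doc.searls.com/feed/",  # example WordPress feed
    ]

    for url in FEEDS:
        parsed = feedparser.parse(url)
        name = parsed.feed.get("title", url)
        if parsed.entries:
            newest = parsed.entries[0]
            print(f"{name}: {newest.get('title', '(untitled)')} -> {newest.get('link', '')}")
        else:
            print(f"{name}: no entries found")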

    Meanwhile, after failing to get ChatGPT or Copilot | Designer to give me the image I needed on another topic (which I’ll visit here later), I prompted them to give me an image that might speak to a feedroll of blogs. ChatGPT gave me the one above, not in response to “people eating blogs” (my first attempt), but instead to “People eating phone, mobile and computer screens of type.” Copilot | Designer gave me these:

    Redraw your own inconclusions.

  • Death is a Feature

    When Parisians got tired of cemeteries during the French Revolution, they conscripted priests to relocate the bones of more than six million deceased forebears to empty limestone quarries below the city: a hundred miles of rooms and corridors now called The Catacombes. It was from those quarries that many of the city’s famous structures above—Notre Dame et al.—were built in prior centuries, using a volume of extracted rock rivaling that of Egypt’s Great Pyramids. That rock, like the bones of those who extracted it, was once alive. In the shot above, shadows of future fossils (including moi) shoot the dead with their cell phones.

    Elon Musk wants to colonize Mars.

    This is a very human thing to want. But before we start following his lead, we might want to ask whether death awaits us there.

    Not our deaths. Anything’s. What died there to make life possible for what succeeds it?

    From what we can tell so far, the answer is nothing.

    To explain why life needs death, answer this: what do plastic, wood, limestone, paint, travertine, marble, asphalt, oil, coal, stalactites, peat, stalagmites, cotton, wool, chert, cement, nearly all food, all gas, and most electric services have in common?

    They are all products of death. They are remains of living things or made from them.

    Consider this fact: about a quarter of all the world’s sedimentary rock is limestone, dolomite and other carbonates: remains of beings that were once alive. The Dolomites of Italy, the Rock of Gibraltar, the summit of Mt. Everest, all products of death.

    Even the iron we mine has a biological source. Here’s how John McPhee explains it in his Pulitzer-winning Annals of the Former World:

    Although life had begun in the form of anaerobic bacteria early in the Archean Eon, photosynthetic bacteria did not appear until the middle Archean and were not abundant until the start of the Proterozoic. The bacteria emitted oxygen. The atmosphere changed. The oceans changed. The oceans had been rich in dissolved ferrous iron, in large part put into the seas by extruding lavas of two billion years. Now with the added oxygen the iron became ferric, insoluble, and dense. Precipitating out, it sank to the bottom as ferric sludge, where it joined the lime muds and silica muds and other seafloor sediments to form, worldwide, the banded-iron formations that were destined to become rivets, motorcars and cannons. This is the iron of the Mesabi Range, the Australian iron of the Hamersley Basin, the iron of Michigan, Wisconsin, Brazil. More than ninety percent of the iron ever mined in the world has come from Precambrian banded-iron formations. Their ages date broadly from twenty-five hundred to two thousand million years before the present. The transition that produced them — from a reducing to an oxidizing atmosphere and the associated radical change in the chemistry of the oceans — would be unique. It would never repeat itself. The earth would not go through that experience twice.

    Death produces building and burning materials in an abundance that seems limitless, at least from the standpoint of humans in the here and now. But every here and now ends. Realizing that is a vestigial feature of human sensibility.

    Take, for example, The World Has Plenty of Oil, which appeared in The Wall Street Journal ten years ago. In it, Nansen G. Saleri writes, “As a matter of context, the globe has consumed only one out of a grand total of 12 to 16 trillion barrels underground.” He concludes,

    The world is not running out of oil any time soon. A gradual transitioning on the global scale away from a fossil-based energy system may in fact happen during the 21st century. The root causes, however, will most likely have less to do with lack of supplies and far more with superior alternatives. The overused observation that “the Stone Age did not end due to a lack of stones” may in fact find its match.

    The solutions to global energy needs require an intelligent integration of environmental, geopolitical and technical perspectives each with its own subsets of complexity. On one of these — the oil supply component — the news is positive. Sufficient liquid crude supplies do exist to sustain production rates at or near 100 million barrels per day almost to the end of this century.

    Technology matters. The benefits of scientific advancement observable in the production of better mobile phones, TVs and life-extending pharmaceuticals will not, somehow, bypass the extraction of usable oil resources. To argue otherwise distracts from a focused debate on what the correct energy-policy priorities should be, both for the United States and the world community at large.

    In the long view of a planet that can’t replace any of that shit, this is the rationalization of a parasite. That this parasite can move on to consume other irreplaceable substances it calls “resources” does not make its actions any less parasitic.

    Or, more precisely, saprophytic, since a saprophyte is “an organism which gets its energy from dead and decaying organic matter.”

    Moving on to coal: the 0.8 trillion tons of it in Wyoming’s Powder River Basin now contributes 40% of the fuel used in coal-fired power plants in the U.S. Here’s the biggest coal mine in the basin, called Black Thunder, as it looked to my camera in 2009:

    About half the nation’s electricity is produced by coal-fired plants, the largest of which can eat the length of a 1.5-mile-long coal train in just 8 hours. In Uncommon Carriers, McPhee says Powder River coal at current rates will last about 200 years.

    Then what? Nansen Saleri thinks we’re resourceful enough to get along with other energy sources after we’re done with the irreplaceable kind.

    I doubt it.

    Wind, tide, and solar are unlikely to fuel aviation, though I suppose fresh biofuel might. Still, at some point, we must take a long view, or join our evolutionary ancestors in the fossil record faster than we might otherwise like.

    As I fly in my window seat from place to place, especially on routes that take me over arctic, near-arctic, and formerly arctic locations, I see more and more of what geologists call “the picture”: a four-dimensional portfolio of scenes in current and former worlds. Thus, when I look at the seashores that arc eastward from New York City—Long Island, Block Island, Martha’s Vineyard, Nantucket, Cape Cod—I see a ridge of half-drowned debris scraped off a continent and deposited at the terminus of an ice cap that began melting back toward the North Pole only 18,000 years ago—a few moments before the geologic present. Back then, the Great Lakes were still in the future, their basins covered by ice that did not depart from the lakes’ northern edges until about 7,000 years ago, or 5,000 B.C.

    Most of Canada was still under ice while civilization began in the Middle East and the first calendars got carved. Fly over Canada often enough and the lakes appear to be exactly what they are: puddles of a recently melted cap of ice. Same goes for most of the ponds around Boston. Every inland swamp in New England and upstate New York was a pond only a few dozen centuries ago, and was ice only a hundred or so centuries before that. Go forward a few thousand years and all of today’s ponds will be packed with accumulated humus and haired over by woods or farmland. In the present, we are halfway between those two conditions. Here and now, the last ice age is still ending.

    As Canada continues to thaw, one can see human activity spark and spread across barren lands, extracting “resources” from ground made free of permafrost only in the last few years. Doing that is both the economic and the pestilential thing to do.

    On the economic side, we spend down the planet’s principal, and fail to invest toward interest that pays off for the planet’s species. That the principal we spend has been in the planet’s vaults for millions or billions of years, and in some cases cannot be replaced, is of little concern to those spending it, which is roughly all of us.

    Perhaps the planet looks at our species the same way and cares little that every species is a project that ends. Still, in the meantime, from the planet’s own one-eyed perspective, our species takes far more than it gives, and with little regard for consequences. We may know, as Whitman put it, the amplitude of time. We also tend to assume in time’s fullness all will work out.

    But it won’t.

    Manhattan schist, the bedrock anchoring New York City’s tallest buildings, is a little over half a billion years old. In about the same amount of time, our aging Sun, growing hotter, will turn off photosynthesis. A few billion years later, the Sun will swell into a red giant with a diameter wider than Earth’s orbit, roasting the remains of our sweet blue planet and scattering its material out into the cosmos, perhaps for eventual recycling by stars and planets not yet formed.

    In a much shorter run, many catastrophes will happen. One clearly is what our species is already doing to the planet during what geologists correctly call the Anthropocene. I suppose that’s a good reason for Elon and crew to “save” a few members of our vain little species. But why fuck up Mars before we’re done fucking up Earth, when there’s still some leverage with the death we have at home and that Mars won’t begin to have until stuff dies on it?

    I’ve always been both an optimist and a realist. Specifically, I’m an optimist for at least the short run, by which I mean the next few dozen years. But I’m a pessimist for our civilization — and our species. Death is always a winning bet.

    But hey, maybe nature knows better what to do with us than we do.

    The first ancestor of this piece appeared in blogs.harvard.edu on 4 March 2008. The second is here on Medium.

  • Looking for DALL-E 3 Help

    I just returned to DALL-E 3 after using its Microsoft version (currently called Copilot | Designer) for a while. But I can’t get in.

    See how it says “Try in ChatGPT↗︎”? When I do that, it goes to https://chat.openai.com/. After I log in there, it offers no clue about where DALL-E 3 is. So I asked, “How do I get to the DALL-E 3 page after I’m logged in here?”

    ChatGPT answered, “As of my last update in January 2022, DALL-E 3 is a hypothetical iteration of the DALL-E model, and there’s no information about its existence or how to access it. If it were to be released, you might typically find access through OpenAI’s website or platform if they make it available to the public. However, without further updates or announcements, it’s challenging to provide specific instructions. If OpenAI releases DALL-E 3 or any similar model in the future, you can check their official website or announcements for information on how to access it.”

    Additional useless dialog followed. So now I’ll ask the Lazyweb. Does anyone know how to get into DALL-E 3?
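    Meanwhile, one route that seems to bypass the web-UI maze is the API. I haven’t settled on this as my answer, but here is a minimal sketch, assuming an OpenAI API key and the official openai Python package; the prompt is just an example:

    # Minimal sketch: request a DALL-E 3 image through the API instead of the web UI.
    # Requires: pip install openai, and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    result = client.images.generate(
        model="dall-e-3",
        prompt="people eating blogs",  # example prompt, borrowed from Feed Time above
        size="1024x1024",
        n=1,  # DALL-E 3 generates one image per request
    )

    print(result.data[0].url)  # a temporary URL for the generated image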