What’s the Point of Reading Writing by Humans?

Maybe one day journalism could be replaced with an immense surveillance state with a GPT-4 plug-in. Why would we want that?
Illustration of pencil writing in binary code, by Nicholas Konrad / The New Yorker

One of the stultifying but ultimately true maxims of the analytics movement in sports says that most narratives around player performance are lies. Each player has a “true talent level” based on their abilities, but the actual results are mostly up to variance and luck. If a player has, say, the true talent to hit thirty-one home runs in a season, the timing of those home runs is mostly random. If someone hits a third of those in April, that doesn’t really mean he’s a “hot starter” who is “building off a great spring”—it just means that if you take thirty-one home runs and toss them up in the air to land randomly on a time line, sometimes ten of them float over to April. What does matter, the analytics guys say, are plate appearances: you have to clock in enough opportunities to realize your true talent level.

For much of my career, I was the type of journalist who only published a handful of magazine pieces a year. These required a great deal of time, much of which was spent on minor improvements to the reporting, structure, and sentences. I believed that long-form journalism, much like fiction or poetry, possessed a near-mystical rhythm that could be accessed through months of intensive labor. Once unlocked, some spirit would sing through the piece and touch the readers in a universal, truthful way.

Then, about two years ago, while working at the New York Times, I began writing and publishing thousands of words a week. My main motivation was health care: I had been a contract employee for years and had very little income stability. But also, from an authorial standpoint, I was curious to see what would happen if I just started churning. Would my sentences deteriorate? Would I lose my sense for what was good or bad?

These questions came out of a perpetual restlessness on my part—perhaps the result of spending too much time online. I still believe that thoughtful, time-consuming journalism is the paragon of our industry, but, given my own peculiar beliefs about the writing process, I wondered if I might be better suited to being the type of writer who just writes all the time. I wasn’t convinced that I was getting enough plate appearances; I was being entirely too precious about what I thought was good or bad, and also far too narcissistic about my own ability to control the public’s reaction to my work.

I started to understand why so many great writers are obsessed with the repetition of a consistent work schedule: they are all trying to hit a number of words a day that may or may not contain something interesting. Songwriters work in a similar way—the best ones churn out hundreds of songs a year, and then one of them randomly becomes a hit. We’re all just trying to up our plate appearances.

Back in December, I wrote about ChatGPT, the large-language-model system created and endlessly hyped by OpenAI, and the implications it might have for writers. There’s been a great deal written lately (including by me) about whether we can dismiss the technology as a fad, and whether it’s capable of doing anything besides a bad impression of the real stuff: GPT and L.L.M.s like it feed off an existing corpus of text; an algorithm then chooses which word comes next in a sequence.
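To make that mechanism a little more concrete, here is a minimal sketch, in Python, of the crudest version of “choosing the next word in a sequence”: a toy bigram model that counts which words follow which in a tiny corpus and then extends a prompt from those counts. It is nothing like the scale or sophistication of GPT, and the corpus, the function name, and the variable names are invented purely for illustration.

# A toy illustration of next-word prediction: count which words follow
# which in a small corpus, then extend a prompt one word at a time,
# weighted by those counts. (Real L.L.M.s use neural networks trained on
# vast corpora; this is only the skeleton of the idea.)
from collections import Counter, defaultdict
import random

corpus = (
    "the writer sat down and wrote a sentence and then "
    "the writer read the sentence and wrote another sentence"
).split()

# For each word, tally the words that follow it.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_text(start, length=8):
    """Extend a starting word by repeatedly choosing a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))  # e.g., "the writer read the sentence and wrote a sentence"

Everything the sketch produces is a recombination of words it has already seen, which is, in miniature, the dependence on human-written text that the rest of this piece is about.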

This effectively means that GPT cannot exist without us. If you type any prompt into ChatGPT that involves a current event or something that’s happened in the recent past, it will often tell you that its history ends in November, 2021. Since what we call an A.I. chatbot cannot actually “learn” anything new without human input, the only way it would know about, say, Ron DeSantis’s chances in the 2024 Republican primary, or what the crowd looked like at Coachella, would be if it were fed an article written by a human who, presumably, had made a personal assessment of what they saw and sat down in front of a keyboard.

This should be a relief for people like me, who write by observing things in the world and then recording our impressions. But I’m not sure we should relax quite yet. There’s no reason why an L.L.M. couldn’t access current polls and spin up a narrative, or even conduct its own polls. As for Coachella: OpenAI’s latest iteration of the model, GPT-4, can now describe an uploaded image, which means that, as long as some security cameras were installed around the grounds and a future L.L.M. had access to them, it probably could describe the crowd, and, with a few advances in video and audio processing, could even tell you what people were saying. If it had access to social media and millions of videos from the festival, it could probably give you a more expansive and detailed account than a human reporter. What’s left for us, then, are expressions of style, feeling, and the human mysteries of the creative process. I tend to be on the side of humans in these debates, but I also wonder if we might be getting a bit pious about our ability to create things that are truly original.

Writing, after all, is an outgrowth of reading. I have always been a reader, but I tend to get into ruts where I simply read the same passages over and over again. These include the opening pages of Ralph Ellison’s “Invisible Man,” in which the narrator tells you about his cave of lights; Marcel Proust’s description of place names; Joan Didion’s expressions of pointed indifference in “Slouching Towards Bethlehem”; the scene in Denis Johnson’s “Jesus’ Son” where Fuckhead, the protagonist, stands outside a woman’s window; the introduction to Bruce Chatwin’s “In Patagonia” (with the dinosaur skin); some random sentences in Barry Hannah’s “Geronimo Rex”; Orwell’s matter-of-fact conclusions in “Reflections on Gandhi”; every word of James Baldwin’s “Notes of a Native Son”; the last stanza of Marilyn Hacker’s poem “For K. J., Leaving and Coming Back,” which reads “Although a day alone cuts tight or lies/too limp sometimes, I know what/I didn’t know/a year ago, that makes it the right/size:/owned certainty; perpetual/surprise”; the list of items in Zooey Glass’s bathroom in J. D. Salinger’s “Franny and Zooey”; the postscript to Borges’s story “Tlön, Uqbar, Orbis Tertius,” as well as his love letter titled “Delia Elena San Marco.”

I’ve come to realize that I function like a more curated but less efficient version of GPT. My sentences are not generated by A.I., but they are largely the synthesis of my favorite authors. I can tell you which phrases have been borrowed from Borges, which authorial intrusions are taken from Chatwin, and how much of the supposed perspective is just an approximation of what I think Orwell might think about all of this. My “voice” is drawn from the corpus of those books. Perhaps there is some spiritual self in all this text, but if there is, I cannot locate it. Inspiration in writing, instead, feels to me like what happens when you give a machine enough opportunities to spit out something interesting, which can then take on its own, arguably false, narrative about a “moment of genius.”

I asked the program to rewrite the second paragraph of this piece in the style of The New Yorker. It seems like GPT decided to swap out some of the more commonly used words I typed with fustier synonyms, while also laying down a cadence that made it feel like the author was holding up a monocle as he typed.

Throughout the greater part of my tenure as a magazine journalist, I found myself in the role of a literary artisan who produced but a few masterpieces annually. Each creation demanded an extraordinary investment of time, devoted to the painstaking refinement of reporting, structure, and prose.

Then I asked it to rewrite the same lines in the style of Jay Caspian Kang.

For most of the long, winding road that’s been my writing career, I played the part of a magazine journalist who brought just a few choice pieces into the world each year. It was a slow dance, a careful tango with words, each step an exercise in refining the reporting, structure, and sentences.

None of this output makes me feel concerned about being outmatched by the model, but it does make me wonder if I’ve been asking the wrong questions about it. We can disagree on whether A.I. can generate writing that could be convincingly passed off as mine—I think that, eventually, it will be able to do this—but I think you and I can agree that neither of us would want to read an article with GPT-4 (or 5 or 6) on the byline. Surely some L.L.M. down the line will be able to compose a more accurate assessment of DeSantis’s chances at winning the Presidency than its human counterparts in the political-pundit trenches, but what would be the point of that? I enjoy reading human writing because I like getting mad at people. Perhaps the personal quality in writing is a happy accident, and a lot of journalism could be replaced with an immense surveillance state with a GPT-4 plug-in. But the reason we read books and listen to songs and look at paintings is to see the self in another self, or even to just see what other people are capable of creating.

Instead of focussing on the viability of GPT-4’s compositions and whether they’re “better” or “worse” than my own, or fixating on the cringey clichés it generates—“long, winding road,” “careful tango”—I should be asking you, the reader, if you will still want me after the machines take over, and you should ask me if I still want you. The answers to both, I hope, are still yes. ♦