The Limits of the AI-Generated 'Eyes on Rafah' Image

The “All Eyes on Rafah” image went viral on social media this week, racking up more than 47 million shares on Instagram alone and raising tough questions about the power of war imagery and AI.
Palestinians observe the destruction caused by the Israeli army's attack on tents of displaced people living near the United Nations Relief and Works Agency for Palestine Refugees (UNRWA) warehouses in Rafah, Gaza, on May 27, 2024. Photograph: Ashraf Amra/Getty Images

Years ago, when people still used Boolean search and I was a cub reporter, I worked with photographer Nick Ut at the Associated Press. It felt like being in the presence of one of the Greats, even though he never acted like it. We drank the same office coffee, even as I was barely out of journalism school and he had a Pulitzer Prize that was nearly three decades old. Ut, if you don’t recognize the name, took the photo of “Napalm Girl”—Kim Phuc, whom Ut captured in 1972, at 9 years old, running from a napalm attack in Vietnam.

Lots of people know that photo. It’s one of the most searing images to come out of the Vietnam War—one that shifted attitudes about the conflict. Ut himself wrote many years later that he knew a single photo could change the world. “I know, because I took one that did.”

Hundreds of photos have come out of the Israel-Hamas war since it began more than seven months ago. Bombed-out buildings, mass funerals, damaged hospitals, injured children. But, as of this week, there’s one that’s garnered more attention than most: “All eyes on Rafah.”

The image features what appears to be an AI-generated landscape in which a series of refugee tents spells out the image’s title phrase. The exact origins of the image are murky, but as of this writing it’s reportedly been shared more than 47 million times on Instagram, with many of those shares coming in the 48 hours after an Israeli strike killed 45 people in a camp for displaced Palestinians, according to the Gaza Health Ministry. The image was also shared widely on TikTok and X, where a pro-Palestine account’s post featuring the image has been viewed nearly 10 million times.


As “All eyes on Rafah” circulated, Shayan Sardarizadeh, a journalist with BBC Verify, posted on X that it “has now become the most viral AI-generated image I’ve ever seen.” Ironic, then, that all those eyes on Rafah aren’t really seeing Rafah at all.

AI’s role in the spread of news got fraught quickly. Meta, as NBC News pointed out this week, has made efforts to restrict political content on its platforms even as Instagram has become a “crucial outlet for Palestinian journalists.” The result is that actual footage from Rafah may be restricted as “graphic or violent content” while an AI image of tents can spread far and wide. People may want to see what’s happening on the ground in Gaza, but it’s an AI illustration that’s allowed to find its way to their feeds. It’s devastating.

The Monitor is a weekly column devoted to everything happening in the WIRED world of culture, from movies to memes, TV to Twitter.

Journalists, meanwhile, sit in the position of having their work fed into large language models. On Wednesday, Axios reported that Vox Media and The Atlantic had both made deals with OpenAI that would allow the ChatGPT maker to use their content to train its AI models. Writing in The Atlantic itself, Damon Beres called it a “devil’s bargain,” pointing to the copyright and ethical battles AI is currently fighting and noting that the technology has “not exactly felt like a friend to the news industry”—a statement that may one day itself find its way into a chatbot’s memory. Give it a few years and much of the information out there—most of what people “see”—won’t come from witness accounts or result from a human looking at evidence and applying critical thinking. It will be a facsimile of what they reported, presented in whatever manner a model deems appropriate.

Admittedly, this is drastic. As Beres noted, “generative AI could turn out to be fine,” but there is room for concern. On Thursday, WIRED published a massive report looking at how generative AI is being used in elections around the world. It highlighted everything from fake images of Donald Trump with Black voters to deepfake robocalls impersonating President Biden. It’ll get updated throughout the year, and my guess is that it’ll be hard to keep up with all the misinformation that comes from AI generators. One image may have put eyes on Rafah, but it could just as easily put eyes on something false or misleading. AI can learn from humans, but it cannot, as Ut did, save people from the things they do to each other.

Loose Threads

Search is screwed. Like a stupid aughts Bond villain, The Algorithm has menaced internet users for years. You know what I’m talking about: the mysterious system that decides which X post, Instagram Reel, or TikTok you should see next. One such algorithm really got the spotlight this week, though: Google’s. After a few rough days during which the search giant’s “AI Overviews” got pummeled on social media for telling people to put glue on pizza and eat rocks (not at the same time), the company hustled to scrub the bad results. My colleague Lauren Goode has already written about the ways in which search as we know it—and the results it provides—is changing. But I’d like to proffer a different argument: Search is just kind of screwed. It seems like every query these days calls up a chatbot no one wants to talk to, and personally, I spent the better part of the week trying to find new ways to search that would pull up what I was actually looking for, rather than an Overview. Oh, then there was that whole matter of 2,500 search-related documents getting leaked.


Maybe don’t invoke the “phone rule” at Chipotle. Look, everyone has their ideal order at their ideal place. Hot dog with extra ketchup. (Don’t try this in Chicago.) Filet-O-Fish, double the cheese. Hawaiian pizza, hold the glue. The same is true for Chipotle. The thing is, some people want more guac, some less. Rice ratios vary. Recently, though, food influencer Keith Lee posted a TikTok noting that the portions in his bowl were not up to snuff. He gave it a 2 out of 10. It set off a minor uproar over the portion sizes the chain was putting in any given order.

Shortly thereafter, something called the “phone rule” began trending. Essentially, the thinking goes, if you hold up your phone while Chipotle workers make your order, they’ll be inclined to give you more since they’re being recorded. Whether or not this works is up for debate, but here’s an idea: Let’s not. People who work in customer service have enough to deal with: the lowest legal wages, limited advancement opportunities, and increased surveillance in the workplace from their bosses, to say nothing of customers. Not to mention, they’ll probably top off your sour cream if you just treat them well or ask. Chipotle’s social media managers, at least, seem to have a sense of humor about the whole thing.


#SwiftiesForPalestine. For several months Taylor Swift fans have been calling on the singer to publicly support Palestine, an effort that got a lot more mainstream attention this week as Swift’s Eras Tour makes its way through Europe.

That movie is a work of art. Literally. Finally, a thread of movie shots and the famous works of art that inspired them. Enjoy.