“How do we figure out what’s real and what’s not real?” OpenAI’s leader, Sam Altman, was asked by Cleo Abram in a recent interview.
“I can give all sorts of literal answers to that question; we could be cryptographically signing stuff, we could decide who we trust, if they actually filmed something or not,” Altman said. “But my sense is what’s going to happen is it’s just going to gradually converge.”
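(As a purely illustrative aside, not drawn from the interview: the “cryptographically signing stuff” Altman mentions in passing is usually understood as a camera or publisher signing the image bytes with a private key, so that anyone holding the matching public key can confirm the file has not been altered since it was signed. The sketch below assumes a hypothetical file name and Python’s cryptography library; it is one possible reading of the idea, not a description of any existing product.)

```python
# Illustrative sketch of signing an image for provenance verification.
# The file name "photo.jpg" is a placeholder; any byte stream would do.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()      # held by the camera or publisher
public_key = private_key.public_key()           # distributed to anyone who wants to verify

with open("photo.jpg", "rb") as f:              # hypothetical image file
    image_bytes = f.read()

signature = private_key.sign(image_bytes)       # shipped alongside the image

try:
    public_key.verify(signature, image_bytes)   # raises if even one byte has changed
    print("Image matches what was originally signed.")
except InvalidSignature:
    print("Image was altered after signing.")
```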
In the interview, Altman, whose company makes ChatGPT, used by some 800 million people, said that media, which is “always like a little bit real and a little bit not real,” is becoming increasingly unreal, citing computational photography as an example. He argued that the images from an iPhone, while now accepted as depictions of what exists, are significantly modified by algorithms within the camera, asserting that “the threshold for how real does it have to be to be considered real will just keep moving.” Or, as Altman further elaborated, “a higher percentage of media will feel not real, but I think that’s been the long-term trend anyway.”
The implication, then, is that the line between photographs and AI-generated images will blur in this melding of reality and unreality. PetaPixel headlined its coverage of the interview more bluntly: “Sam Altman Says AI Images Are Just a Continuation of Photos.”
What is ignored is that even computational photographs start with whatever physically existed in front of the camera, while AI-generated images begin with prompts describing what one wants to visualize, even if it never existed. While the computational photograph can be contradicted or confirmed by other photographs of the same scene or by eyewitnesses, the AI-generated image is synthetic, made up, with no claim to being a witness to a physical reality. These differences, though profound, are made to seem trivial.
Altman’s prognosis not only significantly undermines the function of documentary photography but also serves as an entryway to a collective hallucination. The child starving to death in Gaza or the family killed in Ukraine may or may not exist, much like Schrödinger’s cat, just as depictions of our grandparents teeter between the actual and the fabricated.
Even more troubling, this convergence of the real and the unreal extends beyond imagery to many of the current policies of the US government, where the reality of hard-working, law-abiding immigrants is transformed into an image of vicious criminals by a president who has himself been convicted of 34 felonies.
Similarly, the Washington Post reported, “When President Donald Trump declared his third presidential candidacy in 2022, he saved his most colorful language for America’s urban areas, bemoaning ‘the blood-soaked streets of our once-great cities’ and adding that ‘the cities are rotting, and they are indeed cesspools of blood.’” The article continued, “Later in his campaign, Trump called Milwaukee ‘horrible’ and described Washington, D.C., as a ‘rat-infested, graffiti-infested shithole.’ More recently he said, ‘These cities, it’s like living in hell.’”
Now, Republican governors are sending in National Guard troops, adding to those already there to control the nation’s capital, while terrifying and harassing many of its residents, who at the same time have been benefiting from a 30-year low in violent crime.
Altman is not alone in predicting, or even rooting for, photography’s demise as a witnessing medium. As I have cited previously, in an interview in Wired magazine, Isaac Reynolds, the group product manager for Google’s Pixel Camera, suggested that today’s photographer should be able to override the evidence of the photograph in pursuit of a representation “that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond.”
And, very much in line with what Altman stated, Patrick Chomet, Samsung’s head of customer experience, argued in an interview with TechRadar that “actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture.… You can try to define a real picture by saying, ‘I took that picture,’ but if you used AI to optimize the zoom, the autofocus, the scene—is it real? Or is it all filters? There is no real picture, full stop.”
Where does this leave us? As I wrote recently in Vanity Fair: “A new study conducted in the United Kingdom found that most teenagers, if given the Solomonic choice, would prefer to have actual nude pictures of themselves published online rather than deepfakes. How so? The unreality of the deepfake has become more frightening than the reality of the nude. As reported by Digit news, ‘Over half of teenagers now think it would be worse to have a deepfake nude created and shared of them than a real image, according to research from Internet Matters, the UK non-profit for children’s online safety.’ According to the report, teenagers ‘see nude deepfake abuse as worse than sexual abuse featuring real images.’ Their reasons include ‘a lack of autonomy over or awareness of the image, anonymity of the perpetrator, how the image might be manipulated, and fears of people thinking the image is real.’ The fake is now considered to be potentially even more traumatizing than the real.”
Previously, the photograph could act as a reference point that anchored our sense of the real; now that anchor hardly matters.
Our organizations — journalistic, documentary, educational, legal, governmental, humanitarian, etc. — must act urgently, working together to preserve that anchor. Why are there not frequent conferences now being organized to discuss how to collaborate to assert a shared sense of the real?
If not now, then when?
A case in point, concerning the image displayed above, comes from a March 4, 2024, article in The Independent entitled “Trump supporters ‘share AI-generated images’ of ex-president with Black voters”:
“One of the creators, Mark Kaye, shared an image on Facebook. He told the BBC: ‘I’m not a photojournalist. I’m not out there taking pictures of what’s really happening. I’m a storyteller.’
“He posted an article about black voters supporting Mr Trump alongside the picture.
“Mr Kaye told the BBC: ‘I’m not claiming it is accurate. I’m not saying, “Hey, look, Donald Trump was at this party with all of these African American voters. Look how much they love him!”
“‘If anybody’s voting one way or another because of one photo they see on a Facebook page, that’s a problem with that person, not with the post itself.’”
Enough said? (It’s not a photo.)
Please become a paid subscriber. It would allow me to post more frequently.