Googling the disconnect
For tech companies, photography is in the rearview mirror
Less than two weeks ago Engadget reported what seemed to be major news in the pursuit of photographic credibility, at least to some: “Google unveiled its Pixel 10 lineup today, and the company’s latest phones will be the first to implement industry-standard C2PA Content Credentials within the native camera app. This enables people to identify whether an image was edited using AI, confirming its authenticity (or lack thereof) to anyone looking at it.”
The article, by Matt Tate, continues: “The Coalition for Content Provenance and Authenticity, or C2PA, designed an open technical standard that essentially enforces transparency on a piece of media, providing information on how it was created and what, if any, modifications have been made. Appearing as a digital watermark (the C2PA likens it to a nutrition label), Content Credentials will be present in all photos taken by a Pixel 10 camera, and that imprint will also be viewable by anyone using Google Photos.”
This news was greeted with some jubilation in certain corners, including in human rights work, given the announced ability of these cameras to confirm the provenance of an image: was it modified after being taken, or not?
Then this week the Washington Post ran an article headlined, “Masterful photo edits now take just a few words. Are we ready for this?”
And, believe it or not, it is the same company, Google, that just announced its pioneering implementation of “Content Credentials.”
The Post article begins: “Using artificial intelligence to create images out of whole cloth is nothing new. Using AI to strategically or even surgically manipulate genuine photos has always been trickier — until Google DeepMind leapfrogged the pack with a new tool.”
One can easily imagine all the diverse modifications the software can make to a photograph, many of them silly. But one concerning aspect is that there is apparently no obstacle to adding actual, identifiable people to the initial imagery. “Gemini didn’t complain,” the Post article continues, “when I had it add [the actor] Vin Diesel to photos of a friend throwing a Fast & Furious-themed birthday party, and it realistically added President Donald Trump to a photo of my very Republican mom without a fuss.” Unsurprisingly, given the significant differences in the goals of tech companies and journalistic outlets, “Google and DeepMind did not immediately respond to a request for comment.”
The supposed safeguard? “Google also says the images edited with its new Gemini model have special ‘SynthID’ watermark data embedded in them, which can be used to highlight specific AI manipulations.”
But there is a significant caveat. “The catch? The tool for detecting that telltale data, which Google announced in May, is not yet available to the general public.”
The Guardian’s review of the Pixel 10 camera, published today, is even less reassuring. It states (and I am not sure whether this is meant as a compliment): “Google is still the best at reliably producing good images without much thought.” But the article’s conclusion is considerably more unsettling: “Overall, the Pixel 10 shoots great photos and videos with generally helpful AI additions, particularly with Best Take and Add Me, but it is possible you end up with images of a moment that may never have actually happened.”
All of this reminds me of 2023, when Adobe, which spearheads the Content Authenticity Initiative, was reported to be selling stock AI images of Gaza and Ukraine, many of them photorealistic. But that was, it now seems, just the beginning of its embrace of AI-generated imagery. In May of this year, PetaPixel, citing Robert Kneschke’s blog, Daily Life of a Photo Producer, reported that AI images made up almost 50 percent of Adobe Stock’s portfolio.
“Kneschke reports that in May 2023, there were 8.5 million AI images on Adobe Stock, representing 2.5 percent of the total portfolio at that time.
“But fast forward two years and that number has increased approximately 40 times with Kneschke records showing that as of April 2025, there were 313 million AI images available on Adobe Stock and 342 million actual photographs — 29 million images shy of being 50/50.” So much for photography.
Why won’t tech companies simply provide journalists and documentarians, or anyone who wants to record the visible world for personal reasons, with cameras and software that do not modify the image in significant and unnecessary ways?
Providing add-on software, or “Content Credentials,” to track any changes in the photograph, even if the ledger could be accessed, invites viewers to be automatically skeptical of what is being depicted. They are made to think of the photograph as essentially malleable and requiring verification, thus diminishing its initial impact and its perceived credibility rather than augmenting it. In an era that increasingly celebrates the “unreal,” this is hardly an effective defense.
One might even consider celebrating the unaltered photograph, maybe putting a label on it akin to how organic foods are labeled (using C2PA’s nutritional metaphor): “This photograph has not been modified.”
Once again, it is not the photograph as a credible reference point that is the paramount goal, but profit margins. And, in the process, photography has been diminished, reduced to just another consumer product.
Finally, from the world of the unreal as reported by The Guardian this week: “‘I was very active over the weekend,’ Trump, 79, told reporters in the Oval Office on Tuesday. Asked about rumors on social media that he may have died, he called them ‘fake news.’”
Please subscribe and, if at all possible, become a paid subscriber. Your support is greatly appreciated and makes these posts possible.



"One might even consider celebrating the unaltered photograph, maybe putting a label on it akin to how organic foods are labeled (using C2PA’s nutritional metaphor): 'This photograph has not been modified.'”
This is an excellent conclusion to an excellent article, Fred, and a step in the right direction from your previous positions. In fact, it's close to mine, as outlined at https://take-note.com/note-takers/marshall--mayer/related-content/apple-protect-authentic-photography.
The Pixel 10 barely registers any market share, so it alone will not stem the tide of popular confusion about what was recorded in front of a camera (or not) when viewed elsewhere on a screen. At least Google has proven that it is technically feasible on a phone.
Now all we need is for other digital camera manufacturers (Apple foremost among them because of its market share) to embed a symbol on each image indicating that "This photograph has NOT been modified by generative AI." Perhaps add a + to a corner of the image that, when clicked, reveals its provenance as "real." We could all get used to that pretty quickly, instead of constantly being "skeptical of what is being depicted" as we are now. It's a critically needed response to the enshittification of public knowledge systems wrought by the tech fascists.

I very much like the + sign you suggest.