In March of 2004, just over twenty years ago, I published an ongoing piece entitled, like this one, “Photointegrity”. The issue remains the same, but the rise of AI increases both its importance and its difficulty. Here are words on the subject, illustrated by photos, all of which have been processed with AI technology.

Pink-orange tulip blossom, folded closed; captured with twenty-year-old analog technology, enhanced with AI.

There’s an amusing story about the technology behind these flower pictures, down at the bottom of this piece.

Back in 2004 · I was already using Photoshop, but in fully autodidactic mode, so I thought I should read a book about it. I selected one by Scott Kelby, “The Photoshop guy” back then and still active in the space two decades later. It was a good book, but it left me wide-eyed and shocked. I’ll quote from that piece for those of you who don’t want to step back twenty years in time and read it:

Personal Improvement · In particular, Kelby walks through an astounding list of techniques for improving portraits, and I quote: removing blemishes, removing dark circles under the eyes, lessening freckles or facial acne, removing or lessening wrinkles, colorizing hair, whitening the eyes, making eyes that sparkle, enhancing eyebrows and eyelashes, glamour skin softening, transforming a frown into a smile, doing a digital nose job, slimming and trimming, removing love handles, and finally slimming buttocks, arms and thighs.

Integrity? · Screw it, integrity is history. The image is no longer the capture of an instant’s light and colour; it’s, well… whatever you and Photoshop make of it.

Photointegrity · I proposed a definition at the time: “what I’m going to do is strive to balance Truth and Beauty. In practical terms, this means the goal is to make the picture look as much as possible like what I saw, as opposed to as good as possible.”

Simple yellow flower, two buds peeking round its edges; captured with twenty-year-old analog technology, enhanced with AI.

I can’t claim that I follow that strictly; most of the pictures in this space come out of the camera looking less pleasing than what I remember seeing, but I will confess that the version you see is often prettier than that memory. Usually, that results from the application of a bunch of Adobe technologies.

Is that OK? It’s a judgment call. Is there anything that isn’t a judgment call? Funny you should ask, because Adobe just announced the Firefly Image 3 model, around which the next version of Photoshop is being built. Following the announcement links and just scrolling through the pictures will give you a feeling for what this software will do.

Let me put a stake in the ground. I believe these things:

  1. If you use generative tools to produce or modify your images, you have abandoned photointegrity.

  2. That’s not always wrong. Sometimes you need an image of a space battle or a Triceratops family or whatever.

  3. What is always wrong is using this stuff without disclosing it.

The C2PA angle · Last October, I wrote up C2PA, a useful digital watermarking technology that can be used to label images and video. That piece’s predictions look like they’re coming true; several manufacturers have announced C2PA support. I’m not going to take the space here to describe C2PA again.

I do note that Photoshop already supports C2PA, and when it writes a watermark saying “Edited with Photoshop”, the label includes a few words about what was done: cropping, levels adjustment, and so on, but no details.

I believe strongly that when people use Adobe’s Firefly generative AI to create or augment pictures, Photoshop should by default turn C2PA labeling on, and disclose in the watermark whether it is fully-generated or just augmented. Sure, the person generating the image can always take that watermark out, but they can’t change its contents, and assuming C2PA becomes ubiquitous, the absence of a watermark would be reasonable grounds for suspicion.
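
For the curious, here is roughly what checking for that kind of disclosure could look like. This is a minimal sketch, not Adobe’s implementation: it assumes you have already exported a manifest to JSON (for example with the open-source c2patool), and the assertion label and the IPTC digitalSourceType values for AI-generated media reflect my reading of the C2PA spec, so treat those field names as assumptions.

# Sketch: look for a generative-AI disclosure in a C2PA manifest.
# Assumes the manifest was exported to JSON beforehand, e.g.
#   c2patool photo.jpg > manifest.json
# Assertion labels and digitalSourceType values follow my reading of
# the C2PA spec and may not match what any given tool actually emits.
import json

# IPTC digital-source-type URIs that, as I understand it, C2PA uses
# to flag AI involvement (assumption).
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def describe_ai_use(manifest_path: str) -> str:
    with open(manifest_path) as f:
        store = json.load(f)
    found = set()
    # Walk every manifest in the store and collect its recorded actions.
    for manifest in store.get("manifests", {}).values():
        for assertion in manifest.get("assertions", []):
            if assertion.get("label", "").startswith("c2pa.actions"):
                for action in assertion.get("data", {}).get("actions", []):
                    if action.get("digitalSourceType") in AI_SOURCE_TYPES:
                        found.add(action.get("action", "unknown action"))
    if not found:
        return "No generative-AI actions disclosed in the manifest."
    return "Generative AI disclosed for: " + ", ".join(sorted(found))

if __name__ == "__main__":
    print(describe_ai_use("manifest.json"))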

Cluster of pink fruit-tree blossoms, not yet open; captured with twenty-year-old analog technology, enhanced with AI.

AI + photointegrity? · Over the last couple of years, the way I use Adobe Lightroom has changed a whole lot, and it’s mostly because of AI. Specifically, smart select. Lightroom now offers Select functions for Subject, Background, Sky, and Object. There’s also a very useful “Duplicate and invert” for any selection. I use these for almost every photo I take, especially Select Sky. The amount of light in the sky differs from that down here on the surface, and I’m pretty sure that our eyes compensate for that. Almost every picture looks more “real” when you select the sky and dial the brightness down (rarely: up) a touch, and maybe bump the contrast a bit.
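
For what it’s worth, the arithmetic behind a masked adjustment is simple; the AI’s contribution is the mask itself. Here is a toy sketch of “select the sky and dial the brightness down a touch”, assuming a sky mask already exists (the mask-building model is the hard part that Lightroom supplies); the function name and the simple linear gain are my own illustration, not what Lightroom actually runs.

# Toy illustration of a masked "darken the sky, leave the surface alone" edit.
# The sky_mask is a stand-in for what Lightroom's AI selection produces;
# real tools work in wider gamuts with fancier tone curves.
import numpy as np

def darken_sky(image: np.ndarray, sky_mask: np.ndarray,
               exposure_stops: float = -0.5) -> np.ndarray:
    """image: float RGB in [0, 1]; sky_mask: float in [0, 1], 1.0 = sky."""
    gain = 2.0 ** exposure_stops              # -0.5 stops is roughly x0.71
    mask = sky_mask[..., np.newaxis]          # broadcast the mask over RGB
    adjusted = image * (mask * gain + (1.0 - mask))  # per-pixel blend
    return np.clip(adjusted, 0.0, 1.0)

# Tiny example: a 2x2 "image" whose top row is sky.
img = np.full((2, 2, 3), 0.8)
mask = np.array([[1.0, 1.0], [0.0, 0.0]])
print(darken_sky(img, mask)[:, :, 0])         # top row darkened, bottom row untouched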

This photo would have been a complete failure without those tools.

Allyson’s parents speak to the crowd at her memorial.

We were recently at a memorial social for our late friend Allyson. It was on a rooftop, on a bright grey day; the volume of light coming out of the sky was insane, and kept turning my photographic subjects into dark silhouettes.

The photo of Ally’s parents addressing the crowd is not great (her mom’s eyes are closed), but it at least captures a moment. The original was totally unusable, because the subjects were under a canopy and thus shaded, while the sky, the cityscape, and even the mountains reflected harshly. So you select the subject, invert and duplicate, add light to the subject and subtract it from the rest, and you end up with something that looks exactly like what I saw.

Of course, this depends on a good camera with a lot of dynamic range that can fish detail out of shadows.

I think this process retains photointegrity.

AI-enhanced analog · What happened was, the sun came out after the rain, everything is blooming this time of year, and I wanted to take pictures. I was rummaging for lenses and there was this dark shape at the back of the shelf. “What’s that?” I thought. It turned out to be an old Pentax with “Macro” in its name. Just the thing! Here’s what the combo looks like.

Pentax 100mm macro lens strapped onto a Fujifilm X-T30.

By the way, one reason the Internet is still good is that random enthusiasts maintain obscure databases, for example of camera lenses; that’s where I looked up this smc Pentax-D FA 100mm F/2.8 Macro, an alternate version of which rejoices in the name “Schneider-Kreuznach D-Xenon”. It seems to have been manufactured only around 2004. I wrote about buying it in 2011 and shooting flowers and dancers with it in 2014; lotsa groovy pix in both.

Anyhow, this lens does a fabulous job of isolating foreground and background. Given that to chew on, Lightroom’s AI gizmo has no trouble selecting just the flower (or just the background). So it’s easy to sharpen the flower and fade the bokeh; the old lens and the bleeding-edge software were made for each other.

But I digress.

Photointegrity matters · It mattered in 2004, and it matters more with every passing year as our level of trust in online discourse falls and the power of generative AI grows. We have the tools to help address this, but we need to think seriously about them, and use them when appropriate.



Contributions


From: Tim Walters (Apr 29 2024, at 22:26)

The only time so far I've used Photoshop's generative fill was in the service of photo integrity, at least in my mind. This is the picture:

https://flickr.com/photos/twalters/52960435323/in/photolist-2oFW5iR-2oPSTum

Everything more than a few pixels above the night heron's head was generative fill. To me it represents the event better than a picture where the poor bird is about to bang his head against the top of the frame. An edge case, and a matter of opinion, but it was interesting to me that I ran into a situation where I considered it valid for my purposes. I didn't expect that.


From: Nathan (Apr 30 2024, at 12:16)

I looked back at the older piece you referenced, and I read this piece with I think a reasonable amount of attentiveness. What I do not see is an explanation as to why photointegrity matters.

I will grant that it can matter if a photo appears in a newspaper which purportedly depicts an event of interest to the readership. Even this is a bit of a slippery slope argument: I don't actually care if the photo of some human who won an award is touched up to make them look younger and more vibrant than they have ever looked in person. If the article is about a plane crash and I later discover that the lurid photo front and center was created from whole cloth by a journo typing "fiery plane crash" into Adobe Firefly then I'm going to feel lied to, but the thrust of your post seems less about that and more about doing touch-up work.

Maybe I'm just a product of my generation, used to the media manufacturing their own truth and inherently skeptical of anything up to and including information I have seen with my own eyes in person, but I just can't seem to muster much indignation over AI-assisted photo enhancement. Perhaps you could convince me I'm wrong.


From: Tony Wylie (May 01 2024, at 02:37)

Have you ever read The Burden of Representation by John Tagg? If not, I highly recommend it. It’s quite a dense read but it deals with the concept of photographic integrity very well, and it is also fascinating on how photography has been manipulated in many ways right from the start of the medium.

Here’s a quote from the book, one of my favourite quotes ever:

“we have to see that every photograph is the result of specific and, in every sense, significant distortions which render its relation to any prior reality deeply problematic and raise the question of the determining level of the material apparatus and of the social practices within which photography takes place.”

Tony


From: Eric Scouten (May 01 2024, at 09:44)

> I believe strongly that when people use Adobe’s Firefly generative AI to create or augment pictures, Photoshop should by default turn C2PA labeling on, and disclose in the watermark whether it is fully-generated or just augmented.

Fundamentally, we agree. While there are some code paths that haven't implemented that yet, that is our North Star and we are increasingly moving in that direction.


From: Brian Slesinsky (May 04 2024, at 13:19)

I largely agree, but with a minor caveat that there are times when a fiction is obviously fiction and it doesn't need a disclaimer.

Still, whenever it might be confusing, it's better to lean towards clarity and transparency. (It might not be so obvious to others.)


From: Ed (May 14 2024, at 17:32)

Some of what Photoshop does isn't all that different from burning and dodging, but I suppose it makes things much easier and more exact.

FWIW, I currently walk about with a refurbished Polaroid SX-70. You have to be selective, then accept what you get (at a steep price)...and then there's one and only one image. I like that.

