Leica, the German maker of elegant but absurdly expensive cameras, just released the M11-P. The most interesting thing about it is a capability whose marketing name is “Content Credentials”, based on a tech standard called C2PA (Coalition for Content Provenance and Authenticity), a project of the Content Authenticity Initiative. The camera attaches a cryptographic signature to its pictures, which might turn out to be extremely valuable in this era of disinformation and sketchy AI. Herewith a few words about the camera (Leicas are interesting) but mostly I want to describe what C2PA does and why I think it will work and how it will feel in practice.

M11-P · To start with, this thing lists at over $9,000 in the US. There are lots of awesome lenses for it, and you might be able to find one for under $5,000, but not usually.

On the other hand, it’s a lovely little thing.

[Photo: Leica M11-P]
[Photo: Leica M11-P, top view]

The obvious question: Can it possibly be worth that much money? Well… maybe. Any camera on sale today, including your phone, can reliably take brilliant pictures. But people who use Leicas (I never have) rave about the ergonomics, so you might be a little quicker on the draw. And they say it’s fun to use, which means you’re more likely to have it with you when the great photo op happens. And there’s no denying it looks drop-dead cool.

C2PA is puzzling · I’ve been impressed by the whole C2PA idea ever since I first heard about it, but damn is it hard to explain. Every time I post about it, I get annoying replies like “I don’t want my camera to track me!” and “This is just NFTs again!” and “It’ll be easy to fake by tampering with the camera!” All of which are wrong. I conclude that I’m failing to explain clearly enough.

Whatever, let’s try again.

Signing · Inside the M11-P there is special hardware, and inside that hardware are two closely-linked little blobs of binary data called the “public key” and the “private key”; we call this a “keypair”. The hardware tries to be “tamper-proof”, making it very hard for anyone to steal the private key. (But nothing is perfect; a real security expert would assume that a serious, well-resourced hacker could crack it and steal the key. More below.)

When you take a picture, the camera makes a little data package called a “manifest”, which records a bunch of useful stuff like the time, the camera serial number, the name of the person who owns the camera, and so on. Then it runs a bunch of math over the private key and manifest data and the image pixels to produce a little binary blob called the “signature”; the process is called “signing”. The manifest and the signature are stored inside the image file, alongside the metadata (called “EXIF”) that every digital photo has.

Then, you share the public key with the world. Email it to your colleagues. Publish it on your website. Whatever. And anyone who gets your picture can run a bunch of math over the public key and manifest and pixels, and verify that those pixels and that manifest were in fact signed by the private key corresponding to the public key the photographer shared.
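To make that concrete, here’s a minimal sketch in Python of the sign-and-verify idea, using Ed25519 keys from the cryptography package. The real camera uses X.509 certificates and COSE signatures (see the PKI interlude below), and the manifest fields here are invented, but the shape of the math is the same.

```python
# A much-simplified sketch of sign-and-verify, not the real C2PA format.
# Requires the "cryptography" package (pip install cryptography).
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Inside the camera: a keypair, ideally in tamper-resistant hardware.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # this half gets shared with the world

# The manifest; field names are invented for this sketch.
manifest = json.dumps({
    "when": "2023-10-28T10:00:00Z",
    "camera_serial": "M11P-0001",
    "owner": "Nadia Example",
}).encode()
pixels = b"...the image bytes..."

# Signing: math over the private key, the manifest, and the pixels.
signature = private_key.sign(manifest + pixels)

# Verifying: anyone holding the public key can confirm that neither the
# manifest nor the pixels have changed since the camera signed them.
try:
    public_key.verify(signature, manifest + pixels)
    print("signed by the matching private key; nothing was altered")
except InvalidSignature:
    print("tampered with, or signed by some other key")
```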

Geeky interlude for PKI nerds · (If the “PKI” acronym is new to you, do please skip forward to the “Chaining” section.)

Leica has posted a Content Credentials demo page with a sample image. Big thanks to Sam Edwards (@samedwards@mastodon.social), who dug around and found the actual JPG, then taught me about c2patool. All this happened in a nice chatty set of Mastodon threads starting here; the Fediverse is really the place for substantive conversation these days.

[Photo: Leica’s C2PA test image]

The actual image.
(That is, once you click to enlarge it. But watch out, it’s 21M).

Applying c2patool to the JPG yields the JSON manifest, which has a selection of useful EXIF fields. It turns out the signing relies on traditional PKI-wrapped certs; there’s one associated uniquely with this camera, with a proper signing chain through a Leica cert, all apparently rooted at D-Trust, part of Germany’s Bundesdruckerei, which also prints money. All very conventional, and whatever programming language you’re using has libraries to parse and verify. Sadly, ASN.1 will never die.
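For the PKI nerds still reading, here’s roughly what walking that chain looks like in Python with the cryptography package, assuming ECDSA-signed certs. The file names are hypothetical, and a real verifier would also check validity periods, key usage, and revocation.

```python
# Sketch of walking an X.509 chain: camera cert, signed by a Leica cert,
# rooted at D-Trust. File names are hypothetical; assumes ECDSA signatures.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec

def load(path):
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

camera, leica, root = load("camera.pem"), load("leica.pem"), load("d-trust.pem")

# Each cert's signature must verify under its issuer's public key;
# verify() raises InvalidSignature if the chain is broken anywhere.
for child, issuer in [(camera, leica), (leica, root)]:
    issuer.public_key().verify(
        child.signature,
        child.tbs_certificate_bytes,             # the signed part of the cert
        ec.ECDSA(child.signature_hash_algorithm),
    )
    print(f"{child.subject.rfc4514_string()} is signed by "
          f"{issuer.subject.rfc4514_string()}")
```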

By the way, the actual C2PA spec feels way more complicated than it needed to be, with Verifiable Credentials and Algorithm Agility and JSON-LD and CBOR and COSE etc etc. I haven’t had the bandwidth to slog all the way through. But… seems to work?

Chaining · I’ve described how we can attach a signature to a photo and anyone who has the camera’s public key can check whether it was signed by that camera. That’s clever, but not very useful, because before that picture gets in front of human eyes, it’s probably going to be edited and resized and otherwise processed.

That’s OK because of a trick called “signature chaining”. Before I explain this, you might want to drop by Leica’s Content Credentials page and watch the little video demo, which isn’t bad at all.

Now, suppose you change the photo in Photoshop and save it. It turns out that Photoshop already has (a beta version of) C2PA built in, and your copy on your own computer has its own private/public keypair. So, first of all, it can verify the incoming photo’s signature. Then when you save the edited version, Photoshop keeps the old C2PA manifest but also adds another, and uses its own private key to sign a combination of the new manifest, the old manifest (and its signature), and the output pixels.

There’s enough information in there that if you have the public keys of my camera and my copy of Photoshop, you can verify that this was a photo from my camera that was processed with my Photoshop installation, and nobody else got in there to make any changes. Remember, “signature chaining”; it’s magic.
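Here’s a toy model of that in Python, reusing Ed25519 keys as in the earlier sketch. Real C2PA nests manifests in a structured manifest store with COSE signatures, not a JSON list, but the essential trick is the same: each hop signs its own manifest plus everything it received.

```python
# Toy signature chaining; not the real C2PA manifest store.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def add_hop(chain, manifest, pixels, private_key):
    """One link: sign this hop's manifest plus the entire prior chain."""
    payload = json.dumps(manifest).encode() + json.dumps(chain).encode() + pixels
    chain.append({"manifest": manifest, "sig": private_key.sign(payload).hex()})
    return chain

def verify_chain(chain, pixels_per_hop, public_keys):
    """Re-derive each link's payload; verify() raises if anything changed."""
    for i, link in enumerate(chain):
        payload = (json.dumps(link["manifest"]).encode()
                   + json.dumps(chain[:i]).encode() + pixels_per_hop[i])
        public_keys[i].verify(bytes.fromhex(link["sig"]), payload)

camera_key = Ed25519PrivateKey.generate()
photoshop_key = Ed25519PrivateKey.generate()
raw, edited = b"raw pixels", b"edited pixels"

chain = add_hop([], {"tool": "M11-P"}, raw, camera_key)
chain = add_hop(chain, {"tool": "Photoshop"}, edited, photoshop_key)

verify_chain(chain, [raw, edited],
             [camera_key.public_key(), photoshop_key.public_key()])
print("the whole provenance chain verifies")
```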

If you run a news site, you probably have a content management system that processes the pictures you run before they hit your website. And that system could have its own keypair and know how to C2PA. The eventual effect is that on the website, you could have a button labeled “Check provenance” or some such, click it and it’d do all the verification and show you the journey the picture took from camera to photo-editor to content-management system.

Why? · Because we are surrounded by a rising tide of disinformation and fakery and AI-powered fantasy. It matters to know who took a picture, who edited it, and how it got from some actual real camera somewhere to your eyes.

(By the way, AI software could do C2PA too; DALL-E could sign its output if you needed to prove you were a clever prompter who’d generated a great fantasy pic.)

But this can’t possibly work! · Like I said above, every time I’ve posted something nice about C2PA, there’ve been howls of protest claiming that this is misguided or damaging or just can’t work. OK, let’s run through those objections one by one.

No, because I want to protect my privacy! · A perfectly reasonable objection; some of the most important reportage comes from people who can’t afford to reveal their identity because their work angers powerful and dangerous people. So: It is dead easy to strip the metadata, including the C2PA stuff, from any media file. In the movie linked from the Leica web site above, you’ll notice that the demonstrator has to explicitly turn C2PA on in both the camera and Photoshop.

Yes, this means that C2PA is useless against people who steal your photos and re-use them without crediting or paying you.

I can’t see any reason why I’d attach C2PA to the flowers and fripperies I publish on my blog. Well, except to demonstrate that it’s possible.

No, because I’m not a cryptography expert! · Fair enough, but neither am I. This demo page shows how it’ll work, in practice. Well, early-stage, it’s kind of rough-edged and geeky. Eventually there’ll be a nicely-styled “verify” button you click on.

No, because corporate lock-in! · Once again, reasonable to worry about, and I personally do, a lot. Fortunately, C2PA looks like a truly open standard with no proprietary lock-ins. And the front page of the Content Authenticity Initiative is very reassuring, with actual working code in JavaScript and Rust. I’m particularly pleased about the Rust SDK, because that can be wired into software built in C or C++, which is, well, almost everything, directly or indirectly.

For example, the Leica-provided image you see above has no C2PA data, because it’s been resized to fit into the browser page. (Click on it to get the original, which retains the C2PA.) The resizing is done with an open-source package called ImageMagick, which doesn’t currently do C2PA but could and I’m pretty sure eventually will. After which, the picture above could have a link in the signature chain saying “resized by ImageMagick installed on Tim Bray’s computer.”

No, because of the “analog hole”, I’ll just take a picture of the picture! · This doesn’t work, because the signing computation looks at every pixel, and you’ll never get a pixel-perfect copy that way.

No, because bad guys will sign fake images! · Absolutely they will, no question about it. C2PA tells you who took the picture; it doesn’t tell you whether they’re trustworthy. Trust is earned and easily lost. C2PA will be helpful in showing who has and hasn’t earned it.

No, because it will lead to copyright abuse! · It is definitely sane to worry about over-aggressive copyright police. But C2PA won’t help those banditos. Sure, they can slap a C2PA manifest, including copyright claims, on any old image, but that doesn’t change the legal landscape in the slightest. And, like I said, anyone can always remove that metadata from the image file.

No, because artists will be forced to buy in! · Yep, this could be a problem. I can see publishers falling overly in love with C2PA and requiring it on all submissions. Well, if you’re a film photographer or painter, there’s not going to be any embedded C2PA metadata.

The right solution is for publishers to be sensible. But also, if at any point you digitize your creations, that’s an occasion to insert the provenance data. We’ll need a tool that’s easy to use for nontechnical people.

No, because it’s blockchain! Maybe even NFTs! · It’s not, but you can see how this comes up, because blockchains also use signature chains; there’s nothing in principle wrong with those. But C2PA doesn’t need any of the zero-trust collective-update crap that makes anything with a blockchain so slow and expensive.

No, because hackers will steal the private key and sign disinformation! · Definitely possible; I mentioned this above. When it comes to computer security, nothing is perfect. All you can ever do is make life more difficult and expensive for the bad guys; eventually, the attack becomes uneconomic. To steal the private key they’d have to figure out how to take the camera apart, get at the C2PA hardware, and break through its built-in tamper-proofing. Which I’m sure a sufficiently well-funded national intelligence agency could do, or a sufficiently nerdy gang of Bolivian narcos.

But, first of all, it wouldn’t be easy, and it probably wouldn’t be terribly fast, and they’d have to steal the camera, hack it, put it back together, and get it back to you without you noticing. Do-able, but neither easy nor cheap. Now, if you’re a Hamas photoblogger, the Mossad might be willing and capable. But in the real world, when it really matters, the attackers are more likely to use the XKCD technique.

No, because websites don’t care, they’ll run any old gory clickbait pic! · Absolutely. C2PA is only for people who actually care about authenticity. I suspect it’s not gonna be a winner at Gab or Truth Social. I hope I’m not crazy in thinking that there are publishing operations who do care about authenticity and provenance.

OK then. How will it be used in practice? · I remain pretty convinced that C2PA can actually provide the provenance-chain capability that it claims to. [Note that the C2PA white papers claim it to be useful for lots of other things that I don’t care about (thanks to vince for pointing that out) and this piece is already too long, so I’ll ignore them.] Which raises the question: Where and how will it be used? I can think of two scenarios: High-quality publishing and social media.

The Quality Publishing workflow · We’re looking at The Economist or New Yorker or some such, where they already employ fact checkers and are aggressive about truth and trust. Their photos mostly come from indies they work with regularly, or big photo agencies.

Let’s look at the indie photographer first. Suppose Nadia has been selling pix to the pub for years, now they want to do C2PA and Nadia has a camera that can. So they tell Nadia to send them a picture of anything with C2PA enabled. They have a little database (Microsoft Access would be just fine) and a little app that does two things. First, when they get the sample photo from Nadia, there’s a button that reads the photo, extracts and verifies the C2PA, and writes an entry in the database containing Nadia’s camera’s public key and the way she likes to be credited.

From then on, whenever they get a pic from Nadia, they feed it to the app and press the other button, which extracts the C2PA, looks up the public key in the database, and checks the signature. If something doesn’t match, there’s a problem and they probably shouldn’t run that picture without checking things out. If everything’s OK, it’ll create a nice little chunk of HTML with the credit to Nadia and a link to the HTML-ized provenance chain to show to anyone who clicks the “provenance” button beside the picture.

You know, if I were building this I’d make sure the database record included the email address. Then I’d set the app up so the photog just emails the picture to it, and the app can use the pubkey to pull the record and check that the email sender matches the database. (There’s a sketch of the whole thing below.)
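Sketched in Python, with SQLite standing in for Access and a stub where a real app would call one of the C2PA SDKs, the whole app is maybe a page of code. Everything here, table layout and function names included, is hypothetical.

```python
# Hypothetical two-button provenance app for a small publisher.
import sqlite3

db = sqlite3.connect("photographers.db")
db.execute("""CREATE TABLE IF NOT EXISTS photographers
              (pubkey TEXT PRIMARY KEY, credit TEXT, email TEXT)""")

def extract_and_verify_c2pa(photo_bytes):
    """Stub: a real app would hand the file to a C2PA SDK here and get
    back the manifest plus the signer's public key, or an error if the
    signature doesn't check out."""
    raise NotImplementedError("wire up c2patool or the Rust/JS SDK")

def register(sample_photo, credit, email):
    """Button one: trusted first contact; remember this camera's key."""
    manifest, pubkey = extract_and_verify_c2pa(sample_photo)
    db.execute("INSERT INTO photographers VALUES (?, ?, ?)",
               (pubkey, credit, email))
    db.commit()

def check_submission(photo, sender_email):
    """Button two: is this pic really from a photographer we know?"""
    manifest, pubkey = extract_and_verify_c2pa(photo)
    row = db.execute("SELECT credit, email FROM photographers "
                     "WHERE pubkey = ?", (pubkey,)).fetchone()
    if row is None or row[1] != sender_email:
        raise ValueError("unknown camera or mismatched sender; investigate")
    return f'<p class="credit">Photo: {row[0]}</p>'  # plus a provenance link
```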

In the case of the agency photographers, the agency could run the database and app on its website and the publisher could just use it. Neither option sounds terribly difficult or expensive to me.

The idea is that displaying the provenance button emphasizes the seriousness of the publisher and makes publishers who aren’t using one look sketchy.

The social-media workflow · The thinking so far seems to have been aimed at the high-end market I just discussed; after all, the first camera to implement C2PA is one of the world’s most expensive. I understand that Nikon has a camera in the pipeline and I bet it’s not going to be cheap either. [Sad footnote: I gather that Sony is building this into its cameras too but, being Sony, it’s not using C2PA but some Sony-proprietary alternative. Sigh.]

But on reflection I’m starting to think that C2PA is a better fit for social media. In that domain, the photos are overwhelmingly taken on mobile-phone cameras, and every app, bar none, has a media-upload feature.

Speaking as a former Android insider, I think it’d be pretty easy to add C2PA to the official Camera app or, failing that, to a C2PA-savvy alternate camera app.

I also think it’d be pretty easy for the Instagrams and TikToks of this world to add C2PA processing to their media-upload services. Obviously this would have to be explicitly opt-in, and it’d probably work about the same way as the Quality Publishing workflow. You’d have to initially upload something with a C2PA manifest to get your public key registered and tied to your social-media identity. Then you’d have to decide whether you wanted to attach C2PA to any particular picture or film-clip you uploaded.

I dunno, on a snakepit of sketchy information like, for example, Reddit, I think there’d be real value, if I happened to get a good picture of a cop brutalizing a protester or a legislator smooching the Wrong Person or Ukrainian troops entering a captured town, in C2PA-equipping that image. Then you could be confident that the trustworthiness of the image is identical to the trustworthiness of the account.

And if some particularly red-hot video capture either didn’t have the “provenance” badge, or it did but was from Igor48295y2 whose account was created yesterday, well then… I’m not so optimistic as to think it’d be dismissed, but it’d be less likely to leak into mainstream media. And — maybe more important — if it were super newsworthy and C2PA-attributable to someone with a good record of trust, it might get on national TV right away without having to wait for the fact-checkers to track down the photog and look for confirming evidence.

Are you done yet, Tim? · Sorry, this one got a little out of control. But over the decades I’ve developed a certain amount of trust in my technology instincts. C2PA smells to me like a good thing that could potentially improve the quality of our society’s public conversation with itself.

And I’m in favor of anything that helps distinguish truth from lies.



Contributions


From: Nathan (Oct 30 2023, at 06:07)

I don't much understand the analog hole argument. I think you made it pretty clear that C2PA is neither a copy-prevention mechanism nor a way to conclusively ensure the persistence of attribution across copies. I assume people making the analog hole argument are fundamentally misunderstanding the technology (or I am).

In all the words, I also think it's possible that the main point about what C2PA is -- and (at least as important) what it is not -- keeps getting lost. It is not a way to ensure that a photo is properly attributed -- I can think of two trivial ways I could provide an unimpeachable C2PA certificate pointing to myself for a photo someone else took.

This second paragraph is why I think C2PA is probably not as much of a slam-dunk for social media as it is for responsible publishing. If you have already decided that a photo taken by a rando and posted to X is an acceptable source, C2PA is unlikely to help you. There are enough bad actors and sufficient incentive to ensure that the many easy ways to spoof this sort of thing will all be employed, and C2PA may very well give ignorant viewers (and ignorant media outlets) a false sense of legitimacy where none exists.

As I see it, the ONLY valid use case is a responsible publisher validating that a photo submission is truly from a trusted source. The trust must be a pre-existing condition, and humans being humans can be broken at any time.

In other words, you've convinced me that C2PA can work, but not that it's going to solve any real problems in society.


From: Larry O’Brien (Oct 30 2023, at 09:57)

What about a "weakest link" argument? As the number of software systems in the chain increases, the provenance is subject to the one with the least (or, perhaps, the most flexible[1]) security. I’m thinking of two vectors: one in which a black hat attacks and gains direct control of journalists’ photo management software (Lightroom or Darktable), because surely (?) they don’t send RAWs straight from their SD cards. And a second vector in which larger disinformation actors normalize unnecessarily long chains, making it impractically burdensome to validate the certificates all along the way.

This isn’t an argument that it’s not an advance or that certs don’t work. It’s just an observation of a possible attack surface.

[1] I’m thinking that OSS software such as Darktable will probably allow multiple upstream cert providers, and one can easily imagine a sufficiently powerful actor creating “a signing authority” that allows for targeted downstream manipulation.


From: Karl Voit (Nov 01 2023, at 02:17)

C2PA and its goals are great.

However, what purpose does it have when Google adds yet another feature to their camera app to fake image content?

This would only make sense if Google made sure to include the original image (or the closest thing to it) as well as the on-device changes, so that everybody is able to see what was changed on the phone before C2PA was applied.


From: Tristan Louis (Nov 03 2023, at 08:37)

First of all, great article.

It seems to me there is indeed a variety of places where this could be useful. A question I would have is whether it's all-in or all-out. Could I demonstrate that this was taken by a camera (as opposed to created by AI) WITHOUT having to put my name to it?

The use case I'm thinking about is people who are reporting on horrors and abuses but who may need, for some reason, to protect their own identity (e.g. the kind of folks who report back to WITNESS). Author-ownership is not as essential there, but veracity (i.e. this was taken by a real camera and not tampered with) appears valuable.

Last but not least, on the authorship model, this could present some interesting component in the world of Gen-AI. If you can prove (and chain) source material, then the author can decide whether they are OK with their work being remixed or used by AI in other forms or not. C2PA would make it possible to create the right boundaries around core content ownership.


From: Sean C (Dec 16 2023, at 15:50)

It’s a little bit weird that the test image of a German camera manufacturer ends up on a Canadian blogger’s site that I happen to read and I recognize the picture was taken in Australia (the base of the Goodwill Bridge in Brisbane).


From: dm (Feb 13 2024, at 13:34)

Nathan:

I think the point is that if I want C2PA-signed disinfo, I can just

- Use DALL-E (or whatever) to generate my image of the pope wearing a funny hat

- Print it out on a nice inkjet

- Photograph that printout with an expensive Leica

I think the counterargument is that you'll be able to tell by, like, depth of field and stuff that the image is of a 2D printout? Tim, correct me if I misunderstand.

The problem I have with the counterargument is that, while true, it seems to devolve to *existing* anti-fake-image arguments: "I can tell from the pixels." Which is the whole thing C2PA is meant to avoid, no?

