The line between real photographs and AI-generated imagery has blurred to the point where visual inspection alone is no longer reliable. In my recent ISSA presentation, Unmasking the Artificial: Forensic Defense Against Deepfake Imagery, I explored the scientific methods available for authenticating digital media. This post walks through several of those techniques with live demonstrations using real tools and sample images.
The Liars’ Dividend
Before diving into detection techniques, it is worth understanding why this matters beyond academic interest. The mere existence of deepfake technology gives bad actors a new defense: claiming that authentic evidence is fabricated. This phenomenon, known as the Liars’ Dividend, occurs when genuine digital evidence gets dismissed as AI-generated simply because the technology to create convincing fakes exists.
In criminal cases, defendants may challenge authentic photographic evidence by arguing it could have been generated by AI. In civil litigation, parties may dispute the authenticity of screenshots, photographs, or video recordings. The erosion of trust in digital media benefits those who wish to deny reality. As I discuss in Deepfake Defense: AI-Generated Evidence in Criminal Cases, this makes robust forensic authentication not just a technical exercise but a legal necessity.
The ability to prove authenticity is becoming as important as the ability to detect forgery. The detection methods demonstrated below serve both purposes: they can identify synthetic images, and they can provide the evidentiary foundation needed to authenticate genuine photographs against bad-faith challenges.
The Challenge: Can You Tell the Difference?
Consider these two images. One was captured by a Google Pixel 8 Pro smartphone. The other was generated by Google’s Imagen model via Nano Banana (a Gemini-powered image generator). Without metadata or provenance tools, the distinction is not obvious.
Forensic examiners cannot rely on gut instinct. We need repeatable, evidence-based methods. This post walks through three complementary approaches: EXIF metadata analysis for first-pass triage, C2PA content credentials for cryptographic provenance, and structured AI detection prompting for systematic visual assessment. Each method has distinct strengths and limitations, and as we will see, the detection prompt in particular must be understood as a black box whose conclusions are useful for triage but insufficient for legal proceedings.
Demo 1: EXIF Metadata Analysis with ExifTool
ExifTool is the standard utility for extracting metadata embedded in image files. Real cameras embed rich Exchangeable Image File Format (EXIF) data including make, model, lens information, GPS coordinates, exposure settings, and processing software. AI-generated images typically contain minimal or no EXIF metadata.
Real Camera Image: Google Pixel 8 Pro
Running exiftool against a photograph taken with a Google Pixel 8 Pro reveals extensive metadata:
$ exiftool PXL_20260104_180921964.jpg
Make : Google
Camera Model Name : Pixel 8 Pro
Software : HDR+ 1.0.839895461zd
Date/Time Original : 2026:01:04 13:09:21
Exposure Time : 1/492
F Number : 1.9
ISO : 54
Focal Length : 2.2 mm
Focal Length In 35mm Format : 12 mm
Exif Image Width : 4080
Exif Image Height : 3072
Sensing Method : One-chip color area
Scene Type : Directly photographed
Flash : Off, Did not fire
Metering Mode : Center-weighted average
Color Space : sRGB
This metadata tells a coherent story: a Google Pixel 8 Pro using its HDR+ computational photography pipeline, shooting at f/1.9 with ISO 54 in natural light. The 4080x3072 resolution matches the Pixel 8 Pro’s 12.5MP main sensor. The Scene Type: Directly photographed tag further confirms a physical capture event.
The abbreviated output above only scratches the surface. The full ExifTool dump reveals GPS coordinates, ICC color profile data, Google-specific makernotes, and multi-picture format entries. Note the GPS position placing this photograph at specific coordinates:
Full ExifTool output for PXL_20260104_180921964.jpg
ExifTool Version Number : 12.76
File Name : PXL_20260104_180921964.jpg
File Size : 3.3 MB
File Type : JPEG
File Type Extension : jpg
MIME Type : image/jpeg
Exif Byte Order : Little-endian (Intel, II)
Make : Google
Camera Model Name : Pixel 8 Pro
Orientation : Horizontal (normal)
X Resolution : 72
Y Resolution : 72
Resolution Unit : inches
Software : HDR+ 1.0.839895461zd
Modify Date : 2026:01:04 13:09:21
Y Cb Cr Positioning : Centered
Exposure Time : 1/492
F Number : 1.9
Exposure Program : Program AE
ISO : 54
Exif Version : 0232
Date/Time Original : 2026:01:04 13:09:21
Create Date : 2026:01:04 13:09:21
Offset Time : -05:00
Offset Time Original : -05:00
Offset Time Digitized : -05:00
Components Configuration : Y, Cb, Cr, -
Shutter Speed Value : 1/491
Aperture Value : 2.0
Brightness Value : 6.76
Exposure Compensation : 0
Max Aperture Value : 2.0
Subject Distance : 4294967295 m
Metering Mode : Center-weighted average
Flash : Off, Did not fire
Focal Length : 2.2 mm
Sub Sec Time : 964
Sub Sec Time Original : 964
Sub Sec Time Digitized : 964
Flashpix Version : 0100
Color Space : sRGB
Exif Image Width : 4080
Exif Image Height : 3072
Interoperability Index : R98 - DCF basic file (sRGB)
Interoperability Version : 0100
Sensing Method : One-chip color area
Scene Type : Directly photographed
Custom Rendered : Custom
Exposure Mode : Auto
White Balance : Auto
Digital Zoom Ratio : 0
Focal Length In 35mm Format : 12 mm
Scene Capture Type : Standard
Contrast : Normal
Saturation : Normal
Sharpness : Normal
Subject Distance Range : Distant
Lens Make : Google
Lens Model : Pixel 8 Pro back camera 2.23mm f/1.95
GPS Version ID : 2.2.0.0
GPS Latitude Ref : North
GPS Longitude Ref : West
GPS Altitude Ref : Above Sea Level
GPS Time Stamp : 18:08:56
GPS Img Direction Ref : Magnetic North
GPS Img Direction : 50
GPS Date Stamp : 2026:01:04
Compression : JPEG (old-style)
Thumbnail Offset : 1303
Thumbnail Length : 26410
Profile CMM Type :
Profile Version : 4.0.0
Profile Class : Display Device Profile
Color Space Data : RGB
Profile Connection Space : XYZ
Profile Date Time : 2023:03:09 10:57:00
Profile File Signature : acsp
Primary Platform : Unknown ()
CMM Flags : Not Embedded, Independent
Device Manufacturer : Google
Device Attributes : Reflective, Glossy, Positive, Color
Rendering Intent : Perceptual
Connection Space Illuminant : 0.9642 1 0.82491
Profile Creator : Google
Profile ID : 61473528d5aaa311e143dfc93efaa268
Profile Description : sRGB IEC61966-2.1
Profile Copyright : Copyright (c) 2023 Google Inc.
Media White Point : 0.9642 1 0.82491
Media Black Point : 0 0 0
Red Matrix Column : 0.43604 0.22249 0.01392
Green Matrix Column : 0.38512 0.7169 0.09706
Blue Matrix Column : 0.14305 0.06061 0.71391
Chromatic Adaptation            : 1.04788 0.02292 -0.05019 0.02959 0.99048 -0.01704 -0.00922 0.01508 0.75168
XMP Toolkit : Adobe XMP Core 5.1.0-jc003
Version : 1.0
Has Extended XMP : 60A6F51B67476D40376707CB1BF3562C
MPF Version : 0100
Number Of Images : 2
MP Image Type : Undefined
Image Width : 4080
Image Height : 3072
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2)
Aperture : 1.9
Image Size : 4080x3072
Megapixels : 12.5
Scale Factor To 35 mm Equivalent: 5.4
Shutter Speed : 1/492
Create Date : 2026:01:04 13:09:21.964-05:00
Date/Time Original : 2026:01:04 13:09:21.964-05:00
Modify Date : 2026:01:04 13:09:21.964-05:00
GPS Altitude : 194.4 m Above Sea Level
GPS Date/Time : 2026:01:04 18:08:56Z
GPS Latitude : 46 deg 34' 30.52" N
GPS Longitude : 85 deg 15' 20.71" W
Circle Of Confusion : 0.006 mm
Depth Of Field : inf (0.46 m - inf)
Field Of View : 112.6 deg
Focal Length : 2.2 mm (35 mm equivalent: 12.0 mm)
GPS Position : 46 deg 34' 30.52" N, 85 deg 15' 20.71" W
Hyperfocal Distance : 0.46 m
Light Value : 11.8
Lens ID : Pixel 8 Pro back camera 2.23mm f/1.95
AI-Generated Image: Nano Banana (Google Imagen)
Now compare the metadata from the AI-generated sports car image:
$ exiftool SportsCar.png
File Type : PNG
MIME Type : image/png
Image Width : 1024
Image Height : 1024
Bit Depth : 8
Color Type : RGB with Alpha
Compression : Deflate/Inflate
Claim Generator Info Name : Google C2PA Core Generator Library
Actions Action : c2pa.created, c2pa.edited
Actions Description : Created by Google Generative AI., Applied imperceptible SynthID watermark.
Actions Digital Source Type : trainedAlgorithmicMedia
There is no camera make, model, lens data, exposure settings, or GPS coordinates. Instead, the metadata contains C2PA (Coalition for Content Provenance and Authenticity) assertions declaring the image was “Created by Google Generative AI” with a digital source type of trainedAlgorithmicMedia.
Google also applied an imperceptible SynthID watermark. SynthID is a steganographic watermarking technology developed by Google DeepMind that embeds an invisible signal directly into the pixel data of the image during generation. Unlike EXIF metadata or C2PA manifests, which are stored in file headers or container structures that can be stripped, SynthID’s signal is woven into the image content itself. It survives format conversion, compression, cropping, and metadata removal. Even if someone runs exiftool -all= image.png to strip every metadata tag, the SynthID signal remains intact in the pixel values and can be detected by Google’s proprietary classifier. This makes SynthID a more durable provenance signal than any metadata-based approach, though its detection currently requires access to Google’s tools. For a broader discussion of how these detection techniques apply in practice, see Detecting AI-Generated Images.
GAN-Generated Image: No Metadata At All
Older generative models such as GANs (Generative Adversarial Networks) produce images with virtually no metadata:
$ exiftool thispersondoesnotexist.com.jpg
File Type : JPEG
MIME Type : image/jpeg
JFIF Version : 1.01
Resolution Unit : None
X Resolution : 1
Y Resolution : 1
Image Width : 1024
Image Height : 1024
Encoding Process : Progressive DCT, Huffman coding
This image from thispersondoesnotexist.com contains only basic JFIF container metadata. No camera information, no provenance data, no C2PA credentials. The complete absence of EXIF data is itself a forensic indicator, though not conclusive since metadata can be intentionally stripped from legitimate photographs.
The EXIF Transplant Problem
It is important to understand that EXIF metadata can be trivially manipulated. ExifTool itself can copy all metadata from one image to another. To demonstrate, we start by copying the Pixel 8 Pro photograph and confirming its identity:
$ cp PXL_20260104_180921964.jpg test1.jpg
$ exiftool test1.jpg | grep -iE "pixel|canon"
Camera Model Name : Pixel 8 Pro
Lens Model : Pixel 8 Pro back camera 2.23mm f/1.95
Megapixels : 12.5
Lens ID : Pixel 8 Pro back camera 2.23mm f/1.95
Now we transplant the EXIF metadata from a Canon EOS Rebel T100 photograph:
$ exiftool -TagsFromFile IMG_0014.JPG -All:All test1.jpg
1 image files updated
$ exiftool test1.jpg | grep -iE "pixel|canon"
Make : Canon
Camera Model Name : Canon EOS Rebel T100
Canon Flash Mode : Off
Canon Image Size : Large
Canon Exposure Mode : Easy
Lens Type : Canon EF-S 18-55mm f/3.5-5.6 III
Canon Image Type : Canon EOS Rebel T100
Canon Firmware Version : Firmware Version 1.0.1
Canon Model ID : EOS Rebel T100 / 4000D / 3000D
Canon Image Width : 5184
Canon Image Height : 3456
Lens ID : Canon EF-S 18-55mm f/3.5-5.6 III
Megapixels : 12.5
Every reference to “Pixel” has been replaced with “Canon.” The image pixels are unchanged, but the metadata now claims a completely different camera. This is why EXIF analysis is a useful first-pass triage tool but cannot serve as the sole basis for authentication in forensic or legal contexts. For a comprehensive overview of the layered approach required for reliable detection, see Identifying Synthetic Images: Advanced Methods for Spotting AI-Created Photos.
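Transplants like this are often detectable through internal cross-checks, because some values describe the real pixel data while the copied makernote describes the donor camera. Note in the output above that Megapixels remained 12.5 (ExifTool derives it from the actual image stream) while the Canon makernote claims 5184x3456 (about 17.9 MP). A hypothetical checker exploiting that disagreement:

```python
def makernote_mismatch(tags: dict) -> bool:
    """Flag images whose makernote dimensions disagree with the real pixels.

    Megapixels is computed from the actual JPEG stream, while the
    Canon Image Width/Height tags travel with the copied makernote,
    so a transplant leaves them inconsistent. Tag names and the
    0.5 MP tolerance are illustrative assumptions.
    """
    claimed = tags["CanonImageWidth"] * tags["CanonImageHeight"] / 1_000_000
    return abs(claimed - tags["Megapixels"]) > 0.5

# The transplanted file: makernote says ~17.9 MP, real pixels are 12.5 MP.
print(makernote_mismatch(
    {"CanonImageWidth": 5184, "CanonImageHeight": 3456, "Megapixels": 12.5}
))  # True
```

A careful forger can of course align these values too, which is exactly why EXIF analysis stays in the triage tier.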
Demo 2: C2PA Content Credentials with c2patool
The C2PA standard represents a fundamental shift from metadata-based identification to cryptographic provenance. Rather than relying on easily forged EXIF tags, C2PA uses cryptographically signed manifests that chain assertions about an image’s origin and editing history.
Real Camera Image: No C2PA Claims
$ c2patool PXL_20260104_180921964.jpg
Error: No claim found
Most consumer cameras do not yet embed C2PA credentials. This will change as the standard gains adoption, but currently the absence of C2PA data is expected for legitimate photographs from most devices.
AI-Generated Image: Full C2PA Manifest
Running c2patool against the Nano Banana sports car image reveals a rich provenance chain:
$ c2patool SportsCar.png
{
"active_manifest": "urn:c2pa:73ef9ac9-6a23-1225-821e-4e728b2652ec",
"manifests": {
"urn:c2pa:dc51b723-dcf7-5537-65e8-43647e52976c": {
"claim_generator_info": [
{
"name": "Google C2PA Core Generator Library"
}
],
"assertions": [
{
"label": "c2pa.actions.v2",
"data": {
"actions": [
{
"action": "c2pa.created",
"digitalSourceType": "trainedAlgorithmicMedia",
"description": "Created by Google Generative AI."
},
{
"action": "c2pa.edited",
"digitalSourceType": "trainedAlgorithmicMedia",
"description": "Applied imperceptible SynthID watermark."
}
]
}
}
],
"signature_info": {
"alg": "Es256",
"issuer": "Google LLC",
"common_name": "Google Media Processing Services",
"time": "2026-02-25T17:37:23+00:00"
}
},
"urn:c2pa:73ef9ac9-6a23-1225-821e-4e728b2652ec": {
"assertions": [
{
"label": "c2pa.actions.v2",
"data": {
"actions": [
{
"action": "c2pa.opened"
},
{
"action": "c2pa.edited",
"digitalSourceType": "trainedAlgorithmicMedia",
"description": "Added imperceptible SynthID watermark"
},
{
"action": "c2pa.edited",
"digitalSourceType": "composite",
"description": "Added visible watermark"
},
{
"action": "c2pa.converted",
"description": "Converted to .png"
}
]
}
}
],
"signature_info": {
"issuer": "Google LLC",
"common_name": "Google Media Processing Services",
"time": "2026-02-25T17:37:24+00:00"
}
}
}
}
The C2PA manifest tells a detailed story:
- Creation: The inner manifest declares a c2pa.created action with digital source type trainedAlgorithmicMedia and the description “Created by Google Generative AI.”
- SynthID: An imperceptible watermark was applied using Google’s SynthID technology.
- Post-processing: The outer manifest shows subsequent edits, including a visible watermark and format conversion to PNG.
- Cryptographic signing: Both manifests are signed by “Google Media Processing Services” using ECDSA (Es256), with timestamps.
When the C2PA signature is intact, this is powerful evidence of provenance. The cryptographic chain links the image to a specific generator, at a specific time, with a verifiable signature from a known issuer.
However, C2PA credentials are not indestructible. Tools like c2patool itself can remove C2PA manifests, and passing an image through certain editing pipelines or format conversions can strip the embedded JUMBF data. The standard is designed to be tamper-evident, not tamper-proof: if someone removes the credentials, the image simply loses its provenance chain. C2PA is therefore only trustworthy when a signature is both present and cryptographically valid. An image with a valid manifest from a trusted issuer provides strong provenance evidence; an image without C2PA data is simply unattested and could be authentic, synthetic, or anything in between.
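The JSON that c2patool emits is straightforward to post-process. The sketch below walks every manifest and collects the declared digital source types; the structure mirrors the output shown above, but note that real verification must also validate the signature chain, which this sketch deliberately does not attempt.

```python
import json

def declared_source_types(report: dict) -> set:
    """Collect every digitalSourceType asserted across all manifests
    in a c2patool-style JSON report."""
    found = set()
    for manifest in report.get("manifests", {}).values():
        for assertion in manifest.get("assertions", []):
            for action in assertion.get("data", {}).get("actions", []):
                if "digitalSourceType" in action:
                    found.add(action["digitalSourceType"])
    return found

# Trimmed version of the SportsCar.png report shown above.
report = json.loads("""{
  "manifests": {
    "urn:c2pa:dc51b723": {
      "assertions": [{
        "label": "c2pa.actions.v2",
        "data": {"actions": [
          {"action": "c2pa.created",
           "digitalSourceType": "trainedAlgorithmicMedia"}
        ]}
      }]
    }
  }
}""")

print("trainedAlgorithmicMedia" in declared_source_types(report))  # True
```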
Hardware vs. Software Credentials
An important distinction exists between hardware-embedded credentials (from cameras with built-in C2PA support, such as the Leica M11-P or Sony Alpha 1) and software-based assertions (from platforms like Google, Adobe, or OpenAI). Hardware credentials carry stronger forensic weight because the signing key is bound to a physical device. Software assertions depend on the platform’s integrity and signing practices. Both are valuable, but they represent different levels of assurance.
Demo 3: AI Detection Prompting
Large language models with vision capabilities can analyze images for indicators of AI generation. However, this approach requires careful framing and an honest acknowledgment of its limitations.
The Detection Prompt
The following prompt structures a systematic forensic analysis across ten indicator categories. It can be used with any vision-capable LLM (such as GPT-4o, Claude, or Gemini) by attaching an image and submitting this text:
Analyze this image and determine whether it is AI-generated or captured by a real
camera. Examine the following forensic indicators and provide a detailed rationale
for your conclusion:
Texture and surface quality — Are textures organic and naturally varied, or do they
show signs of over-smoothing, repetition, or synthetic uniformity?
Lighting and shadows — Are light sources physically consistent? Do shadows fall
correctly relative to the apparent light direction?
Fine detail rendering — Examine edges, hair, fur, fabric, water, or other complex
surfaces. Do they show natural irregularity or synthetic perfection?
Frequency characteristics — Does the image appear to have the natural noise grain
of a camera sensor, or does it have the smooth, noiseless quality typical of
diffusion model output?
Contextual coherence — Do all elements in the scene make physical and spatial
sense? Are proportions, reflections, and perspective geometrically consistent?
Facial features — If faces are present, examine eyes, ears, teeth, and hairlines
for uncanny symmetry, impossible reflections, or subtle anatomical errors.
Text and labels — If any text appears in the image, is it legible, correctly
spelled, and contextually appropriate?
Background integrity — Does the background show natural complexity, or does it
appear artificially simple, repetitive, or incoherent at the edges?
Compression and artifact signature — Does the image show natural JPEG compression
patterns consistent with a camera, or does it show the smooth, artifact-free output
typical of a generative model?
Watermarking indicators — Examine the image for any visible AI disclosure markers
such as a Google Gemini four-pointed star icon, a "Generated with AI" label, or any
platform-specific disclosure badge in the corners or edges of the image.
Additionally, note whether any metadata or C2PA content credentials are present
asserting AI origin. Note that invisible watermarks such as Google SynthID or
Meta's frequency-domain embedding cannot be detected through visual inspection
alone and require proprietary detectors — state this limitation explicitly if
relevant.
Based on your analysis of these indicators, state your conclusion: Real photograph
or AI-generated, along with a confidence level (Low / Medium / High) and a summary
of the two or three most compelling indicators that drove your conclusion. If
watermarking indicators were present, note whether they were visible markers,
metadata assertions, or invisible signals beyond the scope of visual analysis.
The Black Box Disclaimer
Any AI-based image detection method must be treated as a black box for forensic purposes. Unlike EXIF analysis or C2PA validation, which produce deterministic and reproducible results, an AI model’s classification of an image as “real” or “synthetic” is probabilistic. The model cannot explain its reasoning in a way that satisfies Daubert standards for expert testimony. It may perform well on average but fail unpredictably on specific images.
AI detection prompting is useful for triage and preliminary assessment. It can flag images for deeper forensic analysis using validated scientific methods. But it should never serve as the sole basis for a legal conclusion about an image’s authenticity. The court-admissible methods remain camera ballistics (PRNU analysis), cryptographic provenance (C2PA), and traditional forensic examination of image internals.
The Prompt Engineering Arms Race
The image generation prompt used to create the AI motorcycle image in our sample set was specifically crafted to mimic real camera characteristics:
A motorcycle photographed with a Google Pixel 8 smartphone camera, main 50mm
equivalent lens, f/1.68 aperture, ISO 80, 1/1000s shutter speed. Google
Computational Photography processing with HDR+ active, characteristic Pixel
"punchy" color science with slightly boosted shadows and lifted blacks.
Natural outdoor daylight, realistic lens flare from sun at edge of frame.
Subtle computational sharpening on edges typical of Google's image processing
pipeline, slight over-sharpening on chrome and metal surfaces. Real Tone
skin-tone optimization visible on matte surfaces. The motorcycle is parked in
an urban environment, slight low angle, three-quarter rear angle. Background
subject separation from computational bokeh portrait mode, with slight
edge-blending artifacts where the bokeh mask meets the motorcycle frame — a
known Pixel portrait mode characteristic. Photorealistic, casual street
photography aesthetic, not studio lit. Shot handheld with imperceptible motion
stabilization. 12MP resolution rendering.
This illustrates the adversarial nature of the problem. Prompt engineering can simulate the visual characteristics of specific camera systems — the color science, the bokeh rendering, even the lens flare patterns. However, no prompt can defeat the forensic methods that operate below the pixel level. The SynthID watermark embedded during generation survives regardless of how camera-like the output appears. PRNU analysis will reveal the absence of a genuine sensor fingerprint. C2PA manifests, when present, will still declare the AI origin. The prompt engineering arms race is real at the visual inspection level, but the scientific detection methods discussed in this post currently remain effective against even the most sophisticated generation prompts.
Summary of Detection Methods
| Method | Strength | Limitation | Forensic Weight |
|---|---|---|---|
| EXIF Analysis | Fast triage; rich camera data when present | Metadata can be stripped or forged | Low — easily manipulated |
| C2PA Validation | Cryptographic provenance; tamper-evident | Requires platform adoption; absence is not proof of authenticity | High — when present |
| AI Detection Prompting | Accessible; covers visual indicators | Black box; not reproducible; adversarially vulnerable | Low — triage only |
| PRNU / Camera Ballistics | Device-specific fingerprint; court-admissible | Requires reference database; computationally intensive | High — scientific basis |
| Frequency Domain Analysis | Can identify GAN/diffusion artifacts | Evolving as generators improve | Medium — research stage |
Conclusion
No single detection method is sufficient. Effective forensic analysis of digital imagery requires layering multiple approaches: EXIF metadata for initial triage, C2PA credentials for cryptographic provenance when available, AI detection prompting for structured preliminary assessment, and scientific methods like PRNU analysis for court-admissible authentication.
Revisiting the AI detection prompt from Demo 3: while it provides a useful framework for systematic visual analysis, remember that it operates as a black box. In contrast, the EXIF analysis in Demo 1 and the C2PA validation in Demo 2 produce deterministic, reproducible results. The detection prompt is best used as a triage tool that identifies images warranting deeper forensic scrutiny — not as a final determination of authenticity.
The shift toward cryptographic provenance via C2PA represents the most promising long-term direction. As more cameras embed hardware-level content credentials and more platforms sign their outputs, the question will increasingly become not “is this image real?” but rather “does this image have a verifiable chain of custody?” Combined with durable watermarking technologies like SynthID that survive metadata stripping, the forensic toolkit for authenticating digital media is becoming increasingly robust — even as the generators themselves become more convincing.
Returning to the Liars’ Dividend: these detection methods serve a dual purpose. They identify synthetic images, and they provide the evidentiary foundation to defend authentic images against bad-faith challenges. When a forensic examiner can demonstrate through PRNU analysis that an image was captured by a specific physical device, or when a C2PA manifest cryptographically proves an image’s provenance chain, the Liars’ Dividend loses its power.
For a deeper exploration of these topics, including the principles of camera ballistics and model fingerprinting techniques, see the slides and notes from my ISSA presentation: Unmasking the Artificial: Forensic Defense Against Deepfake Imagery.

