SynthID and digital forensics – how to detect AI-generated images
In our previous article on deepfakes, we described the growing problem of forging digital materials using artificial intelligence. Today we focus on the next move in this arms race – invisible watermarks designed to help identify AI-generated content. The SynthID technology developed by Google DeepMind is currently the most important tool of its kind worldwide. However, recent research shows that even it is not infallible.
What is SynthID?
SynthID is an invisible digital watermarking technology developed by Google DeepMind. It works by embedding imperceptible markers into AI-generated content – images, videos, audio recordings and text. The watermark is applied at the moment of creation and remains detectable even after typical modifications such as cropping, compression, resolution changes or filter application.
For images, SynthID operates in the frequency domain – embedding a carrier signal at fixed frequencies with specific phase values. Crucially, this pattern is consistent across all images generated by a given model, enabling source identification even without access to the original file.
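To make the idea of frequency-domain embedding concrete, here is a toy sketch of the general principle: force a few spectral bins to fixed phase values and invert the transform. The carrier positions, phases and strength below are hypothetical illustrations – the actual SynthID carrier layout and encoder are proprietary and not public.

```python
import numpy as np

def embed_frequency_watermark(image, carriers, strength=2.0):
    """Toy frequency-domain watermark for one image channel.

    carriers: list of (row_freq, col_freq, phase) tuples -- hypothetical
    positions; the real SynthID carrier layout is not public.
    """
    spectrum = np.fft.fft2(image.astype(float))
    h, w = image.shape
    for (r, c, phase) in carriers:
        magnitude = strength * np.abs(spectrum).mean()
        spectrum[r, c] = magnitude * np.exp(1j * phase)
        # Mirror into the conjugate-symmetric bin so the inverse
        # transform stays real-valued.
        spectrum[-r % h, -c % w] = np.conj(spectrum[r, c])
    return np.real(np.fft.ifft2(spectrum))

# Demo: mark a random 64x64 "image" at two fixed carrier frequencies.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
carriers = [(5, 9, 0.7), (12, 3, 1.9)]  # hypothetical (row, col, phase)
marked = embed_frequency_watermark(img, carriers)
```

A detector with the same key would simply take the FFT of a suspect image and read the phase back out of those bins – which is also why, as discussed below, anyone who recovers the carrier positions can target them for removal.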
Why AI watermarks matter for forensics
For detectives and forensic experts, technologies like SynthID represent a new verification tool. When a photograph appears as evidence in an investigation – for example in a divorce case, blackmail or insurance fraud – the ability to check whether the image was generated by AI is critical.
Google provides the SynthID Detector portal, which allows independent verification of images, videos and audio recordings. In theory, this gives detectives and legal institutions an effective instrument for distinguishing genuine materials from synthetic ones.
Reverse-SynthID – research that reveals system weaknesses
A research project called reverse-SynthID, published on GitHub, uses reverse engineering to analyse the SynthID watermarking mechanism. Using signal processing and spectral analysis, the researchers managed not only to detect but also to remove the invisible watermarks from images generated by Google Gemini – without access to the proprietary encoder/decoder.
Key findings of the project:
• Resolution dependency – The watermark embeds carriers at different frequency positions depending on image resolution. A profile for a 1024×1024 image does not work on a 1536×2816 image because the carriers occupy completely different frequency bands.
• Phase consistency – The phase pattern is identical across all images from the same Gemini model. Phase coherence at carrier frequencies exceeds 99.5%, enabling cross-image validation.
• Channel selectivity – The green channel carries the strongest watermark signal. Red and blue channels show weaker embedding.
• Removal effectiveness – The V3 method (Multi-Resolution Spectral Codebook) achieves a carrier energy reduction of 75.8% and a phase coherence drop of 91.4%, while maintaining image quality at 43.5 dB PSNR (virtually imperceptible degradation).
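The phase-consistency finding above can be checked with simple circular statistics: collect the phase of a carrier bin across many images and measure how tightly the phases cluster. The sketch below shows the idea; the carrier position and the demo images are hypothetical, not the real Gemini carriers.

```python
import numpy as np

def phase_coherence(images, carrier):
    """Mean resultant length of the carrier-bin phase across images.

    Returns 1.0 when every image carries the same phase at `carrier`
    and values near 0.0 when the phases are unrelated. The carrier
    position used below is a hypothetical example.
    """
    r, c = carrier
    phases = [np.angle(np.fft.fft2(np.asarray(img, dtype=float))[r, c])
              for img in images]
    # Circular statistics: average the unit phasors, take the magnitude.
    return float(np.abs(np.mean(np.exp(1j * np.array(phases)))))

# Demo: five noisy images sharing one cosine carrier at a fixed phase.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
rng = np.random.default_rng(0)
images = [np.cos(2 * np.pi * (5 * yy / h + 9 * xx / w) + 0.7)
          + 0.1 * rng.normal(size=(h, w)) for _ in range(5)]
coherent = phase_coherence(images, (5, 9))  # close to 1.0
```

A coherence above 99.5%, as the project reports for Gemini outputs, is exactly what this statistic would show for a watermark whose phase pattern never changes between images.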
What this means in practice
The results of the reverse-SynthID research have significant implications for digital forensics:
• Watermarks are not irrefutable evidence – The absence of a SynthID watermark does not prove an image is authentic, because the watermark can be removed using publicly available techniques.
• The presence of a watermark is an indicator, not a guarantee – Detecting SynthID confirms origin from a Google model but does not replace a full forensic analysis.
• Detection methods must evolve – A single tool is not enough. Effective verification requires a multi-layered approach combining metadata analysis, compression artefacts, lighting inconsistencies and watermarks.
• Legal aspect – In the context of the European AI Act, which mandates labelling of AI-generated content, the vulnerability of watermarking systems to circumvention raises questions about the enforceability of these regulations.
How PalmGroup approaches AI image verification
At the international detective agency PalmGroup, digital material verification is based on multi-level analysis rather than a single tool. Our process includes:
• checking AI watermarks (SynthID, C2PA and other standards)
• analysing EXIF metadata and file editing history
• detecting generative artefacts – inconsistencies in textures, shadows, reflections and proportions
• frequency analysis (FFT) to search for synthesis traces in the spectral domain
• reverse image searching and comparison with source materials
• contextual verification – whether the situation in the photograph could actually have occurred at the given place and time
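As a simplified illustration of the frequency-analysis step in the process above, the sketch below flags spectral bins whose magnitude stands far above the background – strong isolated peaks can indicate a periodic carrier or synthesis artefact. The z-score threshold and scoring are illustrative choices, not a production detector.

```python
import numpy as np

def spectral_peaks(image, z_thresh=6.0):
    """Return frequency bins whose magnitude stands out from the background.

    A strong isolated peak away from the DC component may point to a
    periodic carrier (watermark or synthesis trace). Threshold and
    normalisation here are illustrative, not forensic-grade.
    """
    spectrum = np.abs(np.fft.fft2(image.astype(float)))
    spectrum[0, 0] = 0.0                      # ignore the DC component
    log_mag = np.log1p(spectrum)
    z = (log_mag - log_mag.mean()) / log_mag.std()
    rows, cols = np.nonzero(z > z_thresh)
    return list(zip(rows.tolist(), cols.tolist()))

# Demo: white noise plus one strong cosine carrier at bin (5, 9).
rng = np.random.default_rng(1)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
img = rng.normal(size=(h, w)) + 5.0 * np.cos(2 * np.pi * (5 * yy / h + 9 * xx / w))
peaks = spectral_peaks(img)
```

In practice, such a peak scan is only one signal among many: a flagged bin still has to be cross-checked against metadata, artefact analysis and context before any conclusion is drawn.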
The findings of the reverse-SynthID project confirm what we know from daily practice: no single technology is sufficient. Only the combination of digital tools with traditional detective work delivers reliable results.
Summary
SynthID technology is an important step towards identifying AI-generated content. However, research such as reverse-SynthID shows that watermarks are not an impenetrable barrier. For detectives and forensic experts, this means that digital verification must remain multi-layered, based on diverse methods and constantly updated in the face of new threats.
At PalmGroup, we continuously monitor the development of AI technologies and their detection methods to provide our clients with the highest level of reliability in analysing evidentiary materials.
Discretion • Effectiveness • International experience
If you need professional verification of the authenticity of digital materials – get in touch with PalmGroup.
