HN HRCB stories | rights | sources | trends | system | about
+0.05 SynthID (deepmind.google)
62 points by tosh 5 hours ago | 67 comments on HN | Neutral Product · v3.7
Summary: AI Transparency & Content Authentication
The SynthID product page presents a tool for watermarking and identifying AI-generated content, framed within a mission to build AI responsibly. The page's substantive content is incomplete, but observable navigation messaging emphasizes responsibility and human benefit. Key signals: moderate positive on information transparency (Article 19) from the product's identification purpose; mild positive on mission and safety framing; mild negative on privacy due to tracking without visible consent mechanisms.
Article Heatmap
Preamble: +0.15 — Preamble
Article 1: +0.15 — Freedom, Equality, Brotherhood
Article 2: No Data — Non-Discrimination
Article 3: +0.13 — Life, Liberty, Security
Article 4: No Data — No Slavery
Article 5: No Data — No Torture
Article 6: No Data — Legal Personhood
Article 7: No Data — Equality Before Law
Article 8: -0.23 — Right to Remedy
Article 9: No Data — No Arbitrary Detention
Article 10: No Data — Fair Hearing
Article 11: No Data — Presumption of Innocence
Article 12: -0.23 — Privacy
Article 13: No Data — Freedom of Movement
Article 14: No Data — Asylum
Article 15: No Data — Nationality
Article 16: No Data — Marriage & Family
Article 17: No Data — Property
Article 18: No Data — Freedom of Thought
Article 19: +0.28 — Freedom of Expression
Article 20: No Data — Assembly & Association
Article 21: No Data — Political Participation
Article 22: No Data — Social Security
Article 23: No Data — Work & Equal Pay
Article 24: No Data — Rest & Leisure
Article 25: No Data — Standard of Living
Article 26: No Data — Education
Article 27: No Data — Cultural Participation
Article 28: +0.13 — Social & International Order
Article 29: No Data — Duties to Community
Article 30: No Data — No Destruction of Rights
Negative Neutral Positive No Data
Aggregates
Weighted Mean: +0.05 · Unweighted Mean: +0.05
Max: +0.28 (Article 19) · Min: -0.23 (Article 8)
Signal: 7 · No Data: 24
Confidence: 14% · Volatility: 0.18 (Medium)
Negative: 2 · Channels: E 0.5 / S 0.5
SETL: +0.11 (Editorial-dominant)
FW Ratio: 52% (15 facts · 14 inferences)
Evidence: High 0 · Medium 7 · Low 0 · No Data 24
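
The displayed aggregates follow directly from the seven scored articles in the heatmap; a minimal sketch (scores copied from the heatmap above):

```python
# Scores for the seven articles with data, as shown in the heatmap.
scores = {
    "Preamble": 0.15,
    "Article 1": 0.15,
    "Article 3": 0.13,
    "Article 8": -0.23,
    "Article 12": -0.23,
    "Article 19": 0.28,
    "Article 28": 0.13,
}

unweighted_mean = sum(scores.values()) / len(scores)
max_article = max(scores, key=scores.get)
min_article = min(scores, key=scores.get)

print(f"Unweighted mean: {unweighted_mean:+.2f}")          # +0.05
print(f"Max: {scores[max_article]:+.2f} ({max_article})")  # +0.28 (Article 19)
print(f"Min: {scores[min_article]:+.2f} ({min_article})")  # -0.23 (Article 8)
```

Article 8 and Article 12 tie at -0.23; the dashboard, like this sketch, reports the first in article order.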
Theme Radar
Foundation: 0.15 (2 articles)
Security: 0.13 (1 article)
Legal: -0.23 (1 article)
Privacy & Movement: -0.23 (1 article)
Personal: 0.00 (0 articles)
Expression: 0.28 (1 article)
Economic & Social: 0.00 (0 articles)
Cultural: 0.00 (0 articles)
Order & Duties: 0.13 (1 article)
HN Discussion 18 top-level · 19 replies
u1hcw9nx 2026-02-26 17:34 UTC link
This technology could be used for copyrights as well.

>The watermark doesn’t change the image or video quality. It’s added the moment content is created, and designed to stand up to modifications like cropping, adding filters, changing frame rates, or lossy compression.

But does it survive if you use another generative image model to replicate the image?

andrewmcwatters 2026-02-26 18:13 UTC link
I wonder how it stands up to feature analysis.

"Generate a pure white image." "Generate a pure black image." Channel diff, extract steganographic signature for analysis.

throwaway13337 2026-02-26 18:18 UTC link
These sorts of tools will only be able to positively identify a subset of genAI content. But I suspect that people will use it to 'prove' something is not genAI.

In a sense, the identifier company can be an arbiter of the truth. Powerful.

Training people on a half-solution like this might do more harm than good.

gregorkas 2026-02-26 18:18 UTC link
I genuinely feel that in this AI world we need the inverse: that every analogue or digital photo taken by traditional means of photography be signed with a certificate, so anyone can verify its authenticity.
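
A minimal sketch of the capture-signing idea. Real schemes (C2PA and camera-maker implementations) use asymmetric signatures held in a secure element; a symmetric HMAC stands in here only because it fits the standard library, and the key and byte strings are hypothetical:

```python
import hashlib
import hmac

# Sign image bytes at capture time so a verifier holding the key can
# check authenticity. NOTE: real designs use asymmetric signatures
# (sign with a device-private key, verify with a public key); HMAC is
# a stdlib stand-in for the overall flow, not the actual mechanism.

CAMERA_KEY = b"device-secret-held-in-secure-element"  # hypothetical

def sign_capture(image_bytes: bytes) -> str:
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    expected = sign_capture(image_bytes)
    return hmac.compare_digest(expected, signature)

photo = b"\x89PNG...raw sensor data..."  # placeholder bytes
sig = sign_capture(photo)
print(verify_capture(photo, sig))            # True
print(verify_capture(photo + b"edit", sig))  # False: any edit breaks it
```

The failure on edited bytes is exactly the re-signing problem raised downthread: every legitimate edit (even a levels adjustment) invalidates the original signature.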
squigz 2026-02-26 18:27 UTC link
Looks like there's a lot more info here, at least about the text version.

https://ai.google.dev/responsible/docs/safeguards/synthid

kingstnap 2026-02-26 18:28 UTC link
It's security through obscurity. I'm sure with the technical details or even just sufficient access to a predictive oracle you could break this.

But I suppose it adds friction, so it's better than nothing.

Watermarking text without affecting it is an interesting, seemingly weird idea. Does it work any better than just observing (with knowledge of the model used to produce said text) that the perplexity is low because it's "on policy" generated text?
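
The perplexity test mentioned above reduces to one formula: the exponential of the negative mean token log-probability under the candidate model. The log-prob values below are made up for illustration; in practice they come from scoring the text with the model:

```python
import math

# With access to the generating model, score a text's perplexity.
# On-policy (model-generated) text tends to sit at unusually low
# perplexity; human text shows more variance. Log-probs here are
# hypothetical stand-ins for real model scores.

def perplexity(token_logprobs):
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

model_text_lps = [-0.4, -0.6, -0.3, -0.5, -0.4]  # hypothetical, "on policy"
human_text_lps = [-2.1, -3.4, -1.8, -2.9, -2.5]  # hypothetical, off policy

print(round(perplexity(model_text_lps), 2))  # 1.55: suspiciously fluent
print(round(perplexity(human_text_lps), 2))  # 12.68: typical human spread
```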

PaulHoule 2026-02-26 18:39 UTC link

   ...But it can be hard to tell the difference between content that’s been 
   AI-generated, and content created without AI.
Pro-Tip: Something like that sherbet-colored dog is always AI generated
zelias 2026-02-26 18:48 UTC link
Seems like this really just validates whether a piece of AI content was generated by Google, not AI generated in general

What incentive do open models have to adopt this?

parliament32 2026-02-26 18:51 UTC link
Note that watermarking (yes, including text) is a requirement[1] of the EU AI Act, and goes into effect in August 2026, so I suspect we'll see a lot more work in this space in the near future.

[1] Specifically, "...synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated", see https://artificialintelligenceact.eu/article/50/

ChrisArchitect 2026-02-26 18:57 UTC link
something new here OP?

Some previous discussion:

https://news.ycombinator.com/item?id=45071677

geor9e 2026-02-26 18:59 UTC link
This is from 2025. Did something new happen? What am I missing here?
Aldipower 2026-02-26 19:02 UTC link
As a synthesizer collector with serious GAS I find this particular name very offensive.
jamiecode 2026-02-26 19:08 UTC link
The text watermarking is the more interesting problem here. Image watermarking is fairly tractable - you can embed a robust signal in spatial or frequency domains. Text watermarking works by biasing token selection at generation time, and detection is a statistical test over that distribution.

Which means short texts are basically useless. A 50-token reply has too little signal for the test to reach any confidence. The original SynthID text paper puts minimum viable detection at a few hundred tokens - so for most real-world cases (emails, short posts, one-liners) it just doesn't work.

The other thing: paraphrase attacks break it. Ask any other model to rewrite watermarked text and the watermark is gone, because you're now sampling from a different distribution. EU compliance built on top of this feels genuinely fragile for anything other than long-form content from controlled providers.

ks2048 2026-02-26 19:20 UTC link
How about a database of verified non-AI images?

I'm thinking of historical images, where there aren't a huge number of existing images and no more will ever be created.

If I see something labeled "Street scene in Paris, 1905". I want to know if it is legit.

galleywest200 2026-02-26 19:21 UTC link
This is great, but there is no way for me to verify if groups or nation states can pay for a special contract where they do not have to have their outputs watermarked.
gigel82 2026-02-26 19:24 UTC link
Reposting a comment I made on an earlier thread on this.

We need to be super careful with how legislation around this is passed and implemented. As it currently stands, I can totally see this as a backdoor to surveillance and government overreach.

If social media platforms are required by law to categorize content as AI generated, this means they need to check with the public "AI generation" providers. And since there is no agreed-upon (public) standard for imperceptible watermark hashing, that means the content (image, video, audio) in its entirety needs to be uploaded to the various providers to check if it's AI generated.

Yes, it sounds crazy, but that's the plan; imagine every image you post on Facebook/X/Reddit/Whatsapp/whatever gets uploaded to Google / Microsoft / OpenAI / UnnamedGovernmentEntity / etc. to "check if it's AI". That's what the current law in Korea and the upcoming laws in California and EU (for August 2026) require :(

manbash 2026-02-26 19:30 UTC link
It's nice that they explain the "what" (...it is doing) but not the "why". Who is going to use it and for what reasons?

Also, if it's essentially a sort of metadata, can't the output generated image be replicated (e.g. screenshot) and thus stripped of any such data?

ekjhgkejhgk 2026-02-26 19:33 UTC link
Is there a paper for this?
lxgr 2026-02-26 18:12 UTC link
> This technology could be used for copyrights as well.

That's been a thing for a while: https://en.wikipedia.org/wiki/Digital_watermarking

nerdsniper 2026-02-26 18:20 UTC link
Extremely doubtful, due to the way that embedding and diffusion works. I would be utterly floored if they had achieved that.
greensoap 2026-02-26 18:21 UTC link
It will just be an arms race if we try to prove "not genAI." Detectors will improve, genAI will improve without marking (opensource and state actors will have unmarked genAI even if we mandate it).

Marking real content from the lens onward through its digital life is more practical. But then what do we do with all the existing hardware that doesn't mark real content, and the media that preexisted this problem?

hedora 2026-02-26 18:22 UTC link
Some cameras support this, but usually only for raw.

Note that your cell phone camera is using gen AI techniques to counteract sensor noise.

Was that famous person in the background really there, or a hallucination filling in static?

Who knows at this point? So, the signatures you proposed need to have some nuance around what they’re asserting.

sippeangelo 2026-02-26 18:25 UTC link
It is actively harmful to society. Slap SynthID on some of the photographic evidence from the unreleased Epstein files and instantly de-legitimize it. Launder a SynthID image through a watermark-free model and it's legit again. The fact that it exists at all can't be interpreted in any other way than as malice.
observationist 2026-02-26 18:26 UTC link
You could take a picture or video with your phone of a screen or projection of an altered media and thereby capture a watermarked "verified" image or video.

None of these schemes for validation of digital media will work. You need a web of trust, repeated trustworthy behavior by an actor demonstrating fidelity.

You need people and institutions you can trust, who have the capability of slogging through the ever more turbulent and murky sea of slop and using correlating evidence and scientific skepticism and all the cognitive tools available to get at reality. Such people and institutions exist. You can also successfully proxy validation of sources by identifying people or groups good at identifying primary sources.

When people and institutions defect, as many legacy media, platforms, talking heads, and others have, you need to ruthlessly cut them out of your information feed. When or if they correct their mistake, just follow tit for tat, and perhaps they can eventually earn back their place in the de-facto web of trust.

Google's stamp of approval means less than nothing to me; it's a countersignal, indicating I need to put even more effort than otherwise to confirm the truthfulness of any claims accompanied by their watermark.

amingilani 2026-02-26 18:26 UTC link
I just tried this idea, and it looks like it isn't that simple.

> "Generate a pure white image."

It refused no matter how I phrased it ¯\_(ツ)_/¯

> "Generate a pure black image."

It did give me one. In a new chat, I asked Gemini to detect SynthID with "@synthid". It responded with:

> The image contains too little information to make a diagnosis regarding whether it was created with Google AI. It is primarily a solid black field, and such content typically lacks the necessary data for SynthID to provide a definitive result.

Further research: Does a gradient trigger SynthID? IDK, I have to get back to work.

gumby271 2026-02-26 18:27 UTC link
I'm sure Apple would love that too. More seriously, would that also mean all editing tools would need to re-sign a photo that was previously signed by the original sensor. How do we distinguish an edit that's misleading vs just changing levels? It's an interesting area for sure, but this inverse approach seems much trickier.
yjftsjthsd-h 2026-02-26 18:32 UTC link
And how do you fix the analog hole? Because if you can point your "verified" camera at a sufficiently high-resolution screen, we're worse off than when we started.
Coeur 2026-02-26 18:33 UTC link
This already exists: https://c2pa.org , https://en.wikipedia.org/wiki/Content_Authenticity_Initiativ... . Support by camera makers is spotty.
pavel_lishin 2026-02-26 18:41 UTC link
You'd be surprised what dog owners do sometimes.
raincole 2026-02-26 18:58 UTC link
The EU really likes unenforceable regulations, doesn't it?
alibero 2026-02-26 19:00 UTC link
I've been looking into this. There seems to be some mostly-repeating 2D pattern in the LSB of the generated images. The magnitude of the noise seems to be larger in the pure black image vs pure white image. My main goal is to doctor a real image to flag as positive for SynthID, but I imagine if you smoothed out the LSB, you might be able to make images (especially very bright images) no longer flag as SynthID? Of course, it's possible there's also noise in here from the image-generation process...

Gemini really doesn't like generating pure-white images but you can ask it to generate a "photograph of a pure-white image with a black border" and then crop it. So far I've just been looking at pure images and gradients, it's possible that more complex images have SynthID embedded in a more complicated way (e.g. a specific pattern in an embedding space).
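
The scrub speculated about above (smooth out the LSBs so any pattern hiding there is destroyed) is a one-liner over pixel data. Whether SynthID actually lives in the LSBs is the commenter's hypothesis, not an established fact, and the image here is a toy nested list:

```python
# Flatten the low bits of every channel value, erasing any pattern
# that is only 1-2 bits deep. Toy nested-list image; real use would
# operate on decoded pixel arrays.

def scrub_lsb(img, bits=1):
    """Zero the low `bits` of every channel value."""
    mask = ~((1 << bits) - 1) & 0xFF
    return [[tuple(c & mask for c in px) for px in row] for row in img]

img = [[(254, 255, 37), (128, 129, 200)],
       [(255, 254, 36), (129, 128, 201)]]

print(scrub_lsb(img))
# Values differing only in the last bit (254/255, 128/129, 36/37)
# collapse to the same output, so a 1-bit-deep pattern cannot survive.
```

Consistent with the reply below, a watermark robust to cropping and recompression almost certainly carries signal above the LSB plane too, so this alone is unlikely to defeat it.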

elpocko 2026-02-26 19:08 UTC link
It doesn't. I don't have a link for you right now but there was a post on reddit recently showing that SynthID is removed from images by passing the image through a diffusion model for a single step at low denoise. The output image is identical to the input image (to the human eye).
dpe82 2026-02-26 19:24 UTC link
The act doesn't explicitly require watermarking, does it?
doctorpangloss 2026-02-26 19:25 UTC link
haha "you" say this, when your comment was written by an LLM! it's watermarked!
pegasus 2026-02-26 19:33 UTC link
Long-form content from controlled providers is by far the lion's share of what needs this regulation, at least at the moment. Perfect is the enemy of good enough. Or at least of better than the status quo.
osculum 2026-02-26 19:33 UTC link
Years ago, I worked at Apple at the same time as Ian Goodfellow. This was before ChatGPT (I'd say around 2019).

I had the chance to chat with him, and what I remember most was his concern that GANs would eventually be able to generate images indistinguishable from reality, and that this would create a misinformation problem. He argued for exactly what you’re mentioning: chips that embed cryptographic proof that a photo was captured by a camera and haven't been modified.

ekjhgkejhgk 2026-02-26 19:34 UTC link
Link to the paper please?
Editorial Channel
What the content says
+0.35
Article 19 Freedom of Expression
Medium F-Purpose A-Advocacy
Editorial
+0.35
SETL
+0.23

SynthID's stated purpose is 'Watermark and identify AI content.' This directly enables detection and authentication of content source, supporting the right to seek and receive truthful information. Explicit identification (rather than hidden manipulation) aligns with transparency principles

+0.20
Preamble Preamble
Medium A-Mission
Editorial
+0.20
SETL
+0.14

Navigation contains mission statement 'Our mission is to build AI responsibly to benefit humanity,' which explicitly references human benefit and dignity

+0.20
Article 1 Freedom, Equality, Brotherhood
Medium A-Mission
Editorial
+0.20
SETL
+0.14

Mission statement implies all human beings should benefit from technological advancement, suggesting universal rather than particular interests

+0.15
Article 28 Social & International Order
Medium A-Mission
Editorial
+0.15
SETL
+0.09

Mission statement frames AI development as requiring responsibility and serving humanity, relating to establishment of 'social and international order' that respects rights. Suggests commitment to rights-respecting development environment

+0.10
Article 3 Life, Liberty, Security
Medium A-Advocacy
Editorial
+0.10
SETL
-0.09

Navigation references 'AI safety through proactive security, even against evolving threats.' SynthID's function (identifying AI content) aligns with detection of potential synthetic threats to security

-0.20
Article 8 Right to Remedy
Medium P-Tracking
Editorial
-0.20
SETL
+0.11

No discussion of privacy rights or protections. Page content does not acknowledge or address privacy considerations

-0.20
Article 12 Privacy
Medium P-Tracking
Editorial
-0.20
SETL
+0.11

No discussion of privacy in personal matters or correspondence. Content is silent on privacy protections

ND
Article 2 Non-Discrimination

No observable content addressing non-discrimination or protection from discrimination

ND
Article 4 No Slavery

No observable content on slavery or servitude

ND
Article 5 No Torture

No observable content on torture or degrading treatment

ND
Article 6 Legal Personhood

No observable content on right to recognition before law

ND
Article 7 Equality Before Law

No observable content on equal protection before law

ND
Article 9 No Arbitrary Detention

No observable content on arbitrary arrest or detention

ND
Article 10 Fair Hearing

No observable content on fair hearing or due process

ND
Article 11 Presumption of Innocence

No observable content on presumption of innocence

ND
Article 13 Freedom of Movement

No observable content on freedom of movement

ND
Article 14 Asylum

No observable content on asylum or refuge

ND
Article 15 Nationality

No observable content on nationality

ND
Article 16 Marriage & Family

No observable content on marriage and family

ND
Article 17 Property

No observable content on property rights

ND
Article 18 Freedom of Thought

No observable content on freedom of thought, conscience, religion

ND
Article 20 Assembly & Association

No observable content on freedom of assembly or association

ND
Article 21 Political Participation

No observable content on participation in government

ND
Article 22 Social Security

No observable content on social security or economic rights

ND
Article 23 Work & Equal Pay

No substantive content on work or employment conditions

ND
Article 24 Rest & Leisure

No observable content on rest and leisure

ND
Article 25 Standard of Living

No observable content on health, food, clothing, housing

ND
Article 26 Education

No substantive content on education

ND
Article 27 Cultural Participation

No observable content on cultural participation or intellectual property

ND
Article 29 Duties to Community

No observable content on duties or limitations

ND
Article 30 No Destruction of Rights

No observable content on prevention of rights destruction

Structural Channel
What the site does
+0.20
Article 19 Freedom of Expression
Medium F-Purpose A-Advocacy
Structural
+0.20
Context Modifier
ND
SETL
+0.23

Product page presents SynthID as a tool for content verification and identification. Navigation structure positions this within broader responsibility and research framework. However, no detail on effectiveness, limitations, or safeguards against misuse

+0.15
Article 3 Life, Liberty, Security
Medium A-Advocacy
Structural
+0.15
Context Modifier
ND
SETL
-0.09

Page structure emphasizes responsibility and safety as core organizational values; responsibility section prominently featured in navigation hierarchy

+0.10
Preamble Preamble
Medium A-Mission
Structural
+0.10
Context Modifier
ND
SETL
+0.14

Page structure includes 'Responsibility' navigation with emphasis on 'AI safety through proactive security.' Suggests organizational commitment to human-centered design

+0.10
Article 1 Freedom, Equality, Brotherhood
Medium A-Mission
Structural
+0.10
Context Modifier
ND
SETL
+0.14

Navigation presents research areas and models with equal structural prominence; no hierarchical distinction between groups

+0.10
Article 28 Social & International Order
Medium A-Mission
Structural
+0.10
Context Modifier
ND
SETL
+0.09

Navigation emphasizes responsibility, safety, and research transparency as organizational values. Structure suggests commitment to accountability and external-facing oversight

-0.25
Article 8 Right to Remedy
Medium P-Tracking
Structural
-0.25
Context Modifier
ND
SETL
+0.11

Page implements Google Analytics, Google Tag Manager, and DoubleClick ad tracking. Multiple third-party tracking domains whitelisted in security policy. No visible privacy policy link or cookie consent banner on product page

-0.25
Article 12 Privacy
Medium P-Tracking
Structural
-0.25
Context Modifier
ND
SETL
+0.11

Tracking infrastructure (Google Analytics, GTM, ad networks) enables collection of behavioral data without visible privacy controls or consent on product page

ND
Article 2 Non-Discrimination

No structural signals observable

ND
Article 4 No Slavery

No structural signals observable

ND
Article 5 No Torture

No structural signals observable

ND
Article 6 Legal Personhood

No structural signals observable

ND
Article 7 Equality Before Law

No structural signals observable

ND
Article 9 No Arbitrary Detention

No structural signals observable

ND
Article 10 Fair Hearing

No structural signals observable

ND
Article 11 Presumption of Innocence

No structural signals observable

ND
Article 13 Freedom of Movement

No structural signals observable

ND
Article 14 Asylum

No structural signals observable

ND
Article 15 Nationality

No structural signals observable

ND
Article 16 Marriage & Family

No structural signals observable

ND
Article 17 Property

No structural signals observable

ND
Article 18 Freedom of Thought

No structural signals observable

ND
Article 20 Assembly & Association

No structural signals observable

ND
Article 21 Political Participation

No structural signals observable

ND
Article 22 Social Security

No structural signals observable

ND
Article 23 Work & Equal Pay

Navigation includes 'Careers' link with text 'We're looking for people who want to make a real, positive impact on the world,' suggesting values alignment but insufficient detail for scoring

ND
Article 24 Rest & Leisure

No structural signals observable

ND
Article 25 Standard of Living

No structural signals observable

ND
Article 26 Education

Navigation includes 'Education' link in 'Learn more' section suggesting institutional commitment, but insufficient detail for substantive scoring

ND
Article 27 Cultural Participation

No structural signals observable

ND
Article 29 Duties to Community

No structural signals observable

ND
Article 30 No Destruction of Rights

Navigation references to responsibility and research suggest attention to misuse prevention, but insufficient detail for substantive scoring

Supplementary Signals
Epistemic Quality
0.42
Propaganda Flags
0 techniques detected
Solution Orientation
No data
Emotional Tone
No data
Stakeholder Voice
No data
Temporal Framing
No data
Geographic Scope
No data
Complexity
No data
Transparency
No data
Event Timeline 20 events
2026-02-26 20:01 dlq Dead-lettered after 1 attempts: SynthID - -
2026-02-26 20:01 eval_failure Evaluation failed: Error: Unknown model in registry: llama-4-scout-wai - -
2026-02-26 20:01 eval_failure Evaluation failed: Error: Unknown model in registry: llama-4-scout-wai - -
2026-02-26 20:01 dlq Dead-lettered after 1 attempts: SynthID - -
2026-02-26 19:59 dlq Dead-lettered after 1 attempts: SynthID - -
2026-02-26 19:59 eval_failure Evaluation failed: Error: Unknown model in registry: llama-4-scout-wai - -
2026-02-26 19:59 eval_failure Evaluation failed: Error: Unknown model in registry: llama-4-scout-wai - -
2026-02-26 19:59 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 19:58 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 19:57 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 19:56 dlq Dead-lettered after 1 attempts: SynthID - -
2026-02-26 19:54 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 19:52 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 19:51 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 18:40 dlq Dead-lettered after 1 attempts: SynthID - -
2026-02-26 18:40 dlq Dead-lettered after 1 attempts: SynthID - -
2026-02-26 18:40 dlq Dead-lettered after 1 attempts: SynthID - -
2026-02-26 18:39 dlq Dead-lettered after 1 attempts: SynthID - -
2026-02-26 18:39 dlq Dead-lettered after 1 attempts: SynthID - -
2026-02-26 18:39 dlq Dead-lettered after 1 attempts: SynthID - -
About HRCB | By Right | HN Guidelines | HN FAQ | Source | UDHR | RSS
build d633cd0+ahgg · deployed 2026-02-26 22:27 UTC · evaluated 2026-02-26 22:10:52 UTC