710 points by BoppreH 1343 days ago | 346 comments on HN
Mild positive · Editorial · v3.7 · 2026-02-28 13:52:02
Summary: Privacy & Data Protection Advocates
This Ask HN self-post advocates for privacy protections and legal remedies against large language models' inadvertent disclosure of personally identifying information. The author frames LLM-based PII leakage as a systemic privacy violation without adequate recourse, contrasting it with GDPR-regulated platforms. The post champions the right to pseudonymity and questions why AI systems are exempt from privacy standards that govern other data processors.
I didn’t anticipate the use case of GPT being used by debt collection agencies to tirelessly track down targets.
It will be a new type of debtors' prison: leak enough personally identifying facets to the internet and they will string together a mosaic of the target, such that the AI sends them calls, SMS, Tinder DMs, etc. until they pay and are released from the digital targeting system.
I'm afraid that we are going to see these kinds of issues proliferate rapidly. It's a consequence of the usage of machine learning with extensive amounts of data coming from "data lakes" and similar non-curated sources.
> I try to hide my real name whenever possible, out of an
> abundance of caution. You can still find it if you search
> carefully, but in today's hostile internet I see this kind
> of soft pseudonymity as my digital personal space, and expect
> to have it respected.
Without judging whether the goal is good or not, I will gently point out that your current approach doesn't seem to be effective. A Google search for "BoppreH" turned up several results on the first page with what appears to be your full name, along with other results linking to various emails that have been associated with that name. Results include Github commits, mailing list archives, and third-party code that cited your Github account as "work by $NAME".
As a purely practical matter -- again, not going into whether this is how things should be, merely how they do be -- it is futile to want the internet as a whole to have a concept of privacy, or to respect the concept of a "digital personal space". If your phone number or other PII has ever been associated with your identity, that association will be in place indefinitely and is probably available on multiple data broker sites.
The best way to be anonymous on the internet is to be anonymous, which means posting without any name or identifier at all. If that isn't practical, then using a non-meaningful pseudonym and not posting anything personally identifiable is recommended.
I’m playing with it. After giving it my name, it correctly stated that I moved to Poland in Summer ’08, but then described how I became some kind of techno musician. I ran it again and it said wildly different stuff.
I have to say playing with GPT3 has been a mind blowing experience this week and you should all try it.
The most striking point was discovering that if I give it texts from my own chats, or copy paste in RFPs, and ask it to write lines for me, it’s better at sounding like a normal person than I am.
Sadly, I think the only way to protect against this is with another AI whose job it is to recognize what data is appropriate to reveal and what is private - basically what humans do. But, even then it will probably still be susceptible to tricks. Of course the ideal thing is just to not include it in the training data but I think we know how much effort that would take when the training data is basically the entire internet. I wonder if as AI systems become more efficient and they learn to "forget" information which isn't important and generalize more, that this will become less of an issue.
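A crude, non-AI baseline for such a guard can be sketched with pattern matching alone. This is a hypothetical illustration, not anything OpenAI does; `contains_pii` and its two regexes are my own, and a real system would need NER models, context awareness, and far broader coverage:

```python
import re

# Hypothetical, minimal PII patterns. Real guards need much more than
# two regexes; this only catches the most obvious identifiers.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\+?\d[\d\s().-]{7,}\d"),    # phone-number-like digit runs
]

def contains_pii(text: str) -> bool:
    """Return True if any known PII pattern appears in the text."""
    return any(p.search(text) for p in PII_PATTERNS)

print(contains_pii("Contact me at jane.doe@example.com"))  # True
print(contains_pii("The weather is nice today"))           # False
```

The gap between this and "what humans do" when judging what is appropriate to reveal is exactly the hard part the comment describes.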
If you want to stay anonymous online, don't try to hide, and don't go for this magical, extremist, non-existent "full anonymity". Spray out false information at random. Overload the machine. Give nothing real; then, when you do want to be real, it's impossible to tell.
Right, this is why opsec is something that you must always be doing.
Anything you say can be preserved forever.
Better to use short-lived throwaway identities, and leave yourself the power of combining them later, than to start with one long-lived identity and find yourself unable to split it up.
It's inconvenient in real life that I'm expected to use my legal identity for everything. If I go to group therapy for an embarrassing personal problem, someone there can look me up because everyone is using real names. I don't like it.
There is a legitimate question here. A lot of comments are trashing this post because his/her name is already all over the internet. But European law has the 'right to be forgotten': you can write to Google and have your personal information removed, should you so wish. How might we address this with a GPT-3-like model?
I am sorry to see so many comments showing a lack of empathy, basically saying "what did you expect? Do better!". I think you are raising real concerns: these language models will get more and more sophisticated and will basically turn into all-knowing oracles, not just about who you are but about what they think would be effective in manipulating you.
Interesting how everyone says "But I can google you" instead of thinking about the issue.
Companies are building and selling GPT-3-scale models with billions of parameters, and one of those "parameters" seems to be OP's username and his "strange" two-word last name.
If models grow bigger, they will potentially contain personal information about every one of us.
If you can get yourself removed from search indices, shouldn’t there be a way for AI models, too?
Another thought: do we need new licenses (GPL, MIT, etc.) which disallow the use for (for-profit) AI training?
The comments do not seem to be addressing something very important:
> I don't care much about my name being public, but I don't know what else it might have memorized (political affiliations? Sexual preferences? Posts from 13-year old me?).
Google fuck-ups are much, much more impactful than you'd expect, because people have come to trust the information Google provides so automatically. This kind of example is often invoked as comedy, but I see people do it regularly.
So a bigger problem isn't what GPT-3 can memorize, but what associations it may decide to toss out there that people will treat as true facts.
Now think about the amount of work it takes to find out about problems. It's wild that you have to Google your own name every once in a while to see what's being turned up, to make sure you're not being misrepresented, but that's not too much work. GPT-3 output, on the other hand, is elicited very contextually. It's not hard to imagine that <There is a Hristo Georgiev who sold Centroida and moved to Zurich> and <There is a Hristo Georgiev who murdered five women> pop up combined as <Hristo Georgiev, who sold Centroida and moved to Zurich, had murdered five women.> only under certain circumstances that you can't hope to be able to exhaustively discover.
From a personal angle: My birth name is also the pen name of an erotic fiction author. Hazy associations popping up in generated text could go quite poorly for me.
What I find missing in the comments is any examination of the following sequence of hypothetical events:
1) Adversarial input conditioning is used to associate an artifact with other artifacts, or with a behavior.
2) Oblivious victim users of the AI are manipulated into a specific behavior which achieves the adversary's objective.
Imagine a code bot wrongfully attributing you with ownership of pwned code, or misstating the license terms.
Imagine you ask a bot to fill in something like "user@example.com" and instead of filling in (literal) user@example.com it fills in real addresses. Or imagine it's a network diagnostic tool... ooh that's exciting to me.
Past examples of successful campaigns to replace default search results via creative SEO are offered as precedent.
Just flew back from Europe. Still traveling actually.
It used to be that when you hit border control you present your passport.
They don’t ask for that anymore: border control waved a webcam at my face, called out my name, told me I could go through. Never once looked at my passport.
That line seems to come from their Privacy Policy[1]. From my reading it seems to cover the main website and application process for teams requesting access and/or funding. I didn't see anything about the language models themselves.
I'm also not a California resident, but I am under GDPR, which I understand is similarly strong. I'll try emailing them and see where it goes.
Right? This whole thread feels like a joke when the author just removed their full name from their public, open source code 3 hours ago (and only from one of their repos, their name is fully visible in all the other LICENSE.txt files)
> A Google search for "BoppreH" turned up several results on the first page
Not for me. It took until page 3 for just my first name to appear. If somebody is looking at past Github commits, that's already a high enough barrier for me.
I only partially agree with your conclusion. Asking people to maintain total anonymity always, with any slips punishable by permanent publication of that PII, might be the current status quo, but is not where we as society want to head.
I gave up anonymity. I just learned to lean into taking control of my ID. Some time ago, I realized that there's no way for me to participate online, without things being attributed to me.
I learned this, by setting up a Disqus ID. I wanted to comment on a blog post, and started to set up an account.
After I started the process, it came back, with a list of random posts, from around the Internet (and some, very old), and said "Are these yours? If so, would you like to associate them with your account?"
I freaked. Many of them were outright troll comments (I was not always the haloed saint that you see before you) that I had sworn were done anonymously. They came from many different places (including DejaNews). I have no idea how Disqus found them.
Every single one of them was mine. Many were ones that I had sworn were dead and buried in a deep grave in the mountains.
Needless to say, I do not have a Disqus ID.
Being non-anonymous means that I need to behave myself, online. I come across as a bit of a stuffy bore, but I suspect my IRL persona is that way, as well.
I'm responsible for compliance for a couple of apps. My parent org has a third party verify that all requests come from California residents. I have no clue what the verification involves, but non-California requests never make it through to my apps.
I agree with the following sentences very much. However, I believe the value a person gets from the internet may be directly correlated with the amount of PII they are willing to share, which to me makes this a personal decision, if also a question of morality.
The sentences that stuck out to me are: “If your phone number or other PII has ever been associated with your identity, that association will be in place indefinitely and is probably available on multiple data broker sites.
The best way to be anonymous on the internet is to be anonymous, which means posting without any name or identifier at all. If that isn't practical, then using a non-meaningful pseudonym and not posting anything personally identifiable is recommended.”
GDPR is rather vague here, and perhaps that is intentional.
They could:
1. Set up a content filter that removes OP's name from the output. OpenAI would still need to keep a record of the name, exposing it to leaks.
2. Remove the name from the dataset and retrain the model, which is obviously infeasible with each GDPR request.
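For a sense of what option 1 amounts to in practice, here is a minimal sketch under my own assumptions (`redact` and the blocklist are hypothetical, not an OpenAI feature): a lookup of protected strings applied to every completion before it leaves the API, which is exactly why the provider must keep the name on file.

```python
import re

# Hypothetical blocklist of names subject to removal requests.
# Note the irony: honoring the request requires storing the name.
REMOVAL_REQUESTS = {"Jane Q. Example"}

def redact(completion: str) -> str:
    """Replace blocklisted names in model output with a placeholder."""
    for name in REMOVAL_REQUESTS:
        completion = re.sub(re.escape(name), "[redacted]", completion,
                            flags=re.IGNORECASE)
    return completion

print(redact("The repo was written by Jane Q. Example in 2019."))
# -> The repo was written by [redacted] in 2019.
```

Such a filter also fails on paraphrases and misspellings, which is one reason it is at best a partial answer to a GDPR request.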
I expect there are other instances where it is impractical or impossible to completely forget someone's data upon a request. Does Google send people spelunking into cold storage archives and actually destroy tapes (while migrating the data that is not supposed to be erased) every time they receive a request?
I feel like if OP had actually made an effort to hide this information from search engines, and GPT-3 remained the last place it was available, this point would be a lot more compelling. Right now it's an "everybody has my name and that's fine, but that includes GPT-3, and that makes GPT-3 bad".
I would expect that it would take considerable effort to get this information removed from Google (you would have to write to them with a request under GDPR or similar and have them add a content filter) and I don't see why the same effort wouldn't allow you to get removed from GPT-3 (which is only accessible via a web API, so a similar filter could be added).
I don't think this is a reasonable fear. It's reasonable to be on guard for some sensitive memorization, but it's not reasonable to fear that a language model will be able to reliably produce information on any given individual. For every person with enough of an online presence to have actually been memorized by GPT-3 or its successors, there are many more that GPT-3 will just produce good-looking nonsense for. It's not possible to distinguish between the two, so creepy surveillance capitalist firms will do better by developing their own specialized models (as they're already doing).
You have no expectation of privacy in public. The Supreme Court ruled that anything a person knowingly exposes to the public, regardless of location, is not protected by the Fourth Amendment.
Same idea works for information. If you expose private information publicly online, it's unreasonable to expect it to remain private.
By creating this post he ensured even less privacy. He attracted even more attention, guaranteeing his public "secret" is widely known.
The FTC has a method for dealing with this: they have in the past year or two ordered companies with ML models built from the personal information of minors to completely delete their models.
There are two things you can do in cases like this.
The first is asking a website owner to delete data they collected on you. That doesn't really apply here. The places this person's name is published are his own website that has this username as its url, his own Github repos, and published papers of his that were also on his website. No GDPR request is necessary to remove his name from these places because he already owns that data. As seen, he has already started to delete it himself.
The second is asking search engines to delist a result. As far as I understand, this usually has to involve information that is otherwise meant to be scrubbed from public record, like a newspaper article about a conviction that was eventually sealed. You can't ask Google to not index a scientific journal you published to or your public Github repos.
There are, of course, limits to this thanks to public interest exceptions. I don't believe Prince Andrew can ask Google to de-index anything associating him with Jeffrey Epstein. The public has a right to know, too.
In this guy's case, he really seems to be straddling a line. He contributed to open source projects under his real name linking to a Github repo with the same username he seems to reuse everywhere, including here, and also has a website where the url is that username, and it contained his CV with his real name on it along with a publication history with every publication using his real name. Is it reasonable to do those things and then ask Google and OpenAI not to associate the username with your real name?
At what point are you some regular Joe with a real grievance and at what point are you Ian Murdock complaining that GPT knows you're the Ian associated with debian?
Is it really that different than a search engine? Take away the AI specific language and you have two products that when given his username return results with his real name.
> The best way to be anonymous on the internet is to be anonymous, which means posting without any name or identifier at all. If that isn't practical, then using a non-meaningful pseudonym and not posting anything personally identifiable is recommended.
A third approach is using a word that means something and thus is not unique at all.
Unique strings for usernames means lots of accurate hits. If you google mine, there will be lots of hits but none are me.
I think "had" is at the core of the problem here. How does one become »"had" my name in GPT-3«?
And how is one even supposed to discover that your data is being processed and regurgitated by this company on another continent?
I find this question interesting not so much from a "what are the current international laws/treaties" perspective, but more from a morality "how do we want this to work, ideally?" perspective.
> Another thought: do we need new licenses (GPL, MIT, etc.) which disallow the use for (for-profit) AI training?
I don't think that we need new licenses, but probably open source projects need a better way to enforce them.
E.g. Copilot currently just ignores licensing issues, although I can imagine a solution with a few different models that return code for different purposes. (One model returns everything, and its code can safely be used only for learning or hobby projects; another returns GPL code; a third returns code compatible with commercial or permissive open source projects.)
Or the model could also emit the licence(s) of the code it drew from, though I'm not sure that is technically possible.
The information is embedded in the weights of various layers in the network. Trying to remove that information by editing weights would be like trying to alter someone’s memory by tinkering with synapses.
The only way to be completely sure of removing information would be to re-train the model without that data.
Editorial Channel
What the content says
+0.80 · Article 12: Privacy · High Advocacy Practice (Editorial +0.80, SETL +1.00)
Core article. Post strongly advocates for privacy protection against LLM-based PII leakage. 'I try to hide my real name whenever possible...I see this kind of soft pseudonymity as my digital personal space, and expect to have it respected.' Privacy is framed as fundamental right.
FW Ratio: 57%
Observable Facts
Post opens with privacy principle: 'I try to hide my real name whenever possible, out of an abundance of caution...soft pseudonymity as my digital personal space.'
Author describes GPT-3 leak as violation: 'Imagine my surprise when I see it spitting out my (globally unique, unusual) full name!'
Post cites paper on LLMs leaking PII and Google blog on PII removal difficulty.
Author states concern: 'I don't know what else it might have memorized.'
Inferences
Author views soft pseudonymity as a privacy right equivalent to physical privacy expectations.
LLM reproduction of training data PII is presented as a privacy violation because individuals did not consent to AI incorporation.
Structural failure (no remedy mechanism) compounds the privacy violation.
+0.50 · Preamble · Medium Advocacy Framing (Editorial +0.50, SETL +0.59)
Post advocates for equal protection of fundamental privacy rights in the AI era. Invokes dignity and universal standards (GDPR) as ethical foundation for PII protection.
FW Ratio: 50%
Observable Facts
Post references GDPR as modern standard: 'In the age of GDPR this feels like enormous regression in privacy.'
Author frames privacy expectation as reasonable: 'I see this kind of soft pseudonymity as my digital personal space, and expect to have it respected.'
Inferences
Author positions privacy as a universal fundamental right deserving equal protection across all systems.
The GDPR comparison suggests author believes AI systems should be held to the same standard as regulated data processors.
+0.50 · Article 30: No Destruction of Rights · High Advocacy (Editorial +0.50, SETL +0.63)
Post defends against erosion of privacy rights by questioning normalization of LLM PII exposure without recourse. 'Are we supposed to accept this?'
FW Ratio: 67%
Observable Facts
Post asks: 'Are we supposed to accept that large language models may reveal private information, with no recourse?'
Author frames current state as regression: 'In the age of GDPR this feels like enormous regression in privacy.'
Inferences
Author opposes normalization of privacy erosion and advocates for maintaining/restoring privacy protections.
+0.40 · Article 8: Right to Remedy · High Advocacy Practice (Editorial +0.40, SETL +0.88)
Central to post. Author directly advocates for effective remedy mechanism: 'Are we supposed to accept that large language models may reveal private information, with no recourse?'
FW Ratio: 60%
Observable Facts
Post explicitly asks: 'Are we supposed to accept that large language models may reveal private information, with no recourse?'
Author notes disparity with regulated platforms: 'GPT-3 seems to have no such support' for removal requests.
Post cites research sources implying lack of available remedies in academic literature.
Inferences
Author advocates for an effective remedy mechanism (comparable to GDPR Right to Be Forgotten) to apply to LLM training data.
The absence of remedies is framed as a critical systemic failure in the right to effective recourse.
+0.30 · Article 1: Freedom, Equality, Brotherhood · Low Framing (Editorial +0.30, SETL +0.30)
Implicit framing that equal rights and dignity should extend to protection from privacy violation via AI. Post contrasts own concern with broader systemic issue.
FW Ratio: 50%
Observable Facts
Post states 'I'm more worried about the consequences of language models in general than my own case' suggesting universal concern.
Inferences
Author frames privacy protection as a universal right that should apply to all individuals equally.
+0.30 · Article 7: Equality Before Law · Medium Advocacy Framing (Editorial +0.30, SETL +0.35)
Post advocates for equal legal protection by invoking GDPR as standard that should govern AI systems. Questions why LLMs are exempt from privacy law.
FW Ratio: 50%
Observable Facts
Post compares LLM lack of remedy to Google/Facebook removals: 'If I had found my personal information on Google search results, or Facebook, I could ask the information to be removed, but GPT-3 seems to have no such support.'
Author frames disparity as legal regression: 'In the age of GDPR this feels like enormous regression.'
Inferences
Author argues that AI systems should be subject to the same privacy law as regulated platforms.
The regulatory gap is presented as a legal inequality that disadvantages privacy rights holders.
+0.30 · Article 28: Social & International Order · Medium Framing (Editorial +0.30, SETL +0.39)
Post appeals to international/social order by invoking GDPR as a global privacy standard. Frames AI privacy gap as violation of universal norms.
FW Ratio: 50%
Observable Facts
Post references GDPR as modern standard: 'In the age of GDPR this feels like enormous regression.'
Inferences
Author advocates for a unified global standard that extends GDPR-equivalent privacy protections to AI systems.
+0.20 · Article 6: Legal Personhood · Low Framing (Editorial +0.20, SETL +0.20)
Post questions what personal identity attributes the model has retained, implying concern over being recognized as a unique individual without control.
FW Ratio: 50%
Observable Facts
Author asks: 'I don't know what else it might have memorized (political affiliations? Sexual preferences? Posts from 13-year old me?)'
Inferences
Author views the multidimensional aspects of personal identity as sensitive and deserving of individual control.
+0.20 · Article 19: Freedom of Expression · Medium Advocacy (Editorial +0.20, SETL 0.00)
Post exercises freedom of expression by asking a public question and advocating for policy change regarding AI privacy.
FW Ratio: 67%
Observable Facts
Post is a public question on HN exercising freedom of speech.
Author invites dialogue: 'Are we supposed to accept...' frames query as open call for input.
Inferences
Author uses freedom of expression to advocate for privacy policy changes.
+0.20 · Article 21: Political Participation · Low Advocacy (Editorial +0.20, SETL +0.14)
Post implicitly advocates for collective problem-solving by asking HN community for input. Frames privacy as systemic concern requiring shared response.
FW Ratio: 67%
Observable Facts
Author states: 'I'm more worried about the consequences of language models in general than my own case.'
Post solicits community input: 'What's the current status...?'
Inferences
Author advocates for collective action or policy response, not just individual remedy.
+0.20 · Article 29: Duties to Community · Low Framing (Editorial +0.20, SETL +0.24)
Post frames privacy protection as a collective responsibility. Author acknowledges personal infosec limits but advocates for systemic duty.
FW Ratio: 67%
Observable Facts
Edit thanks community for respecting anonymity: 'a small thank you for everybody commenting so far for not directly linking to specific results or actually writing my name.'
Post acknowledges systemic concern: 'I'm more worried about the consequences of language models in general.'
Inferences
Author recognizes community norms around privacy respect and appeals to collective responsibility.
+0.10 · Article 17: Property · Low Framing (Editorial +0.10, SETL +0.35)
Tangentially engaged. Personal information (identity, attributes) could be viewed as a form of personal property or dignitary interest. Weak signal.
FW Ratio: 50%
Observable Facts
Author discusses 'my personal information' with concern over what is 'memorized,' implying possessory interest.
Inferences
Author's framing suggests viewing personal attributes as belonging to the individual.
+0.10 · Article 22: Social Security · Low Framing (Editorial +0.10, SETL +0.24)
Tangentially engaged. Privacy protection can be viewed as a form of social security/welfare. Post identifies systemic gap but no solution is offered.
FW Ratio: 50%
Observable Facts
Post identifies systemic welfare gap: no remedy exists for individuals whose PII is leaked by LLMs.
Inferences
Author implies privacy protection should be a collectively guaranteed form of social welfare.
+0.10 · Article 26: Education · Low Advocacy (Editorial +0.10, SETL +0.10)
Post contributes to collective education on LLM privacy risks by citing research and raising awareness within the tech community.
FW Ratio: 50%
Observable Facts
Post includes citations to arxiv paper, Google blog, Register article—sharing knowledge with community.
Inferences
Educational value of post raises awareness about privacy risks in AI systems.
ND · Not engaged: Article 2 (Non-Discrimination; post does not address discrimination or status-based exclusion), Article 3 (Life, Liberty, Security), Article 4 (No Slavery), Article 5 (No Torture), Article 9 (No Arbitrary Detention), Article 10 (Fair Hearing), Article 11 (Presumption of Innocence), Article 13 (Freedom of Movement), Article 14 (Asylum), Article 15 (Nationality), Article 16 (Marriage & Family), Article 18 (Freedom of Thought), Article 20 (Assembly & Association; post is discussion, not assembly), Article 23 (Work & Equal Pay), Article 24 (Rest & Leisure), Article 25 (Standard of Living), Article 27 (Cultural Participation).
Structural Channel
What the site does
+0.20 · Article 19: Freedom of Expression · Medium Advocacy (Structural +0.20, Context Modifier ND, SETL 0.00)
HN structure enables free expression through pseudonymous posting and open community discussion without prior censorship.
+0.10 · Article 21: Political Participation · Low Advocacy (Structural +0.10, Context Modifier ND, SETL +0.14)
HN's discussion forum enables participatory input but has no formal democratic mechanism for policy change.
0.00 · Article 1: Freedom, Equality, Brotherhood · Low Framing (Structural 0.00, Context Modifier ND, SETL +0.30)
HN treats all users equally under moderation; no preferential privacy status.
0.00 · Article 6: Legal Personhood · Low Framing (Structural 0.00, Context Modifier ND, SETL +0.20)
HN allows pseudonymous recognition as user without requiring identity disclosure.
0.00 · Article 26: Education · Low Advocacy (Structural 0.00, Context Modifier ND, SETL +0.10)
HN is a community knowledge-sharing forum; no formal educational structure.
-0.10 · Article 7: Equality Before Law · Medium Advocacy Framing (Structural -0.10, Context Modifier ND, SETL +0.35)
HN operates under U.S. law without GDPR applicability; OpenAI's governance is fragmented across jurisdictions.
-0.10 · Article 29: Duties to Community · Low Framing (Structural -0.10, Context Modifier ND, SETL +0.24)
OpenAI and other AI companies have not accepted duty to protect individual privacy in training data.
-0.20 · Preamble · Medium Advocacy Framing (Structural -0.20, Context Modifier ND, SETL +0.59)
HN enables discussion but provides no remedy mechanisms for disclosed information. LLM systems (subject of post) have no built-in privacy protections or recourse options.
-0.20 · Article 22: Social Security · Low Framing (Structural -0.20, Context Modifier ND, SETL +0.24)
No entity (HN, OpenAI, regulators) has provided social security mechanism for privacy violation victims.
-0.20 · Article 28: Social & International Order · Medium Framing (Structural -0.20, Context Modifier ND, SETL +0.39)
Global internet and AI systems operate across jurisdictions; no unified international privacy order for AI training data.
-0.30 · Article 17: Property · Low Framing (Structural -0.30, Context Modifier ND, SETL +0.35)
LLMs assert no ownership accountability over training data or individual persons' information incorporated without consent.
-0.30 · Article 30: No Destruction of Rights · High Advocacy (Structural -0.30, Context Modifier ND, SETL +0.63)
LLMs and internet infrastructure de facto erode privacy. Structural systems do not prevent this erosion.
-0.60 · Article 12: Privacy · High Advocacy Practice (Structural -0.60, Context Modifier ND, SETL +1.00)
LLM architecture has no built-in privacy protections for training data and will reproduce PII from corpus. OpenAI offers no privacy-protective mechanism. HN itself stores posts permanently and publicly, supporting information exposure.
-0.70 · Article 8: Right to Remedy · High Advocacy Practice (Structural -0.70, Context Modifier ND, SETL +0.88)
OpenAI provides no published mechanism for requesting PII removal from training data. HN cannot remedy OpenAI's practice. Structural gap is definitive.
ND · Not engaged: Articles 2 (Non-Discrimination), 3 (Life, Liberty, Security), 4 (No Slavery), 5 (No Torture), 9 (No Arbitrary Detention), 10 (Fair Hearing), 11 (Presumption of Innocence), 13 (Freedom of Movement), 14 (Asylum), 15 (Nationality), 16 (Marriage & Family), 18 (Freedom of Thought), 20 (Assembly & Association), 23 (Work & Equal Pay), 24 (Rest & Leisure), 25 (Standard of Living), 27 (Cultural Participation).
Supplementary Signals
How this content communicates, beyond directional lean.
build 6ae9671+7klc · deployed 2026-02-28 16:24 UTC · evaluated 2026-02-28 16:29:11 UTC