+0.51 We Will Not Be Divided (notdivided.org S:+0.53 )
1301 points by BloondAndDoom 6 hours ago | 474 comments on HN | Moderate positive Mixed · v3.7 · 2026-02-28 07:42:49
Summary Free Expression & Assembly Advocates
This site hosts an employee-organized open letter opposing a Department of War (DoW) AI policy through a verified-signature platform emphasizing collective expression and democratic participation. The platform strongly advocates for Articles 19-20 (free expression, assembly, association) while implementing robust privacy protections under Article 12, including anonymous signing and 24-hour data deletion. Overall, the site champions employee voice and rights protection in AI governance decisions.
Article Heatmap
Preamble: +0.25 — Preamble
Article 1: No Data — Freedom, Equality, Brotherhood
Article 2: No Data — Non-Discrimination
Article 3: No Data — Life, Liberty, Security
Article 4: No Data — No Slavery
Article 5: No Data — No Torture
Article 6: No Data — Legal Personhood
Article 7: No Data — Equality Before Law
Article 8: No Data — Right to Remedy
Article 9: No Data — No Arbitrary Detention
Article 10: No Data — Fair Hearing
Article 11: No Data — Presumption of Innocence
Article 12: +0.68 — Privacy
Article 13: No Data — Freedom of Movement
Article 14: No Data — Asylum
Article 15: No Data — Nationality
Article 16: No Data — Marriage & Family
Article 17: No Data — Property
Article 18: +0.43 — Freedom of Thought
Article 19: +0.75 — Freedom of Expression
Article 20: +0.70 — Assembly & Association
Article 21: +0.32 — Political Participation
Article 22: No Data — Social Security
Article 23: No Data — Work & Equal Pay
Article 24: No Data — Rest & Leisure
Article 25: No Data — Standard of Living
Article 26: No Data — Education
Article 27: No Data — Cultural Participation
Article 28: No Data — Social & International Order
Article 29: No Data — Duties to Community
Article 30: No Data — No Destruction of Rights
Negative Neutral Positive No Data
Aggregates
Editorial Mean +0.51 Structural Mean +0.53
Weighted Mean +0.55 Unweighted Mean +0.52
Max +0.75 Article 19 Min +0.25 Preamble
Signal 6 No Data 25
Confidence 19% Volatility 0.20 (Medium)
Negative 0 Channels E: 0.6 S: 0.4
SETL -0.04 Structural-dominant
FW Ratio 64% 30 facts · 17 inferences
Evidence: High: 3 Medium: 6 Low: 0 No Data: 22
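The aggregates above combine two channels per article. As an illustrative sketch only (the dashboard's actual aggregation code is not shown), assuming the listed channel weights (E: 0.6, S: 0.4) and the per-article editorial/structural scores reported later in this page, a blended score and its mean might be computed as:

```python
# Illustrative reconstruction of a channel-weighted article score.
# The weights (E: 0.6, S: 0.4) are taken from the report's "Channels"
# line; the exact aggregation the dashboard uses is an assumption.

E_WEIGHT, S_WEIGHT = 0.6, 0.4

def blend(editorial: float, structural: float) -> float:
    """Blend one article's editorial and structural channel scores."""
    return E_WEIGHT * editorial + S_WEIGHT * structural

# (editorial, structural) pairs for the six scored items in this report
scored = {
    "Preamble":   (0.20, 0.30),
    "Article 12": (0.60, 0.75),
    "Article 18": (0.45, 0.40),
    "Article 19": (0.75, 0.75),
    "Article 20": (0.70, 0.70),
    "Article 21": (0.35, 0.30),
}

blends = {name: blend(e, s) for name, (e, s) in scored.items()}
mean_score = sum(blends.values()) / len(blends)
print({name: round(v, 2) for name, v in blends.items()})
print(round(mean_score, 2))
```

This simple blend lands near the reported Unweighted Mean (+0.52); the Weighted Mean (+0.55) presumably applies additional per-article weights not visible on the page.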
Theme Radar
Foundation: 0.25 (1 article) · Security: 0.00 (0 articles) · Legal: 0.00 (0 articles) · Privacy & Movement: 0.68 (1 article) · Personal: 0.43 (1 article) · Expression: 0.59 (3 articles) · Economic & Social: 0.00 (0 articles) · Cultural: 0.00 (0 articles) · Order & Duties: 0.00 (0 articles)
HN Discussion 20 top-level · 30 replies
codepoet80 2026-02-28 01:12 UTC link
Nicely done. Hold this line — there’s got to be one somewhere.
davidw 2026-02-28 01:18 UTC link
"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
rabbitlord 2026-02-28 01:31 UTC link
I am not a fan of Anthropic guys, but this time I stand with it. We all should.
txrx0000 2026-02-28 01:35 UTC link
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.

It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.

Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.

Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.

thimabi 2026-02-28 01:44 UTC link
The problem with forcing public policy on companies is that companies are ultimately made up of individuals, and surely you can’t force public policy down people’s throats.

I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent trying to make them bend over to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.

_aavaa_ 2026-02-28 02:02 UTC link
Yes, take disparate sets of employees and like, oh idk unionize while you still have power.
Meekro 2026-02-28 02:06 UTC link
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.

Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.

What, then, is this really about?

dang 2026-02-28 02:09 UTC link
Here's the sequence (so far) in reverse order - did I miss any important threads?

Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)

I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)

President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)

Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)

Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)

The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)

Tech companies shouldn't be bullied into doing surveillance - https://news.ycombinator.com/item?id=47160226 - Feb 2026 (157 comments)

The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)

US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)

Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)

kace91 2026-02-28 02:17 UTC link
Among other consequences, if Anthropic ends up being killed it’s going to be just another nail in the coffin of trust in America.

Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.

david_shaw 2026-02-28 02:32 UTC link
I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.

I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.

It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

dataflow 2026-02-28 02:36 UTC link
Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?

Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given both employers probably monitor anything you do on your devices, even if it's loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you wouldn't ask for the email link, you might as well use the alternate verification option.

Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.

P.S. I fully realize that pointing these things out might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.

ArchieScrivener 2026-02-28 02:46 UTC link
The USA showed itself to be a Command Economy that uses 'private enterprise' as a facade of legitimacy during Covid. Without government spending, employment, and contracts, the USA would have net negative growth.

Now the DoD, who are by far the largest budgetary expense for the tax payer, wants us to believe they don't have a better AI than current industry? That is a double-edged admission: either they are exposing themselves again as economic decision makers, or admitting they spend money on routine BS with zero frontier war-fighting capabilities.

Either way, it is beyond time to reform the Military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains, given military needs in various countries (Taiwan and Thailand).

doodlebugging 2026-02-28 02:52 UTC link
The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.

All of this should remain a bridge too far, forever.

EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans as happened not long ago. A few years back, the OPM hack gave them all they needed to know about then-current and former government employees and service members and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel or to hold them hostage with threats against the things they value most so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.

Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.

culi 2026-02-28 02:57 UTC link
Before you leave a comment about how meaningless this is unless they do XYZ,

please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you as an outsider can help support these organizers

lightyrs 2026-02-28 02:59 UTC link
» Have there been any mistakes in signature verification for this letter?

» We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.

conductr 2026-02-28 04:19 UTC link
You can’t be silly enough to build a product that enables things like mass surveillance to proliferate and then try to take a stance of “please don’t use it like that”. You invented a genie and let him out of the bottle.
5o1ecist 2026-02-28 04:59 UTC link
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

This is a trap. Two, I guess, but let's take the first one:

Domestic mass surveillance. Domestic.

Remember the eyes agreements: https://www.perplexity.ai/search/are-the-eyes-agreements-abo...

Expanding:

> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.

Banning domestic mass surveillance is irrelevant.

The eyes-agreements allow them (respective participating countries) to share data with each other. Every country spies on every other country, with every country telling every other country what they have gathered.

This renders laws preventing the State from spying on its own citizens irrelevant. They serve merely as evidence of mass manipulation.

largbae 2026-02-28 05:30 UTC link
The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?
zahlman 2026-02-28 05:56 UTC link
Is there a particular reason why the actual letter content requires JavaScript to load while everything else is readable?
sourcegrift 2026-02-28 06:15 UTC link
Cute. I will also sign this, since there are only upsides of good optics and no downsides. Let me know when any of them resigns after the companies inevitably take the million-dollar contracts.
voganmother42 2026-02-28 01:34 UTC link
Tech leaders are a joke
medi8r 2026-02-28 01:39 UTC link
Open Source here is not enough as hardware ownership matters. In an open source world, you and I cannot run the 10 trillion param model, but the data center controllers can.
msuniverse2026 2026-02-28 01:39 UTC link
I'd prefer something akin to the Biological Weapons Treaty which prohibits development, production and transfer. If you think it isn't possible you have to tell me why the bioweapons convention was successful and why it wouldn't be in the case of AI.
magicalist 2026-02-28 01:45 UTC link
> This is why you can't gatekeep AI capabilities.

What is why?

You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?

piskov 2026-02-28 01:47 UTC link
> I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has

And where would they emigrate? Russia? China? UAE? :-)

bottlepalm 2026-02-28 01:53 UTC link
What use are weights without the hardware to run them? That's the gate. Local AI right now is a toy in comparison.

Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.

jefftk 2026-02-28 02:03 UTC link
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
ok_dad 2026-02-28 02:13 UTC link
Sam Altman tells staff at an all-hands that OpenAI is negotiating a deal with the Pentagon, after Trump orders the end of Anthropic contracts - https://news.ycombinator.com/item?id=47188698
layer8 2026-02-28 02:17 UTC link
My understanding is that it’s about the contract allowing Anthropic to refuse service when they deem a red line has been crossed. Hegseth and friends probably don’t want any discussions to even start, about whether a red line may be in the process of being crossed, and having to answer to that. They don’t want the legality or ethicality of any operation to be under Anthropic’s purview at all.
timr 2026-02-28 02:22 UTC link
I don't see how public policy is being "forced" on anyone here? It seems like the system is working as intended: government wants to do X; company A says "I won't allow my product to be used for X"; government refuses to do business with company A. One side thinks the government should be allowed to dictate terms to a private supplier, the other side thinks the private supplier should be allowed to dictate terms to the government. Both are half right.

You can argue that the government refusing to do any business with company A is overreach, I suppose, but I imagine that the next logical escalation in this rhetorical slapfight is going to be the government saying "we cannot guarantee that any particular use will not include some version of X, and therefore we have to prevent working with this supplier"...which I sort of see?

Just to take the metaphor to absurdity, imagine that a maker of canned tomatoes decided to declare that their product cannot be used to "support a war on terror". Regardless of your feelings on wars on terror and/or canned tomatoes, the government would be entirely rational to avoid using that supplier.

yoyohello13 2026-02-28 02:23 UTC link
It’s about punishing a company that is not complying. It’s a show of force to deter any future objections on moral grounds from companies that want to do business with the US gov.
propagandist 2026-02-28 02:34 UTC link
Yeah, it's a nice gesture, but having watched Google handle the protests in recent years and their culture inching a step closer to Amazon, I do not foresee their leadership being swayed by employee resistance. They'll either quietly sign an agreement and discreetly implement it, or they will go scorched earth on their employees again.
danny_codes 2026-02-28 02:40 UTC link
It is a rough precedent that the government can force private citizens to build weapons for them.
moogly 2026-02-28 02:50 UTC link
If they're truly principled, and these are true red lines, given no other recourse, I would be impressed if Anthropic decided to shut down the company. Won't happen, but I would be smashing that F key if they did.

The other two definitely never would in a million years.

duped 2026-02-28 02:52 UTC link
> who are by far the largest budgetary expense for the tax payer

not even top 3

skeledrew 2026-02-28 02:54 UTC link
> Grok/X

Head(s) will of course agree with the administration. And employees will likely be making themselves targets if they sign this letter. An all-anonymous showing from said company would not be a good look at all.

Speculation of course; let's see what really happens.

thimabi 2026-02-28 02:59 UTC link
> I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions

Although it would be nice to have some high-level signees there, I think we shouldn’t minimize the role of lay employees in this matter. Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.

culi 2026-02-28 03:00 UTC link
Actions like these often lead to unions. Look into the history of how the Kickstarter union came to be.

It often starts as collective action in response to a blatant disregard for the values of the workers

aguyonhackern 2026-02-28 03:00 UTC link
The US would not be net negative growth without government spending. Other components of GDP grow a lot, outside of recessions.

Sure, if you immediately stopped government spending today we'd have negative growth today, but that's not because other things aren't growing; it's because you just removed part of the base that existed last year. That would be true of pretty much any economy ever, or of anything that's growing from which you decide to remove a chunk of the base.

And yes I absolutely believe the government does not have better generative AI than Anthropic or its competitors.

mkl 2026-02-28 03:12 UTC link
Altman says OpenAI agrees with Anthropic’s red lines in Pentagon dispute - https://news.ycombinator.com/item?id=47187488 - Feb 2026 (8 comments)
skeledrew 2026-02-28 03:23 UTC link
When you put it like that, it makes me almost want to wish for Anthropic to die from this. But the blow to the field in general would be huge, and I benefit from their service as well.
csomar 2026-02-28 03:24 UTC link
> The USA showed itself to be a Command Economy that uses 'private enterprise' as a fascade of legitimacy during Covid.

This is the case for every government/nation in the world. The difference between communism and capitalism is that the Politburo in capitalism allows the natural selection of elites based on their performance in an open economy. At least that was the case until 2011.

jdadj 2026-02-28 03:25 UTC link
I don’t have any particular insights, but I’m curious to learn the antitrust implications of how the execs can/cannot coordinate.
daxfohl 2026-02-28 03:28 UTC link
Or just reincorporate in Finland or something. If the US is going to be this hostile to business, time to gtfo.
abustamam 2026-02-28 03:33 UTC link
I think it's an important call-out though. Can never be too safe in this landscape.
elAhmo 2026-02-28 03:53 UTC link
So much for the hope with leaders such as Sam and Dario
ProllyInfamous 2026-02-28 04:01 UTC link
Instead of Epsteins blackmailing disgustful human nature, it'll be rogue AIs sending selective blackmail, 24/7, to the spiteful among us (e.g. to motivate targeted killings, either by human or machine).

>All of this should remain a bridge too far, forever.

Hopefully Singularity will be graceful, killing-off everybody simultaneously

#PaperclipMaximizer #HimFirst

autoexec 2026-02-28 04:05 UTC link
The best way for the government to fight that would be to remind those who refuse to comply with their demands that the government already knows exactly where they live, where they hang out, and that any one of them can also be targeted by a three-letter agency or thrown into Guantánamo Bay. The government has been building and maintaining massive dossiers on everyone. They already have the ability to plant or fabricate whatever incriminating evidence they want. They already have the capability to jeopardize anyone, their personal assets, and their families, and all of that could be turned against them if something goes haywire or if an outside adversary gains unauthorized access. The government isn't about to dismantle or abandon their entire domestic surveillance apparatus out of fear that it could be abused, hacked, or used against their own. Those are well-known and accepted risks. AI is just one more risk they can't resist taking.
gnarlouse 2026-02-28 04:11 UTC link
Mankind is doing what it does best at scale: sprinting mindlessly into problematic scenarios because the species is fragmented and has arbitrarily established concepts of groups defined by region, race, ideology, etc.

As a species, this is just natural selection.

jalapenos 2026-02-28 05:06 UTC link
I don't think people get to those positions by having firm principles
Editorial Channel
What the content says
+0.75
Article 19 Freedom of Expression
High Advocacy Framing Practice
Editorial
+0.75
SETL
0.00

The petition is explicitly a vehicle for employee free expression about AI policy. Site invites opinions with diverse perspectives, creating space for 'holding and expressing' views about a contested policy. Core mission is facilitating expression.

+0.70
Article 20 Assembly & Association
High Advocacy Practice
Editorial
+0.70
SETL
0.00

The open letter is fundamentally a peaceful assembly of employees organizing around shared concern about AI policy. Invitation to 'sign' is invitation to peaceful collective action.

+0.60
Article 12 Privacy
High Advocacy Practice
Editorial
+0.60
SETL
-0.34

FAQ explicitly prioritizes privacy: 'your personal information (name, email) is automatically and permanently deleted from our database within 24 hours of verification.' Clear advocacy for personal privacy protection.

+0.45
Article 18 Freedom of Thought
Medium Advocacy Framing
Editorial
+0.45
SETL
+0.15

FAQ explicitly states: 'The goal of this letter is to find common ground. The signatories likely have a diverse set of views. Signing this letter doesn't mean you think it's the only thing that needs to be done, just that you agree with the bottom line.' This frames the petition as respecting diverse conscience.

+0.35
Article 21 Political Participation
Medium Advocacy Framing
Editorial
+0.35
SETL
+0.13

The letter concerns 'potential misuse of AI' policy (DoW), framing it as a government/institutional policy issue. Employee petition to influence policy aligns with political participation. However, specific policy positions are not detailed on visible page.

+0.20
Preamble Preamble
Medium Advocacy
Editorial
+0.20
SETL
-0.17

The act of organizing employees to collectively voice concerns about AI policy reflects underlying commitment to human dignity and rights protection, though not explicit in language.

ND
Article 1 Freedom, Equality, Brotherhood
Medium Practice

Letter content not directly visible on evaluated page. Observable values from form design.

ND
Article 2 Non-Discrimination
Medium Practice

Letter content not visible. Inferred from sign-up structure.

ND
Article 3 Life, Liberty, Security

Not directly addressed.

ND
Article 4 No Slavery

Not addressed.

ND
Article 5 No Torture

Not addressed.

ND
Article 6 Legal Personhood
Medium Practice

Not directly addressed.

ND
Article 7 Equality Before Law

Not addressed.

ND
Article 8 Right to Remedy

Not addressed.

ND
Article 9 No Arbitrary Detention

Not addressed.

ND
Article 10 Fair Hearing

Not addressed.

ND
Article 11 Presumption of Innocence

Not addressed.

ND
Article 13 Freedom of Movement

Not addressed.

ND
Article 14 Asylum

Not addressed.

ND
Article 15 Nationality

Not addressed.

ND
Article 16 Marriage & Family

Not addressed.

ND
Article 17 Property

Not addressed.

ND
Article 22 Social Security

Not directly addressed.

ND
Article 23 Work & Equal Pay

While signatories are employees organizing, no explicit workplace rights, wages, or union claims are made.

ND
Article 24 Rest & Leisure

Not addressed.

ND
Article 25 Standard of Living

Not addressed.

ND
Article 26 Education

Not addressed.

ND
Article 27 Cultural Participation

Not addressed.

ND
Article 28 Social & International Order

Not addressed.

ND
Article 29 Duties to Community

Not addressed.

ND
Article 30 No Destruction of Rights

Not addressed.

Structural Channel
What the site does
+0.75
Article 12 Privacy
High Advocacy Practice
Structural
+0.75
Context Modifier
ND
SETL
-0.34

Platform implements strong privacy protections: encrypted SQLite database, no analytics/tracking, automatic data deletion, anonymous signing options, no email-in-inbox option (Google Form verification), minimal data retention. Host (Fly.io) provides encrypted persistent volume storage.
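The retention policy described here (personal data automatically deleted within 24 hours of verification) could be implemented with a simple scheduled purge job. A minimal sketch, assuming a hypothetical `signatures` table with `name`, `email`, and a Unix-timestamp `verified_at` column — the site's real schema is not shown:

```python
import sqlite3
import time

RETENTION_SECONDS = 24 * 60 * 60  # 24-hour retention window

def purge_personal_data(conn, now=None):
    """Null out name/email for signatures verified more than 24h ago.

    Returns the number of rows scrubbed. The signature row itself
    (and any anonymous display fields) is kept; only the personal
    data columns are removed.
    """
    now = time.time() if now is None else now
    cutoff = now - RETENTION_SECONDS
    with conn:  # transaction: commits on success, rolls back on error
        cur = conn.execute(
            "UPDATE signatures SET name = NULL, email = NULL "
            "WHERE verified_at IS NOT NULL AND verified_at < ? "
            "AND (name IS NOT NULL OR email IS NOT NULL)",
            (cutoff,),
        )
    return cur.rowcount

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signatures (name TEXT, email TEXT, verified_at REAL)")
old = time.time() - 2 * 24 * 60 * 60   # verified two days ago
new = time.time() - 60                 # verified a minute ago
conn.execute("INSERT INTO signatures VALUES ('A. Signer', 'a@example.com', ?)", (old,))
conn.execute("INSERT INTO signatures VALUES ('B. Signer', 'b@example.com', ?)", (new,))
print(purge_personal_data(conn))  # scrubs only the long-verified row
```

In practice such a function would run from a scheduler (cron or similar) rather than inline; the key design choice is scrubbing columns rather than deleting rows, so the signature count survives the deletion of personal data.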

+0.75
Article 19 Freedom of Expression
High Advocacy Framing Practice
Structural
+0.75
Context Modifier
ND
SETL
0.00

Signature platform, anonymous options, verified-only access (preventing bot/spam but protecting legitimate speakers), and public publication of opinions all structurally enable Article 19. No censorship, moderation, or opinion-filtering observed.

+0.70
Article 20 Assembly & Association
High Advocacy Practice
Structural
+0.70
Context Modifier
ND
SETL
0.00

Signature aggregation platform is infrastructure for peaceful assembly and association. Verified employee-only restriction ensures authentic employee membership (integrity control for association), not suppression.

+0.40
Article 18 Freedom of Thought
Medium Advocacy Framing
Structural
+0.40
Context Modifier
ND
SETL
+0.15

Anonymous options allow signatories to participate without exposing conscience/beliefs to employer retaliation. Role/title disclosure is optional and asymmetric (public even if signing anonymously) allowing conscience expression without full identity exposure.

+0.30
Preamble Preamble
Medium Advocacy
Structural
+0.30
Context Modifier
ND
SETL
-0.17

Platform infrastructure prioritizes personal dignity via anonymous options, data minimization (24h deletion), and no tracking. Privacy-protective design aligns with dignity principles.

+0.30
Article 21 Political Participation
Medium Advocacy Framing
Structural
+0.30
Context Modifier
ND
SETL
+0.13

Petition is a form of political voice: employees communicating to policymakers/public about AI governance. Independent organization (no party/corporate affiliation) enables authentic political participation.

ND
Article 1 Freedom, Equality, Brotherhood
Medium Practice

Anonymous option and 'current/former' employee categories treat all participants equally regardless of employment status or hierarchy, supporting equal dignity principle.

ND
Article 2 Non-Discrimination
Medium Practice

Platform accepts signatories regardless of employment status, role, company affiliation (Google or OpenAI), or willingness to identify—no discrimination observable in access rules.

ND
Article 3 Life, Liberty, Security

Not addressed by visible site structure.

ND
Article 4 No Slavery

Not addressed.

ND
Article 5 No Torture

Not addressed.

ND
Article 6 Legal Personhood
Medium Practice

Verification system confirms identity/personhood through email, badge photo, or co-signer vouching—respects individuals as recognized persons with verifiable status.

ND
Article 7 Equality Before Law

Not addressed.

ND
Article 8 Right to Remedy

Not addressed.

ND
Article 9 No Arbitrary Detention

Not addressed.

ND
Article 10 Fair Hearing

Not addressed.

ND
Article 11 Presumption of Innocence

Not addressed.

ND
Article 13 Freedom of Movement

Not addressed.

ND
Article 14 Asylum

Not addressed.

ND
Article 15 Nationality

Not addressed.

ND
Article 16 Marriage & Family

Not addressed.

ND
Article 17 Property

Not addressed.

ND
Article 22 Social Security

Not addressed.

ND
Article 23 Work & Equal Pay

Not addressed.

ND
Article 24 Rest & Leisure

Not addressed.

ND
Article 25 Standard of Living

Not addressed.

ND
Article 26 Education

Not addressed.

ND
Article 27 Cultural Participation

Not addressed.

ND
Article 28 Social & International Order

Not addressed.

ND
Article 29 Duties to Community

Not addressed.

ND
Article 30 No Destruction of Rights

Not addressed.

Supplementary Signals
Epistemic Quality
0.47 high claims
Sources
0.3
Evidence
0.3
Uncertainty
0.6
Purpose
0.9
Propaganda Flags
2 techniques detected
causal oversimplification
The FAQ states: 'The current situation with the DoW is so clear-cut that it can bring together a very broad coalition.' This presents a complex AI policy question as binary and obvious without supporting evidence.
appeal to fear
The FAQ mentions 'potential misuse of AI against Americans' without detailing specific instances or risks, creating concern based on possibility rather than demonstrated harm.
Solution Orientation
0.72 solution oriented
Reader Agency
0.8
Emotional Tone
measured
Valence
-0.1
Arousal
0.5
Dominance
0.3
Stakeholder Voice
0.45 2 perspectives
Speaks: individuals, workers
About: corporation, government
Temporal Framing
present, immediate
Geographic Scope
national
United States
Complexity
accessible, low jargon, general audience
Transparency
0.65
✗ Author ✓ Conflicts ✓ Funding
Audit Trail 46 entries
2026-02-28 07:42 eval Evaluated by claude-haiku-4-5-20251001: +0.55 (Moderate positive)
2026-02-28 07:24 eval_success Light evaluated: Mild positive (0.10) - -
2026-02-28 07:24 rater_validation_warn Light validation warnings for model llama-3.3-70b-wai: 0W 1R - -
2026-02-28 07:24 eval Evaluated by llama-3.3-70b-wai: +0.10 (Mild positive) -0.10
2026-02-28 07:22 rater_validation_warn Light validation warnings for model llama-4-scout-wai: 0W 1R - -
2026-02-28 07:22 eval_success Light evaluated: Mild positive (0.24) - -
2026-02-28 07:22 eval Evaluated by llama-4-scout-wai: +0.24 (Mild positive) 0.00
2026-02-28 07:20 eval_success Light evaluated: Mild positive (0.20) - -
2026-02-28 07:20 eval Evaluated by llama-3.3-70b-wai: +0.20 (Mild positive) 0.00
2026-02-28 07:20 rater_validation_warn Light validation warnings for model llama-3.3-70b-wai: 0W 1R - -
2026-02-28 06:18 eval_success Light evaluated: Mild positive (0.24) - -
2026-02-28 06:18 eval Evaluated by llama-4-scout-wai: +0.24 (Mild positive) 0.00
2026-02-28 06:18 rater_validation_warn Light validation warnings for model llama-4-scout-wai: 0W 1R - -
2026-02-28 06:15 eval_success Light evaluated: Mild positive (0.20) - -
2026-02-28 06:15 rater_validation_warn Light validation warnings for model llama-3.3-70b-wai: 0W 1R - -
2026-02-28 06:15 eval Evaluated by llama-3.3-70b-wai: +0.20 (Mild positive) 0.00
2026-02-28 06:14 eval_success Light evaluated: Mild positive (0.20) - -
2026-02-28 06:14 eval Evaluated by llama-3.3-70b-wai: +0.20 (Mild positive) 0.00
2026-02-28 06:14 rater_validation_warn Light validation warnings for model llama-3.3-70b-wai: 0W 1R - -
2026-02-28 05:51 eval_success Light evaluated: Mild positive (0.20) - -
2026-02-28 05:51 eval Evaluated by llama-3.3-70b-wai: +0.20 (Mild positive) -0.30
2026-02-28 05:51 rater_validation_warn Light validation warnings for model llama-3.3-70b-wai: 0W 1R - -
2026-02-28 05:29 eval_success Light evaluated: Mild positive (0.24) - -
2026-02-28 05:29 eval Evaluated by llama-4-scout-wai: +0.24 (Mild positive) 0.00
2026-02-28 05:29 rater_validation_warn Light validation warnings for model llama-4-scout-wai: 0W 1R - -
2026-02-28 05:28 eval_success Light evaluated: Mild positive (0.24) - -
2026-02-28 05:28 rater_validation_warn Light validation warnings for model llama-4-scout-wai: 0W 1R - -
2026-02-28 05:28 eval Evaluated by llama-4-scout-wai: +0.24 (Mild positive) -0.56
2026-02-28 04:53 eval_success Evaluated: Moderate positive (0.33) - -
2026-02-28 04:53 eval Evaluated by deepseek-v3.2: +0.33 (Moderate positive) 11,927 tokens -0.11
2026-02-28 04:48 eval_success Light evaluated: Moderate positive (0.50) - -
2026-02-28 04:48 eval Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 04:44 eval Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 04:42 eval Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 04:28 eval Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 04:28 eval Evaluated by deepseek-v3.2: +0.44 (Moderate positive) 10,591 tokens +0.03
2026-02-28 04:07 eval Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 03:49 eval Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 03:27 eval Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 03:26 eval Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 03:17 eval Evaluated by deepseek-v3.2: +0.41 (Moderate positive) 11,613 tokens
2026-02-28 02:56 eval Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 02:03 eval Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 01:59 eval Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive)
2026-02-28 01:16 eval Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 01:11 eval Evaluated by llama-4-scout-wai: +0.80 (Strong positive)