This site hosts an employee-organized open letter opposing an unnamed AI policy (DoW) through a verified-signature platform emphasizing collective expression and democratic participation. The platform strongly advocates for Articles 19-20 (free expression, assembly, association) while implementing robust privacy protections under Article 12, including anonymous signing and 24-hour data deletion. Overall, the site champions employee voice and rights protection in AI governance decisions.
"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are Machiavellian out of necessity). This is humanity's best chance at survival.
The problem with forcing public policy on companies is that companies are ultimately made from individuals, and surely you can’t force public policy down people’s throats.
I’m sure nothing good can come of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent, trying to make them bend to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.
Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.
Among other consequences, if Anthropic ends up being killed it’s going to be just another nail in the coffin of trust in America.
Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.
I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.
I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.
It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?
Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given both employers probably monitor anything you do on your devices, even if it's loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you wouldn't ask for the email link, you might as well use the alternate verification option.
Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.
P.S. I fully realize that raising these concerns might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.
The USA showed itself to be a Command Economy that uses 'private enterprise' as a facade of legitimacy during Covid. Without government spending, employment, and contracts, the USA would be net negative growth.
Now the DoD, by far the largest budgetary expense for the taxpayer, wants us to believe they don't have a better AI than current industry? That is a double-edged admission; either they are exposing themselves again as economic decision makers, or admitting they spend money on routine BS with zero frontier war-fighting capabilities.
Either way, it is beyond time to reform the military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains, given military needs in various countries (Taiwan and Thailand).
The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.
All of this should remain a bridge too far, forever.
EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans, as happened not long ago. A few years back, the OPM hack gave attackers all they needed to know about then-current and former government employees, service members, and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel, or to hold them hostage with threats against the things they value most, so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.
Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.
Before you leave a comment about how meaningless this is unless they do XYZ, please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you as an outsider can help support these organizers.
» Have there been any mistakes in signature verification for this letter?
» We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.
You can’t be silly enough to build a product that enables things like mass surveillance to proliferate and then try to take a stance of “please don’t use it like that”. You invented a genie and let him out of the bottle.
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
This is a trap. Two, I guess, but let's take the first one:
> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.
Banning domestic mass surveillance is irrelevant.
The Eyes agreements allow the respective participating countries to share data with each other. Every country spies on every other country, and every country tells every other country what it has gathered.
This renders laws preventing the state from spying on its own citizens irrelevant; they serve only as evidence of mass manipulation.
The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?
Cute. I will also sign this, since there are only upsides of good optics and no downsides. Let me know when any of them resigns after the companies inevitably take the million-dollar contracts.
Open source here is not enough, as hardware ownership matters. In an open-source world, you and I cannot run the 10-trillion-param model, but the data center controllers can.
I'd prefer something akin to the Biological Weapons Convention, which prohibits development, production, and transfer. If you think it isn't possible, you have to tell me why the bioweapons convention was successful and why it wouldn't be in the case of AI.
You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
Sam Altman tells staff at an all-hands that OpenAI is negotiating a deal with the Pentagon, after Trump orders the end of Anthropic contracts - https://news.ycombinator.com/item?id=47188698
My understanding is that it’s about the contract allowing Anthropic to refuse service when they deem a red line has been crossed. Hegseth and friends probably don’t want any discussion to even start about whether a red line may be in the process of being crossed, or to have to answer for that. They don’t want the legality or ethicality of any operation to be under Anthropic’s purview at all.
I don't see how public policy is being "forced" on anyone here? It seems like the system is working as intended: government wants to do X; company A says "I won't allow my product to be used for X"; government refuses to do business with company A. One side thinks the government should be allowed to dictate terms to a private supplier, the other side thinks the private supplier should be allowed to dictate terms to the government. Both are half right.
You can argue that the government refusing to do any business with company A is overreach, I suppose, but I imagine that the next logical escalation in this rhetorical slapfight is going to be the government saying "we cannot guarantee that any particular use will not include some version of X, and therefore we have to prevent working with this supplier"...which I sort of see?
Just to take the metaphor to absurdity, imagine that a maker of canned tomatoes decided to declare that their product cannot be used to "support a war on terror". Regardless of your feelings on wars on terror and/or canned tomatoes, the government would be entirely rational to avoid using that supplier.
It’s about punishing a company that is not complying. It’s a show of force to deter any future objections on moral grounds from companies that want to do business with the US gov.
Yeah, it's a nice gesture, but having watched Google handle the protests in recent years and their culture inching a step closer to Amazon, I do not foresee their leadership being swayed by employee resistance. They'll either quietly sign an agreement and discreetly implement it, or they will go scorched earth on their employees again.
If they're truly principled, and these are true red lines, given no other recourse, I would be impressed if Anthropic decided to shut down the company. Won't happen, but I would be smashing that F key if they did.
The other two definitely never would in a million years.
Heads will of course agree with the administration, and employees will likely make themselves targets if they sign this letter. All-anonymous signatures from said company are not a good look at all.
Speculation of course; let's see what really happens.
> I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions
Although it would be nice to have some high-level signees there, I think we shouldn’t minimize the role of lay employees in this matter. Without people knowledgeable enough to build and operate them, AI models are worthless to the C-suite.
The US would not be net negative growth without government spending. Other components of GDP grow a lot, outside of recessions.
Sure, if you immediately stopped government spending today we'd have negative growth today, but that's not because other things aren't growing; it's because you just removed part of the base that existed last year. That would be true of virtually any economy, or of anything that's growing when you remove a chunk of its base.
And yes I absolutely believe the government does not have better generative AI than Anthropic or its competitors.
When you put it like that, it makes me almost want to wish for Anthropic to die from this. But the blow to the field in general would be huge, and I benefit from their service as well.
> The USA showed itself to be a Command Economy that uses 'private enterprise' as a facade of legitimacy during Covid.
This is the case for every government/nation in the world. The difference between communism and capitalism is that the Politburo in capitalism allows the natural selection of elites based on their performance in an open economy. At least that was the case until 2011.
Instead of Epsteins blackmailing the disgusting side of human nature, it'll be rogue AIs sending selective blackmail, 24/7, to the spiteful among us (e.g. to motivate targeted killings, whether by human or machine).
>All of this should remain a bridge too far, forever.
Hopefully the Singularity will be graceful, killing off everybody simultaneously.
The best way for government to fight that would be to remind those who refuse to comply with their demands that the government already knows exactly where they live, where they hang out, and that any one of them can also be targeted by a three-letter agency or thrown into Guantánamo Bay. The government has been building and maintaining massive dossiers on everyone. They already have the ability to plant or fabricate whatever incriminating evidence they want. They already have the capability to jeopardize anyone, their personal assets, and their families, and all of that could be turned against them if something goes haywire or an outside adversary gains unauthorized access. The government isn't about to dismantle or abandon their entire domestic surveillance apparatus out of fear that it could be abused, hacked, or used against their own. Those are well-known and accepted risks. AI is just one more risk they can't resist taking.
Mankind is doing what it does best at scale: sprinting mindlessly into problematic scenarios because the species is fragmented and has arbitrarily established concepts of groups defined by region, race, ideology, etc.
I don't think people get to those positions by having firm principles.
Editorial Channel
What the content says
+0.75
Article 19: Freedom of Expression
High Advocacy Framing Practice
Editorial
+0.75
SETL
0.00
The petition is explicitly a vehicle for employee free expression about AI policy. Site invites opinions with diverse perspectives, creating space for 'holding and expressing' views about a contested policy. Core mission is facilitating expression.
FW Ratio: 63%
Observable Facts
Site title and purpose: 'We Will Not Be Divided Open Letter' for collecting signatures.
Form invites: 'Current and former employees of Google and OpenAI are invited to sign.'
Anonymous option preserves expression without identity risk.
Public signatories list shows opinions are published and visible.
FAQ acknowledges diverse views: 'The signatories likely have a diverse set of views.'
Inferences
Petition platform directly facilitates employees' right to hold and express opinions on matters of public concern.
Verification system protects authentic expression while preventing impersonation.
Anonymous + named options maximize inclusive participation in free expression.
+0.70
Article 20: Assembly & Association
High Advocacy Practice
Editorial
+0.70
SETL
0.00
The open letter is fundamentally a peaceful assembly of employees organizing around shared concern about AI policy. Invitation to 'sign' is invitation to peaceful collective action.
FW Ratio: 67%
Observable Facts
Form explicitly invites 'Current and former employees of Google and OpenAI' to join collective action.
Signatures are aggregated and published, demonstrating collective association.
Anonymous option protects associational privacy from employer surveillance.
Site FAQ: 'This letter was organized by a few citizens who are concerned' shows citizen-led association formation.
Inferences
Petition platform instantiates a form of peaceful assembly through signature collection.
Verification requirement protects the association's authenticity without restricting peaceful assembly.
+0.60
Article 12: Privacy
High Advocacy Practice
Editorial
+0.60
SETL
-0.34
FAQ explicitly prioritizes privacy: 'your personal information (name, email) is automatically and permanently deleted from our database within 24 hours of verification.' Clear advocacy for personal privacy protection.
FW Ratio: 63%
Observable Facts
FAQ states: 'If you sign anonymously, your personal information (name, email) is automatically and permanently deleted from our database within 24 hours of verification.'
Site explicitly discloses: 'No analytics or tracking scripts are used.'
Database is SQLite on encrypted persistent volume (Fly.io infrastructure).
Verification link method notes: 'the email will be visible in your inbox,' contrasted with Google Form method that 'verifies email without sending anything to your inbox.'
FAQ confirms: 'Only one organizer has access to review anonymous signatures during that 24-hour window.'
Inferences
24-hour deletion and limited reviewer access minimize surveillance and retaliation risk.
Encrypted storage and minimal retention align with international data protection standards.
+0.45
Article 18: Freedom of Thought
Medium Advocacy Framing
Editorial
+0.45
SETL
+0.15
FAQ explicitly states: 'The goal of this letter is to find common ground. The signatories likely have a diverse set of views. Signing this letter doesn't mean you think it's the only thing that needs to be done, just that you agree with the bottom line.' This frames the petition as respecting diverse conscience.
FW Ratio: 60%
Observable Facts
FAQ states: 'The signatories likely have a diverse set of views.'
FAQ confirms: 'Signing this letter doesn't mean you think it's the only thing that needs to be done, just that you agree with the bottom line.'
Form allows signing with or without name, and independently allows choosing whether to display role/title.
Inferences
Platform acknowledges and accommodates diverse moral viewpoints.
Anonymous option protects conscience from workplace pressure.
+0.35
Article 21: Political Participation
Medium Advocacy Framing
Editorial
+0.35
SETL
+0.13
The letter concerns 'potential misuse of AI' policy (DoW), framing it as a government/institutional policy issue. An employee petition to influence policy aligns with political participation. However, specific policy positions are not detailed on the visible page.
FW Ratio: 60%
Observable Facts
FAQ states: 'This letter was organized by a few citizens who are concerned about the potential misuse of AI against Americans.'
Organizers claim independence: 'We are not affiliated with any political party, advocacy group, or organization. We are not affiliated with any AI company and are not paid.'
The petition targets policy concern (implied government/institutional policy decision).
Inferences
Independent organization and employee collective action constitute a form of democratic participation.
Petition to address AI policy concern aligns with Article 21 right to participate in governance.
+0.20
Preamble
Medium Advocacy
Editorial
+0.20
SETL
-0.17
The act of organizing employees to collectively voice concerns about AI policy reflects underlying commitment to human dignity and rights protection, though not explicit in language.
FW Ratio: 60%
Observable Facts
The site allows anonymous signature submission with optional name/role entry.
Personal data (name, email) is stated to be automatically deleted within 24 hours of verification.
The FAQ states: 'No analytics or tracking scripts are used.'
Inferences
The design choices suggest organizational commitment to protecting human dignity of signatories.
Privacy protections reduce retaliation risk, supporting dignity in expression.
ND
Article 1: Freedom, Equality, Brotherhood
Medium Practice
Letter content not directly visible on evaluated page. Observable values from form design.
FW Ratio: 67%
Observable Facts
Form offers anonymous submission alongside named submission.
Both current and former employees are invited to sign with equal standing.
Inferences
Equal treatment in submission options suggests non-hierarchical dignity framework.
ND
Article 2: Non-Discrimination
Medium Practice
Letter content not visible. Inferred from sign-up structure.
FW Ratio: 75%
Observable Facts
Form explicitly accepts both 'Google' and 'OpenAI' employees.
Both 'Current employee' and 'Former employee' statuses are eligible.
Inclusive eligibility criteria minimize grounds for discrimination.
ND
Article 3: Life, Liberty, Security
Not directly addressed.
ND
Article 4: No Slavery
Not addressed.
ND
Article 5: No Torture
Not addressed.
ND
Article 6: Legal Personhood
Medium Practice
Not directly addressed.
FW Ratio: 67%
Observable Facts
Verification methods include work email access, badge upload, or co-signer confirmation.
System requires proof of actual employment status before publishing signature.
Inferences
Verification process affirms signatories' actual personhood and social status as employees.
ND
Article 7: Equality Before Law
Not addressed.
ND
Article 8: Right to Remedy
Not addressed.
ND
Article 9: No Arbitrary Detention
Not addressed.
ND
Article 10: Fair Hearing
Not addressed.
ND
Article 11: Presumption of Innocence
Not addressed.
ND
Article 13: Freedom of Movement
Not addressed.
ND
Article 14: Asylum
Not addressed.
ND
Article 15: Nationality
Not addressed.
ND
Article 16: Marriage & Family
Not addressed.
ND
Article 17: Property
Not addressed.
ND
Article 22: Social Security
Not directly addressed.
ND
Article 23: Work & Equal Pay
While signatories are employees organizing, no explicit workplace rights, wages, or union claims are made.
ND
Article 24: Rest & Leisure
Not addressed.
ND
Article 25: Standard of Living
Not addressed.
ND
Article 26: Education
Not addressed.
ND
Article 27: Cultural Participation
Not addressed.
ND
Article 28: Social & International Order
Not addressed.
ND
Article 29: Duties to Community
Not addressed.
ND
Article 30: No Destruction of Rights
Not addressed.
Structural Channel
What the site does
+0.75
Article 12: Privacy
High Advocacy Practice
Structural
+0.75
Context Modifier
ND
SETL
-0.34
Platform implements strong privacy protections: encrypted SQLite database, no analytics/tracking, automatic data deletion, anonymous signing options, no email-in-inbox option (Google Form verification), minimal data retention. Host (Fly.io) provides encrypted persistent volume storage.
+0.75
Article 19: Freedom of Expression
High Advocacy Framing Practice
Structural
+0.75
Context Modifier
ND
SETL
0.00
Signature platform, anonymous options, verified-only access (preventing bot/spam but protecting legitimate speakers), and public publication of opinions all structurally enable Article 19. No censorship, moderation, or opinion-filtering observed.
+0.70
Article 20: Assembly & Association
High Advocacy Practice
Structural
+0.70
Context Modifier
ND
SETL
0.00
Signature aggregation platform is infrastructure for peaceful assembly and association. Verified employee-only restriction ensures authentic employee membership (integrity control for association), not suppression.
+0.40
Article 18: Freedom of Thought
Medium Advocacy Framing
Structural
+0.40
Context Modifier
ND
SETL
+0.15
Anonymous options allow signatories to participate without exposing conscience/beliefs to employer retaliation. Role/title disclosure is optional and asymmetric (public even if signing anonymously) allowing conscience expression without full identity exposure.
+0.30
Preamble
Medium Advocacy
Structural
+0.30
Context Modifier
ND
SETL
-0.17
Platform infrastructure prioritizes personal dignity via anonymous options, data minimization (24h deletion), and no tracking. Privacy-protective design aligns with dignity principles.
+0.30
Article 21: Political Participation
Medium Advocacy Framing
Structural
+0.30
Context Modifier
ND
SETL
+0.13
Petition is a form of political voice: employees communicating to policymakers/public about AI governance. Independent organization (no party/corporate affiliation) enables authentic political participation.
ND
Article 1: Freedom, Equality, Brotherhood
Medium Practice
Anonymous option and 'current/former' employee categories treat all participants equally regardless of employment status or hierarchy, supporting equal dignity principle.
ND
Article 2: Non-Discrimination
Medium Practice
Platform accepts signatories regardless of employment status, role, company affiliation (Google or OpenAI), or willingness to identify—no discrimination observable in access rules.
ND
Article 3: Life, Liberty, Security
Not addressed by visible site structure.
ND
Article 4: No Slavery
Not addressed.
ND
Article 5: No Torture
Not addressed.
ND
Article 6: Legal Personhood
Medium Practice
Verification system confirms identity/personhood through email, badge photo, or co-signer vouching—respects individuals as recognized persons with verifiable status.
ND
Article 7: Equality Before Law
Not addressed.
ND
Article 8: Right to Remedy
Not addressed.
ND
Article 9: No Arbitrary Detention
Not addressed.
ND
Article 10: Fair Hearing
Not addressed.
ND
Article 11: Presumption of Innocence
Not addressed.
ND
Article 13: Freedom of Movement
Not addressed.
ND
Article 14: Asylum
Not addressed.
ND
Article 15: Nationality
Not addressed.
ND
Article 16: Marriage & Family
Not addressed.
ND
Article 17: Property
Not addressed.
ND
Article 22: Social Security
Not addressed.
ND
Article 23: Work & Equal Pay
Not addressed.
ND
Article 24: Rest & Leisure
Not addressed.
ND
Article 25: Standard of Living
Not addressed.
ND
Article 26: Education
Not addressed.
ND
Article 27: Cultural Participation
Not addressed.
ND
Article 28: Social & International Order
Not addressed.
ND
Article 29: Duties to Community
Not addressed.
ND
Article 30: No Destruction of Rights
Not addressed.
Supplementary Signals
Epistemic Quality
0.47 (high claims)
Sources
0.3
Evidence
0.3
Uncertainty
0.6
Purpose
0.9
Propaganda Flags
2 techniques detected
causal oversimplification
FAQ states: 'The current situation with the DoW is so clear-cut that it can bring together a very broad coalition.' Presents complex AI policy as binary/obvious without supporting evidence.
appeal to fear
FAQ mentions 'potential misuse of AI against Americans' without detailing specific instances or risks, creating concern based on possibility rather than demonstrated harm.
Solution Orientation
0.72 (solution oriented)
Reader Agency
0.8
Emotional Tone
measured
Valence
-0.1
Arousal
0.5
Dominance
0.3
Stakeholder Voice
0.45 (2 perspectives)
Speaks: individuals, workers
About: corporation, government
Temporal Framing
present, immediate
Geographic Scope
national
United States
Complexity
accessible, low jargon, general
Transparency
0.65
✗ Author · ✓ Conflicts · ✓ Funding
Audit Trail
46 entries
2026-02-28 07:42
eval
Evaluated by claude-haiku-4-5-20251001: +0.55 (Moderate positive)
2026-02-28 07:24
eval_success
Light evaluated: Mild positive (0.10)
--
2026-02-28 07:24
rater_validation_warn
Light validation warnings for model llama-3.3-70b-wai: 0W 1R
--
2026-02-28 07:24
eval
Evaluated by llama-3.3-70b-wai: +0.10 (Mild positive) -0.10
2026-02-28 07:22
rater_validation_warn
Light validation warnings for model llama-4-scout-wai: 0W 1R
--
2026-02-28 07:22
eval_success
Light evaluated: Mild positive (0.24)
--
2026-02-28 07:22
eval
Evaluated by llama-4-scout-wai: +0.24 (Mild positive) 0.00
2026-02-28 07:20
eval_success
Light evaluated: Mild positive (0.20)
--
2026-02-28 07:20
eval
Evaluated by llama-3.3-70b-wai: +0.20 (Mild positive) 0.00
2026-02-28 07:20
rater_validation_warn
Light validation warnings for model llama-3.3-70b-wai: 0W 1R
--
2026-02-28 06:18
eval_success
Light evaluated: Mild positive (0.24)
--
2026-02-28 06:18
eval
Evaluated by llama-4-scout-wai: +0.24 (Mild positive) 0.00
2026-02-28 06:18
rater_validation_warn
Light validation warnings for model llama-4-scout-wai: 0W 1R
--
2026-02-28 06:15
eval_success
Light evaluated: Mild positive (0.20)
--
2026-02-28 06:15
rater_validation_warn
Light validation warnings for model llama-3.3-70b-wai: 0W 1R
--
2026-02-28 06:15
eval
Evaluated by llama-3.3-70b-wai: +0.20 (Mild positive) 0.00
2026-02-28 06:14
eval_success
Light evaluated: Mild positive (0.20)
--
2026-02-28 06:14
eval
Evaluated by llama-3.3-70b-wai: +0.20 (Mild positive) 0.00
2026-02-28 06:14
rater_validation_warn
Light validation warnings for model llama-3.3-70b-wai: 0W 1R
--
2026-02-28 05:51
eval_success
Light evaluated: Mild positive (0.20)
--
2026-02-28 05:51
eval
Evaluated by llama-3.3-70b-wai: +0.20 (Mild positive) -0.30
2026-02-28 05:51
rater_validation_warn
Light validation warnings for model llama-3.3-70b-wai: 0W 1R
--
2026-02-28 05:29
eval_success
Light evaluated: Mild positive (0.24)
--
2026-02-28 05:29
eval
Evaluated by llama-4-scout-wai: +0.24 (Mild positive) 0.00
2026-02-28 05:29
rater_validation_warn
Light validation warnings for model llama-4-scout-wai: 0W 1R
--
2026-02-28 05:28
eval_success
Light evaluated: Mild positive (0.24)
--
2026-02-28 05:28
rater_validation_warn
Light validation warnings for model llama-4-scout-wai: 0W 1R
--
2026-02-28 05:28
eval
Evaluated by llama-4-scout-wai: +0.24 (Mild positive) -0.56
2026-02-28 04:53
eval_success
Evaluated: Moderate positive (0.33)
--
2026-02-28 04:53
eval
Evaluated by deepseek-v3.2: +0.33 (Moderate positive), 11,927 tokens, -0.11
2026-02-28 04:48
eval_success
Light evaluated: Moderate positive (0.50)
--
2026-02-28 04:48
eval
Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 04:44
eval
Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 04:42
eval
Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 04:28
eval
Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 04:28
eval
Evaluated by deepseek-v3.2: +0.44 (Moderate positive), 10,591 tokens, +0.03
2026-02-28 04:07
eval
Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 03:49
eval
Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 03:27
eval
Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 03:26
eval
Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 03:17
eval
Evaluated by deepseek-v3.2: +0.41 (Moderate positive), 11,613 tokens
2026-02-28 02:56
eval
Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 02:03
eval
Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive) 0.00
2026-02-28 01:59
eval
Evaluated by llama-3.3-70b-wai: +0.50 (Moderate positive)
2026-02-28 01:16
eval
Evaluated by llama-4-scout-wai: +0.80 (Strong positive) 0.00
2026-02-28 01:11
eval
Evaluated by llama-4-scout-wai: +0.80 (Strong positive)
build fe156f0 · deployed 2026-02-28 07:39 UTC · evaluated 2026-02-28 07:53:21 UTC
Support HN HRCB
Each evaluation uses real API credits. HN HRCB runs on donations — no ads, no paywalls.
If you find it useful, please consider helping keep it running.