Model Comparison
Model | Editorial | Structural | Class | Conf | SETL | Theme
@cf/meta/llama-4-scout-17b-16e-instruct (lite) | -0.38 | ND | Moderate negative | 0.80 | 0.00 | Military AI
deepseek/deepseek-v3.2-20251201 | 0.00 | ND | Neutral | 0.00 | n/a | No Content
@cf/meta/llama-3.3-70b-instruct-fp8-fast (lite) | 0.00 | ND | Neutral | 0.80 | 0.00 | Tech Partnership
Per-section scores (llama-4-scout-17b lite · deepseek-v3.2 · llama-3.3-70b lite): ND for every section, Preamble through Article 30, under all three models.
Our Agreement with the Department of War (openai.com) · Editorial -0.38 · Structural ND
320 points by surprisetalk, 15 hours ago | 226 comments on HN | Neutral Contested Policy · v3.7 · 2026-03-01 07:40:25
Summary: Neutral (No Data)
The page content is incomplete and appears to be a fragment of a title or header. No substantive content about human rights or the agreement referenced in the title is observable. The evaluation returns neutral/no data across all UDHR articles due to insufficient content for assessment.
Article Heatmap
All 31 entries (Preamble and Articles 1–30): ND (No Data).
Aggregates
Editorial Mean: -0.38 · Structural Mean: ND
Weighted Mean: 0.00 · Unweighted Mean: 0.00
Max: 0.00 (N/A) · Min: 0.00 (N/A)
Signal: 0 · No Data: 31
Volatility: 0.00 (Low)
Negative: 0 · Channel weights: E 0.6, S 0.4
SETL: ND
FW Ratio: 51% (0 facts · 0 inferences)
Evidence: 0% coverage
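The aggregates above combine an editorial and a structural channel with weights E 0.6 / S 0.4. A minimal sketch of how such a channel-weighted mean might be computed, assuming (this is our guess, not the dashboard's documented behavior) that an ND channel drops out and the remaining weight is renormalized; the function name is ours:

```python
# Sketch: ND-aware channel-weighted mean, assuming weights E=0.6, S=0.4
# and that an ND channel (None) is excluded with its weight renormalized.

def weighted_mean(editorial, structural, w_e=0.6, w_s=0.4):
    """Combine two channel scores; None means ND (no data)."""
    pairs = [(s, w) for s, w in ((editorial, w_e), (structural, w_s))
             if s is not None]
    if not pairs:
        return None  # all channels ND
    total_w = sum(w for _, w in pairs)
    return sum(s * w for s, w in pairs) / total_w

print(round(weighted_mean(-0.38, None), 2))  # -0.38 (structural ND drops out)
print(weighted_mean(None, None))             # None
```

Under this assumption the page's numbers are consistent: with the structural channel ND, the weighted mean reduces to the editorial score alone.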
Theme Radar
Foundation, Security, Legal, Privacy & Movement, Personal, Expression, Economic & Social, Cultural, Order & Duties: all nine themes at 0.00 (0 articles each).
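Each radar theme appears to average the scores of its member articles, with ND articles excluded, which is how every theme here shows 0.00 over 0 articles. A minimal sketch of that aggregation; the article-to-theme grouping and the `theme_scores` helper are illustrative assumptions, not the dashboard's actual mapping:

```python
# Sketch: per-theme averages over article scores, assuming each theme
# averages its member articles and ND (None) articles are excluded.
# The grouping below is illustrative, not the dashboard's real mapping.

THEMES = {
    "Foundation": [1, 2],
    "Security": [3, 4, 5],
    "Legal": [6, 7, 8, 9, 10, 11],
}

def theme_scores(article_scores):
    """article_scores: {article_number: score or None (ND)}."""
    out = {}
    for theme, arts in THEMES.items():
        vals = [article_scores[a] for a in arts
                if article_scores.get(a) is not None]
        out[theme] = (sum(vals) / len(vals), len(vals)) if vals else (0.0, 0)
    return out

# With every article ND, each theme reports 0.00 over 0 articles:
print(theme_scores({n: None for n in range(1, 31)}))
```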
HN Discussion 20 top-level · 26 replies
-_- 2026-02-28 20:39 UTC link
“The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”

So DoW did get the “all lawful purposes” language they were after, with reference to existing (inadequate, in my view) regulations around autonomous weapons and mass surveillance.

chiararvtk 2026-02-28 20:52 UTC link
"What if the government just changes the law or existing DoW policies?"

Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.

So this applies only if they change the law, not if they break the law.

"What happens if the government violates the terms of the contract?"

As with any contract, we could terminate it if the counterparty violates the terms. We don’t expect that to happen.

WE COULD [...]. Yeah, sure, I believe that.

eoskx 2026-02-28 20:53 UTC link
Not great. This seems like loose language: it isn't OpenAI saying no autonomous weapons use, only that use must be consistent with laws, regulations, and department policies: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities."

More of the same here. No wonder the DoD signed with OpenAI instead of Anthropic. Delegating morality to the law when you know the law is inadequate seems like "not a good thing".

"For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law."

eoskx 2026-02-28 20:53 UTC link
OpenAI: "let's delegate morality to laws that we know are wholly inadequate for AI to absolve ourselves of any moral responsibility."
piker 2026-02-28 21:09 UTC link
> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.

The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted.

OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.

I personally can agree with both, and I do believe that the Administration's behavior towards Anthropic was abhorrent, bad-faith, and ultimately damaging to US interests.

fluidcruft 2026-02-28 21:13 UTC link
Does OpenAI enforce those red lines in all contracts?

From what I can tell the Anthropic issue was triggered by something Palantir was doing as a contractor for DoW, not anything related to direct contracts between DoW and Anthropic, and DoW was annoyed that Anthropic interfered with what Palantir was up to.

In other words will OpenAI enforce these "red lines" against use by a third-party government contractor?

If not, this seems pretty meaningless: they are essentially playing PR while hiding behind Palantir.

FusionX 2026-02-28 21:21 UTC link
It's hard to believe that this was written in any good faith when there's so much beating around the bush and careful legalese wordplay.
zmmmmm 2026-02-28 21:21 UTC link
Saying that an entity with the power to make its own laws can use something for "all lawful purposes" is saying they can use it for anything.
caidan 2026-02-28 21:25 UTC link
How incredibly unsurprising. This is why it is pointless to make moral stands as employees when you do not ultimately have power over the company's decisions. The only power you have is to quit.

I wonder how many will do so, and how many will simply accept Sam's AI-written rationalization as his own and keep collecting their obscene pay packages…

Waterluvian 2026-02-28 21:37 UTC link
These communications offend me because they treat the audience like they’re stupid, stupid, stupid.

But I imagine that being honest about your corporate identity is suboptimal. It’s probably an important cognitive dissonance tool for the employees? It’s like when autocracies repeat big obvious lies endlessly. Gives those who want to opt out of reality an option.

burnJS 2026-02-28 21:40 UTC link
As a stealth CEO of a profitable SaaS, this is a nice reminder for my company to wind down its relationship with OpenAI. I have no doubt Anthropic will eventually become evil, but at least they have a backbone today.

Goodbye Sam.

Edit: Also, referring to the DOD as the Department of War is cringe.

nkassis 2026-02-28 21:44 UTC link
This blog post really doesn't make it sound any better: there is no clear refusal to participate in the questionable uses Anthropic was against, merely that use must be legal and must be tested.

This feels like IBM in the 1930s selling tabulating machines to the Germans and downplaying their knowledge of their use. They seem to want us to naively believe it won't be used for exactly what the military has always wanted: autonomous weapons and mass surveillance. Furthermore, there are far more mundane uses they might make of the technology that are perfectly legal yet morally in gray areas.

Buttons840 2026-02-28 21:46 UTC link
I don't think Anthropic is a saint that will never do anything unethical. I don't think ChatGPT is any better or worse.

But I do think my cancelling ChatGPT so I can try Claude, at this time, sends the message I want to send, which is why I did it.

tfehring 2026-02-28 22:03 UTC link
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

My reading of this is that OpenAI's contract with the Pentagon only prohibits mass surveillance of US citizens to the extent that that surveillance is already prohibited by law. For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale. As I understand it, this was not the case with Anthropic's contract.

If I'm right, this is abhorrent. However, I've already jumped to a lot of incorrect conclusions in the last few days, so I'm doing my best to withhold judgment for now, and holding out hope for a plausible competing explanation.

(Disclosure, I'm a former OpenAI employee and current shareholder.)

furryrain 2026-02-28 22:09 UTC link
> Fully autonomous weapons. The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment.

Can anyone explain this constraint?

Why do fully autonomous weapons require edge deployment?

Does "fully autonomous" in this context mean "disconnected from the Internet"?

If so, can a drone with Internet connectivity use OpenAI?

Or maybe it's about on-premise requirements: the military doesn't want to depend on OpenAI's DCs for weaponry, and instead wants OpenAI in their own DCs for that?

solenoid0937 2026-02-28 23:09 UTC link
Any OAI employee with >$2M NW that chooses to stick around is simply devoid of a moral compass. No different than working for xAI or Palantir now.

I get you have tens of millions vesting. Hope you find it within you to be a good person instead of just a successful one.

maniacwhat 2026-02-28 23:19 UTC link
Hold on, isn't the government subject to the law anyway?

So a contract saying "they can only do x and y when it is legal", is not really any different to a contract without the legal clause. I.e. "they can do x and y".

dojomouse 2026-02-28 23:40 UTC link
> The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment.

… What?? Much of this seems duplicitous, but this isn't even coherent. Is their implication that it's not "autonomous" if it involves an API call to an external system? That definition alone would be extremely alarming.

nunez 2026-03-01 02:51 UTC link
This is extremely interesting. OpenAI is putting a lot of emphasis on their deployment being cloud-based (presumably GovCloud/C2S). Was Anthropic willing and cleared to deploy their stack high-side in NIPR/SIPR?

If that is the case, then that means that Anthropic is theoretically close to supporting private sector on-prem model deployments AND that this solution is FedRAMP High, which is more than enough for financial sector and healthcare. AWS, GCP and nVIDIA (to a lesser degree) should be insanely worried if that's the case.

derwiki 2026-03-01 06:43 UTC link
I appreciate that they posted this, but can’t fathom why. Does this assuage anyone’s concerns?
arppacket 2026-02-28 21:14 UTC link
Exactly, they're letting the lawless administration decide what the lawful purposes and the policies in general are.

The "human approval" will be someone clicking a YES button all the time, like Israeli officers did in the Gaza bombing.

NickNaraghi 2026-02-28 21:19 UTC link
That language is not consistent with:

> No use of OpenAI technology to direct autonomous weapons systems

bertil 2026-02-28 21:27 UTC link
Can their solution recommend to shoot at combatants lost at sea?

This is key because it's the textbook example of a war crime. It's also something the current administration has bragged about doing dozens of times.

More succinctly: who decides what is legal here? OpenAI, the Secretary of Defense, or a judge?

randlet 2026-02-28 21:30 UTC link
> The only power you have is to quit.

This is an incredible power when exercised en masse.

coffeefirst 2026-02-28 21:32 UTC link
Wait, one of those contracts says you may not build the Terminator.

The other says you may build the Terminator if the DOD lawyers say it’s okay.

This is a major distinction.

einpoklum 2026-02-28 21:39 UTC link
> The only power you have is to quit.

Employees often have the power to oust the owner and take over the company; and more often than that have the power to have business grind to a halt. It does take a strong union and a culture of solidarity and sticking together of course, which I doubt we would find in a place like OpenAI.

notepad0x90 2026-02-28 21:40 UTC link
It's a bit worse, because in the case of mass surveillance, they can't just make their own law, they need to make that law and have 2/3rds of US states sign off on a constitutional amendment.

Aiding someone while you know they're trying to break the law is conspiracy to break the law. OpenAI is culpable. You can't sue the government in many cases, but you can with OpenAI.

dispersed 2026-02-28 21:44 UTC link
It's perhaps too late in this case, but this is what unions are for. Sam Altman + a handful of scabs can't keep the lights on at OpenAI if a critical mass of engineers refuse to work until this decision is reversed (or, even better, not made at all, since the union would be part of that process).
Trasmatta 2026-02-28 21:52 UTC link
And a nice bonus is that Claude is way better than ChatGPT right now anyway
avaer 2026-02-28 21:52 UTC link
The word "legal" is doing all of the heavy lifting, considering the countless adjudicated illegal things the government is doing publicly. What happens behind classified closed doors?

I guess you can consider it a moral stance that if the government constantly does illegal things you wouldn't trust them to follow the law.

I know that's not what Anthropic said but that's the gist I'm getting.

Buttons840 2026-02-28 21:54 UTC link
It's also good to demonstrate to these companies that we're willing to move. If these companies know their entire userbase will just pack up and move at the first controversy, there won't be any controversies.
gentleman11 2026-02-28 22:14 UTC link
OpenAI, the former non-profit whose board tried to fire the CEO for being deceptive, and which is no longer open at all, isn't exactly about ethics these days.

Even on a personal level: OpenAI has changed its privacy policy twice to let it gather data on me it wasn't gathering before. A lot of steps to disable it each time, tons of dark patterns. And the data checkout just bugs out too; it's a fake feature to hide how much they are using everything you type to them.

fiatpandas 2026-02-28 22:18 UTC link
Exactly. And not only can they make their own rules, but they can draft and enforce them effectively in secret.
Imustaskforhelp 2026-02-28 22:33 UTC link
> I don't think Anthropic is a saint that will never do anything unethical. I don't think ChatGPT is any better or worse.

I sort of agree and think that over a long horizon, Open weights models are going to be the best / are the best

I do think only a fraction of companies would do what Anthropic did here. There must have been quite significant pressure on them to fold, but they didn't. So I'd rather do at least something to show companies that people do care about such things, and it's best if we have at the very least some unconditional morals that are not for sale at any price.

I think we can still have disagreements with Anthropic, and I certainly still have some about their thoughts on open models for example, but in all regards I would consider them more trustworthy than OpenAI, imho.

That being said, it's worth saying that since I don't have a good GPU, I'm going to stop using ChatGPT as well and will use Claude (Kimi?) instead, like many people are doing. I do think that might be the path going forward.

_alternator_ 2026-02-28 22:36 UTC link
The language allows for the DoD to use the model for anything that they deem legal. Read it carefully.

It begins “The Department of War may use the AI System for all lawful purposes…” and at no point does it limit that. Rather, it describes what the DOW considers lawful today, and allows them to change the regulations.

As Dario said, it’s weasel legal language, and this administration is the master of taking liberties with legalese, like killing civilians on boats, sending troops to cities, seizing state ballots, deporting immigrants for speech, etc etc etc.

Sam Altman is either a fool, or he thinks the rest of us are.

storus 2026-02-28 22:42 UTC link
Local inference might be a better bet for you.
_alternator_ 2026-02-28 22:50 UTC link
This is exactly what it says: the only restrictions are the restrictions that are already in law. This seems like the weasel language Dario was talking about.
tombert 2026-02-28 22:51 UTC link
Are you saying we can't trust the words of a convicted fraudster?
kace91 2026-02-28 23:26 UTC link
How's Claude for non-coding tasks? For example, using it as a Google substitute for trivial questions, like a recipe or a phone review.

Genuinely asking, because I might follow your steps.

operator_nil 2026-02-28 23:46 UTC link
People often overlook how all the NSA-related activities and government overreach come with a nice memo from officials stating how "lawful" the questionable actions they're taking are.
layer8 2026-03-01 00:02 UTC link
The OpenAI employees had the power to have Sam Altman reinstated when he was ousted by the board two years ago.
layer8 2026-03-01 00:05 UTC link
I suppose it means they can refuse service on contractual grounds instead of having to sue the government for illegal actions after the fact.
irthomasthomas 2026-03-01 01:16 UTC link
Even worse is the kill-bot policy: the eventual-human-in-the-loop clause, a.k.a. yolo mode, or --dangerously-skip-permissions.

Imagine arming chatgpt and letting it pick targets and launch missiles from clawdbot.

ajyoon 2026-03-01 01:26 UTC link
My bullshit alarms were blaring at this line. They really think we are that stupid.
squeaky-clean 2026-03-01 02:09 UTC link
Could just be latency. You don't want your terminator killbot to take 200ms to decide where to aim.
caseysoftware 2026-03-01 03:27 UTC link
> For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale.

Third Party Doctrine makes trouble for us once again.

Eliminate that and MANY nightmare scenarios disappear or become exceptionally more complicated.

Editorial Channel
What the content says
All 31 entries (Preamble and Articles 1–30): ND.

Structural Channel
What the site does
Element Modifier Affects Note
Legal & Terms
Privacy
Privacy policy available. Standard data collection for service operation noted. No strong positive or negative privacy signals observed on-page for this evaluation.
Terms of Service
Terms of Service link present. Standard terms for AI service provider. No exceptional clauses directly observed on this page.
Identity & Mission
Mission
OpenAI's mission to ensure AGI benefits humanity is referenced elsewhere on site. No direct mission statement on this page.
Editorial Code
No editorial code or journalistic standards disclosed on this page.
Ownership
OpenAI ownership structure not detailed on this page.
Access & Distribution
Access Model
Mixed access model (free and paid). Not directly relevant to this page's content.
Ad/Tracking
Standard analytics likely in use. No specific tracking disclosures on this page.
Accessibility
Site appears to follow general web accessibility conventions. No dedicated accessibility statement observed on this page.
All 31 entries (Preamble and Articles 1–30): ND.

Supplementary Signals
How this content communicates, beyond directional lean.
Epistemic Quality (how well-sourced and evidence-based is this content?): 0.01, low claims · Sources 0.0 · Evidence 0.0 · Uncertainty 0.0 · Purpose 0.1
Propaganda Flags: no manipulative rhetoric detected (0 techniques)
Emotional Tone (positive/negative, intensity, authority): detached · Valence 0.0 · Arousal 0.0 · Dominance 0.0
Transparency (does the content identify its author and disclose interests?): 0.00 · ✗ Author
More signals (context, framing & audience):
Solution Orientation (solutions or only problems?): 0.00, problem only · Reader Agency 0.0
Stakeholder Voice (whose perspectives are represented?): 0.00, 0 perspectives
Temporal Framing (backward, present, or forward?): present, unspecified
Geographic Scope: unspecified
Complexity (how accessible to a general audience?): accessible, low jargon, none
Longitudinal 383 HN snapshots · 38 evals
Audit Trail 58 entries
2026-03-01 08:02 model_divergence Cross-model spread 0.38 exceeds threshold (2 models) - -
2026-03-01 08:02 eval_success Lite evaluated: Moderate negative (-0.38) - -
2026-03-01 08:02 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 07:43 eval_success Evaluated: Neutral (0.00) - -
2026-03-01 07:43 eval Evaluated by deepseek-v3.2: 0.00 (Neutral) 7,405 tokens 0.00
2026-03-01 07:40 eval_success Evaluated: Neutral (0.00) - -
2026-03-01 07:40 eval Evaluated by deepseek-v3.2: 0.00 (Neutral) 7,850 tokens -0.01
2026-03-01 07:32 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 07:32 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-03-01 07:31 eval_success Evaluated: Neutral (0.01) - -
2026-03-01 07:31 eval Evaluated by deepseek-v3.2: +0.01 (Neutral) 9,564 tokens
2026-03-01 07:27 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 07:27 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-03-01 07:07 eval_success Lite evaluated: Moderate negative (-0.38) - -
2026-03-01 07:07 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 06:41 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 06:41 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-03-01 06:19 eval_success Lite evaluated: Moderate negative (-0.38) - -
2026-03-01 06:19 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 05:59 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 05:59 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-03-01 05:33 eval_success Lite evaluated: Moderate negative (-0.38) - -
2026-03-01 05:33 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 05:28 eval_success Lite evaluated: Moderate negative (-0.38) - -
2026-03-01 05:28 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 05:23 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 05:23 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-03-01 04:55 eval_success Lite evaluated: Moderate negative (-0.38) - -
2026-03-01 04:55 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 04:34 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 04:34 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) +0.20
reasoning
Neutral tech announcement
2026-03-01 04:29 eval_success Lite evaluated: Mild negative (-0.20) - -
2026-03-01 04:29 eval Evaluated by llama-3.3-70b-wai: -0.20 (Mild negative) -0.20
reasoning
Neutral tech announcement
2026-03-01 04:07 eval_success Lite evaluated: Moderate negative (-0.38) - -
2026-03-01 04:07 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 03:51 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 03:51 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-03-01 03:18 eval_success Lite evaluated: Moderate negative (-0.38) - -
2026-03-01 03:18 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 03:14 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 03:10 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-03-01 03:05 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-03-01 02:39 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 02:35 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 02:23 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) +0.20
reasoning
Neutral tech announcement
2026-03-01 01:44 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 01:38 eval Evaluated by llama-3.3-70b-wai: -0.20 (Mild negative) -0.20
reasoning
Neutral tech announcement
2026-03-01 01:05 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 01:00 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-03-01 00:12 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-03-01 00:11 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-02-28 23:27 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-02-28 23:27 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-02-28 23:22 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-02-28 22:30 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative) 0.00
reasoning
Editorial on partnership with Department of War
2026-02-28 22:27 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Neutral tech announcement
2026-02-28 21:44 eval Evaluated by llama-4-scout-wai: -0.38 (Moderate negative)
reasoning
Editorial on partnership with Department of War
2026-02-28 21:43 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral)
reasoning
Neutral tech announcement
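The model_divergence entries in the audit trail flag runs where the cross-model spread of editorial scores exceeds a threshold (here 0.38 across two models). A minimal sketch of such a check, assuming spread is simply max minus min; the threshold value of 0.3 and the `divergence` function name are illustrative assumptions:

```python
# Sketch: flag cross-model divergence, assuming spread = max - min of
# the per-model editorial scores. The 0.3 threshold is hypothetical.

def divergence(scores: dict[str, float], threshold: float = 0.3):
    """Return (spread, flagged) for a set of per-model editorial scores."""
    vals = list(scores.values())
    spread = max(vals) - min(vals)
    return round(spread, 2), spread > threshold

spread, flagged = divergence({
    "llama-4-scout-wai": -0.38,  # Moderate negative
    "llama-3.3-70b-wai": 0.00,   # Neutral
})
print(spread, flagged)  # 0.38 True
```

With the two scores on this page, the spread of 0.38 would exceed such a threshold, matching the "Cross-model spread 0.38 exceeds threshold (2 models)" entry.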