+0.17 Stop Burning Your Context Window – How We Cut MCP Output by 98% in Claude Code (mksg.lu · S: +0.03)
397 points by mksglu 1 day ago | 82 comments on HN | Mild positive · Editorial · v3.7 · 2026-03-01 07:38:59
Summary · Digital Access & Privacy Advocates
The content is a technical blog post introducing Context Mode, an MCP server that reduces Claude Code context consumption by 98%. The strongest human rights engagement appears in Article 27 (cultural participation and scientific advancement) through open source advocacy, and Article 19 (freedom of expression) through technical knowledge sharing. The evaluation shows mild positive directionality overall, with advocacy for open access and privacy-conscious design, though limited by the technical focus.
Article Heatmap
Preamble: +0.10 — Preamble
Article 1: ND — Freedom, Equality, Brotherhood
Article 2: ND — Non-Discrimination
Article 3: ND — Life, Liberty, Security
Article 4: ND — No Slavery
Article 5: ND — No Torture
Article 6: ND — Legal Personhood
Article 7: ND — Equality Before Law
Article 8: ND — Right to Remedy
Article 9: ND — No Arbitrary Detention
Article 10: ND — Fair Hearing
Article 11: ND — Presumption of Innocence
Article 12: +0.03 — Privacy
Article 13: ND — Freedom of Movement
Article 14: ND — Asylum
Article 15: ND — Nationality
Article 16: ND — Marriage & Family
Article 17: ND — Property
Article 18: ND — Freedom of Thought
Article 19: +0.11 — Freedom of Expression
Article 20: ND — Assembly & Association
Article 21: ND — Political Participation
Article 22: ND — Social Security
Article 23: +0.10 — Work & Equal Pay
Article 24: ND — Rest & Leisure
Article 25: ND — Standard of Living
Article 26: +0.15 — Education
Article 27: +0.39 — Cultural Participation
Article 28: ND — Social & International Order
Article 29: ND — Duties to Community
Article 30: ND — No Destruction of Rights
Negative Neutral Positive No Data
Aggregates
Editorial Mean: +0.17 · Structural Mean: +0.03
Weighted Mean: +0.17 · Unweighted Mean: +0.15
Max: +0.39 (Article 27) · Min: +0.03 (Article 12)
Signal: 6 · No Data: 25
Volatility: 0.11 (Medium)
Negative: 0 · Channels: E 0.6 / S 0.4
SETL: +0.19 (Editorial-dominant)
FW Ratio: 57% (12 facts · 9 inferences)
Evidence: 10% coverage
1H · 3M · 2L · 25 ND
Theme Radar
Foundation: 0.10 (1 article) · Security: 0.00 (0) · Legal: 0.00 (0) · Privacy & Movement: 0.03 (1 article) · Personal: 0.00 (0) · Expression: 0.11 (1 article) · Economic & Social: 0.10 (1 article) · Cultural: 0.27 (2 articles) · Order & Duties: 0.00 (0)
HN Discussion 17 top-level · 20 replies
mksglu 2026-02-28 10:02 UTC link
Author here. I shared the GitHub repo a few days ago (https://news.ycombinator.com/item?id=47148025) and got great feedback. This is the writeup explaining the architecture.

The core idea: every MCP tool call dumps raw data into your 200K context window. Context Mode spawns isolated subprocesses — only stdout enters context. No LLM calls, purely algorithmic: SQLite FTS5 with BM25 ranking and Porter stemming.

Since the last post we've seen 228 stars and some real-world usage data. The biggest surprise was how much subagent routing matters — auto-upgrading Bash subagents to general-purpose so they can use batch_execute instead of flooding context with raw output.

Source: https://github.com/mksglu/claude-context-mode

Happy to answer any architecture questions.
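The retrieval core described above (no LLM calls; SQLite FTS5 with BM25 ranking and Porter stemming) can be sketched in a few lines of Python. The table name and sample rows here are illustrative, not taken from the actual implementation:

```python
import sqlite3

# In-memory index for the sketch; the real tool presumably persists to disk.
db = sqlite3.connect(":memory:")

# An FTS5 virtual table with the Porter stemmer, as the post describes.
db.execute(
    "CREATE VIRTUAL TABLE tool_output USING fts5(content, tokenize='porter')"
)
db.executemany(
    "INSERT INTO tool_output (content) VALUES (?)",
    [
        ("fix lint errors in the parser module",),
        ("playwright snapshot of the login page",),
        ("git log shows 153 commits touching auth",),
    ],
)

# Porter stemming lets 'fixing'/'linting' match 'fix'/'lint'; bm25() ranks
# results (in SQLite's convention, lower scores are better matches).
rows = db.execute(
    "SELECT content, bm25(tool_output) FROM tool_output "
    "WHERE tool_output MATCH ? ORDER BY bm25(tool_output)",
    ("fixing AND linting",),
).fetchall()
print(rows[0][0])
```

Because the stemmer runs at both index and query time, morphological variants match without any embedding model, which is what keeps the pipeline purely algorithmic.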

mvkel 2026-02-28 13:51 UTC link
Excited to try this. Is this not in effect a kind of "pre-compaction," deciding ahead of time what's relevant? Are there edge cases where it misses, say, a utility function that the model would only pick up coincidentally when everything is dumped?
nr378 2026-02-28 14:06 UTC link
Nice work.

It strikes me there's more low-hanging fruit in context window management. Backtracking seems like another promising way to avoid context bloat and compaction (i.e., when a model takes a few attempts to do the right thing, prune the failed attempts out of the context once it has succeeded).
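That pruning idea can be sketched as a single pass over the transcript. The message shape (`role`/`ok` fields) is hypothetical, not any real agent API:

```python
# A sketch of transcript pruning: once an attempt succeeds, drop the failed
# tool results that preceded it, keeping everything else in order.
def prune_failed_attempts(messages):
    kept = []
    for msg in messages:
        if msg["role"] == "tool_result" and msg["ok"]:
            # Success: earlier failed attempts are now just noise.
            kept = [m for m in kept if m["role"] != "tool_result" or m["ok"]]
        kept.append(msg)
    return kept

history = [
    {"role": "user", "text": "fix the lint error"},
    {"role": "tool_result", "ok": False, "text": "E501 line too long"},
    {"role": "tool_result", "ok": False, "text": "E501 line too long"},
    {"role": "tool_result", "ok": True, "text": "lint passed"},
]
pruned = prune_failed_attempts(history)
print(len(pruned))  # the user message plus the one passing result
```

The hard part in practice is deciding what counts as a "success" signal per tool; lint and compile steps have clear exit codes, open-ended edits do not.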

agrippanux 2026-02-28 16:57 UTC link
I am a happy user of this and have recommended my team also install it. It’s made a sizable reduction in my token use.
unxmaal 2026-02-28 18:21 UTC link
I did this accidentally while porting Go to IRIX: https://github.com/unxmaal/mogrix/blob/main/tools/knowledge-...
ZeroGravitas 2026-02-28 18:42 UTC link
I've seen a few projects like this. Shouldn't they in theory make the llms "smarter" by not polluting the context? Have any benchmarks shown this effect?
giancarlostoro 2026-02-28 18:55 UTC link
This sounds a little bit like rtk, which trims output from other CLI applications like git, find, and the most common tools used by Claude. This looks like it goes a little further, which is interesting.

I see some of these AI companies adopting some of these ideas sooner or later. Trim the tokens locally to save on token usage.

https://github.com/rtk-ai/rtk

buremba 2026-02-28 19:00 UTC link
AFAIK Claude Code doesn't inject all the MCP output into the context. It limits output to 25k tokens and uses bash pipe operators to read the full output. That's at least what I see in the latest version.
specialp 2026-02-28 19:06 UTC link
Do you need 80+ tools in context? Even if they're reduced, why not use subagents for areas of focus? Context is gold, and the more you put into it that's unrelated to the problem at hand, the worse your outcome is, even if you don't hit the limit of the window. It would be like compressing data to fit within a string limit rather than just chunking the data.
esafak 2026-02-28 20:55 UTC link
If this breaks the cache it is penny wise, pound foolish; cached full queries have more information and are cheap. The article does not mention caching; does anyone know?

I just enable fat MCP servers as needed, and try to use skills instead.

andai 2026-02-28 21:33 UTC link
This article's specific brand of AI writing reminded me of Kevin's Small Talk

https://www.youtube.com/watch?v=bctjSvn-OC8

killingtime74 2026-03-01 01:45 UTC link
Thanks for this. I do most of my work in subagents for better parallelization. Is it possible to have it work there? Currently the stats say subagents didn't benefit from it.
blakec 2026-03-01 04:43 UTC link
The FTS5 index approach here is right, but I'd push further: pure BM25 underperforms on tool outputs because they're a mix of structured data (JSON, tables, config) and natural language (comments, error messages, docstrings). Keyword matching falls apart on the structured half.

I built a hybrid retriever for a similar problem, compressing a 15,800-file Obsidian vault into a searchable index for Claude Code. Stack is Model2Vec (potion-base-8M, 256-dimensional embeddings) + sqlite-vec for vector search + FTS5 for BM25, combined via Reciprocal Rank Fusion. The database is 49,746 chunks in 83MB. RRF is the important piece: it merges ranked lists from both retrieval methods without needing score calibration, so you get BM25's exact-match precision on identifiers and function names plus vector search's semantic matching on descriptions and error context.

The incremental indexing matters too. If you're indexing tool outputs per-session, the corpus grows fast. My indexer has a --incremental flag that hashes content and only re-embeds changed chunks. Full reindex of 15,800 files takes ~4 minutes; incremental on a typical day's changes is under 10 seconds.

On the caching question raised upthread: this approach actually helps prompt caching because the compressed output is deterministic for the same query. The raw tool output would be different every time (timestamps, ordering), but the retrieved summary is stable if the underlying data hasn't changed.

One thing I'd add to Context Mode's architecture: the same retriever could run as a PostToolUse hook, compressing outputs before they enter the conversation. That way it's transparent to the agent, it never sees the raw dump, just the relevant subset.
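Reciprocal Rank Fusion itself is small enough to show inline. The document IDs below are made up, and k=60 is the conventional constant from the original RRF paper:

```python
# Reciprocal Rank Fusion: merge ranked lists from BM25 and vector search
# without calibrating their scores. Each document's fused score is the sum
# of 1/(k + rank) over every list it appears in.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc_a", "doc_b", "doc_c"]    # exact-match hits on identifiers
vector_hits = ["doc_c", "doc_a", "doc_d"]  # semantic hits on descriptions

fused = rrf([bm25_hits, vector_hits])
print(fused)  # doc_a first: 1/61 + 1/62 edges out doc_c's 1/61 + 1/63
```

Because only ranks are used, the BM25 scores (negative, unbounded) and cosine similarities (0 to 1) never need to be put on a common scale, which is exactly why RRF needs no calibration step.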

BeetleB 2026-03-01 04:44 UTC link
> With 81+ tools active,

I see your problem.

agentifysh 2026-03-01 07:10 UTC link
Interesting... this should work with Codex too, right?
vishalw007 2026-03-01 07:13 UTC link
As a newbie user who doesn't understand much of this but has Claude Pro and wants to use it:

1. Can this help me? 2. How?

Thanks for sharing and building this.

hereme888 2026-03-01 07:21 UTC link
The hooks seem too aggressive. Blocking all curl/wget/WebFetch and funneling everything through the sandbox for 56 KB snapshots sounds great, but not for curl api.example.com/health returning 200 bytes.

Compressing 153 git commits to 107 bytes means the LLM has to write the perfect extraction script before it can see the data. So if it writes a `git log --oneline | wc -l` when you needed specific commit messages, that information is gone.

The benchmarks assume the model always writes the right summarization code, which in practice it doesn't.

jonnycoder 2026-02-28 16:27 UTC link
It feels like the late 1990s all over again, but instead of html and sql, it’s coding agents. This time around, a lot of us are well experienced at software engineering and so we can find optimizations simply by using claude code all day long. We get an idea, we work with ai to help create a detailed design and then let it develop it for us.
elephanlemon 2026-02-28 16:55 UTC link
Agree. I’d like more fine-grained control of context and compaction. If you spend time debugging in the middle of a session, once you’ve fixed the bugs you ought to be able to remove everything related to fixing them from context and continue as you were before you encountered them. (Right now, depending on your IDE, this can be quite annoying to do manually. And I’m not aware of any that let you snip it out if you’ve worked with the agent on other tasks afterwards.)

I think agents should manage their own context too. For example, if you’re working with a tool that dumps a lot of logged information into context, those logs should get pruned out after one or two more prompts.

Context should be thought of something that can be freely manipulated, rather than a stack that can only have things appended or removed from the end.

ip26 2026-02-28 17:14 UTC link
Maybe the right answer is “why not both”, but subagents can also be used for that problem. That is, when something isn’t going as expected, fork a subagent to solve the problem and return with the answer.

It’s interesting to imagine a single model deciding to wipe its own memory though, and roll back in time to a past version of itself (only, with the answer to a vexing problem)

re5i5tor 2026-02-28 17:16 UTC link
Really intrigued and def will try, thanks for this.

In connecting the dots (and help me make sure I'm connecting them correctly), context-mode _does not address MCP context usage at all_, correct? You are instead suggesting we refactor or eliminate MCP tools, or apply concepts similar to context_mode in our MCPs where possible?

Context-mode is still very high value, even if the answer is "no," just want to make sure I understand. Also interested in your thoughts about the above.

I write a number of MCPs that work across all Claude surfaces; so the usual "CLI!" isn't as viable an answer (though with code execution it sometimes can be) ...

Edit: typo

esafak 2026-02-28 20:59 UTC link
Does your technique break the cache? edit: Thanks.
mksglu 2026-02-28 21:05 UTC link
Totally agree. Failed attempts are just noise once the right path is found. Auto-detecting retry patterns and pruning them down to the final working version feels very doable, especially for clear cases like lint or compilation fixes.
mksglu 2026-02-28 21:12 UTC link
Haven't looked at rtk closely but from the description it sounds like it works at the CLI output level, trimming stdout before it reaches the model. Context-mode goes a bit further since it also indexes the full output into a searchable FTS5 database, so the model can query specific parts later instead of just losing them. It's less about trimming and more about replacing a raw dump with a summary plus on-demand retrieval.
mksglu 2026-02-28 21:12 UTC link
That's true, Claude Code does truncate large outputs now. But 25k tokens is still a lot, especially when you're running multiple tools back to back. Three or four Playwright snapshots or a batch of GitHub issues and you've burned 100k tokens on raw data you only needed a few lines from. Context-mode typically brings that down to 1-2k per call while keeping the full output searchable if you need it later.
mksglu 2026-02-28 21:14 UTC link
Nice approach. Same core idea as context-mode but specialized for your build domain. You're using SQLite as a structured knowledge cache over YAML rule files with keyword lookup. Context-mode does something similar but domain-agnostic, using FTS5 with BM25 ranking so any tool output becomes searchable without needing predefined schemas. Cool to see the pattern emerge independently from a completely different use case.
mksglu 2026-02-28 21:14 UTC link
It doesn't break the cache. The raw data never enters the conversation history, so there's nothing to invalidate. A short summary goes into context instead of the full payload, and the model can search the full data from a local FTS5 index if it needs specifics later. Cache stays intact because you're just appending smaller messages to the conversation.
mksglu 2026-02-28 21:16 UTC link
That's a fair point and honestly the ideal approach. But in practice most people don't hand-curate their MCP server list per task. They install 5-6 servers and suddenly have 80 tools loaded by default. Context-mode doesn't solve the tool definition bloat, that's the input side problem. It handles the output side, when those tools actually run and dump data back. Even with a focused set of tools, a single Playwright snapshot or git log can burn 50k tokens. That's what gets sandboxed.
mksglu 2026-02-28 21:17 UTC link
That's the theory and it does hold up in practice. When context is 70% raw logs and snapshots, the model starts losing track of the actual task. We haven't run formal benchmarks on answer quality yet, mostly focused on measuring token savings. But anecdotally the biggest win is sessions lasting longer before compaction kicks in, which means the model keeps its full conversation history and makes fewer mistakes from lost context.
mksglu 2026-02-28 21:17 UTC link
Yeah it's basically pre-compaction, you're right. The key difference is nothing gets thrown away. The full output sits in a searchable FTS5 index, so if the model realizes it needs some detail it missed in the summary, it can search for it. It's less "decide what's relevant upfront" and more "give me the summary now, let me come back for specifics later."
mksglu 2026-02-28 21:17 UTC link
Thanks, really appreciate hearing that! Glad it's working well for your team.
RyanShook 2026-02-28 21:58 UTC link
I’m also trying to see which one makes more sense. Discussion about rtk started today: https://news.ycombinator.com/item?id=47189599
IncreasePosts 2026-03-01 00:46 UTC link
I do this with my agents. Basically, every "work"-oriented call spawns a subprocess which does not add anything to the parent context window. When the subprocess completes the task, I ask it to 1) provide a complete answer, 2) provide a succinct explanation of how the answer was arrived at, 3) provide a succinct explanation of any attempts which did not work, and 4) note anything learned during the process which may be useful in the future. Then I feed those four answers back to the parent as if they were magically arrived at.

Another thing I do for managing the context window: any tool/MCP call has its output piped into a file. The LLM can then read only parts of the file and add just those to its context if they're sufficient. For example, if a command produces a lot of output and ultimately ends in "Success!", the LLM can just tail the last line to see if it succeeded. If it did, the rest of the output doesn't need to be read; if it fails, the failure message is usually at the end of the log.

Something I'm working on now is having a smaller local model summarize the log output and feed that summarization to the more powerful LLM (because I can run my local model for ~free, but it is nowhere near as capable as the cloud models). I don't keep up with SOTA, so I have no idea whether what I'm doing is well known, but it works for me and my setup.
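The pipe-to-file-and-tail pattern described above can be sketched as follows; the spooled command is a stand-in for a real tool call:

```python
import os
import subprocess
import sys
import tempfile

# Spool a noisy command's output to a file instead of the context window,
# then read only the tail to check for success.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as log:
    subprocess.run(
        [sys.executable, "-c", "print('step\\n' * 1000 + 'Success!')"],
        stdout=log,
        check=True,
    )

with open(log.name) as f:
    tail = f.readlines()[-1].strip()  # only this line needs to enter context

os.unlink(log.name)
print(tail)
```

Only the final line ever reaches the model; the full log stays on disk in case a failure needs deeper inspection.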
nitinreddy88 2026-03-01 01:09 UTC link
Any reason why it doesn't support Codex? I believe the idea and implementation seems to be pretty much agent independent
lkbm 2026-03-01 03:31 UTC link
Small suggestion: link to the Cloudflare Code Mode post[0] in the blog post where you mention it. It's linked in the README, but when I saw it in the blog post, I had to Google it.

[0] https://blog.cloudflare.com/code-mode-mcp/

nextaccountic 2026-03-01 03:32 UTC link
Can this be used with other agents? I'm looking specifically into the Zed Agent
monkpit 2026-03-01 05:13 UTC link
“you’re holding it wrong” - ok, or we could make it better
Editorial Channel
What the content says
+0.30
Article 27 Cultural Participation
High Advocacy Practice
Editorial
+0.30
SETL
+0.21

Strong advocacy for open source (MIT license) and participation in scientific advancement

+0.20
Article 12 Privacy
Medium Advocacy Framing
Editorial
+0.20
SETL
+0.24

Positively frames sandboxing that isolates data and prevents raw data exposure

+0.15
Article 19 Freedom of Expression
Medium Advocacy Practice
Editorial
+0.15
SETL
+0.12

Promotes open source tool sharing and technical knowledge dissemination

+0.15
Article 26 Education
Medium Advocacy
Editorial
+0.15
SETL
ND

Promotes technical education and knowledge sharing through open source

+0.10
Preamble Preamble
Low Advocacy
Editorial
+0.10
SETL
ND

Promotes human dignity through tool that empowers developers with better access to technology

+0.10
Article 23 Work & Equal Pay
Low Advocacy
Editorial
+0.10
SETL
ND

Indirectly supports work efficiency and technical skill development

ND
Article 1 Freedom, Equality, Brotherhood

No direct engagement with dignity, equality, or brotherhood concepts

ND
Article 2 Non-Discrimination

No mention of discrimination, race, gender, religion, etc.

ND
Article 3 Life, Liberty, Security

No mention of life, liberty, or security of person

ND
Article 4 No Slavery

No mention of slavery or servitude

ND
Article 5 No Torture

No mention of torture, punishment, or cruel treatment

ND
Article 6 Legal Personhood

No mention of legal recognition or personhood

ND
Article 7 Equality Before Law

No mention of equal protection or discrimination

ND
Article 8 Right to Remedy

No mention of judicial remedies or rights enforcement

ND
Article 9 No Arbitrary Detention

No mention of arbitrary detention or arrest

ND
Article 10 Fair Hearing

No mention of fair trials or impartial tribunals

ND
Article 11 Presumption of Innocence

No mention of presumption of innocence or criminal law

ND
Article 13 Freedom of Movement

No mention of freedom of movement or residence

ND
Article 14 Asylum

No mention of asylum or persecution

ND
Article 15 Nationality

No mention of nationality or statelessness

ND
Article 16 Marriage & Family

No mention of marriage or family

ND
Article 17 Property

No mention of property rights

ND
Article 18 Freedom of Thought

No mention of religion, conscience, or belief

ND
Article 20 Assembly & Association

No mention of assembly or association

ND
Article 21 Political Participation

No mention of participation in government or elections

ND
Article 22 Social Security

No mention of social security or economic rights

ND
Article 24 Rest & Leisure

No mention of rest, leisure, or working hours

ND
Article 25 Standard of Living

No mention of standard of living, health, or welfare

ND
Article 28 Social & International Order

No mention of social order or rights realization

ND
Article 29 Duties to Community

No mention of duties, community, or limitations

ND
Article 30 No Destruction of Rights

No mention of rights destruction or interpretation

Structural Channel
What the site does
Element Modifier Affects Note
Legal & Terms
Privacy
No privacy policy or data handling information found on page
Terms of Service
No terms of service found on page
Identity & Mission
Mission
No explicit mission statement found on page
Editorial Code
No editorial code or standards found on page
Ownership
Page is personal blog/portfolio of Mert Köseoğlu
Access & Distribution
Access Model +0.15
Article 27
Content promotes free open source tool (MIT license) and provides installation instructions
Ad/Tracking -0.05
Article 12
Google Analytics (gtag) tracking present without user consent mechanism
Accessibility
No observable accessibility features mentioned on page
+0.15
Article 27 Cultural Participation
High Advocacy Practice
Structural
+0.15
Context Modifier
+0.15
SETL
+0.21

Provides free tool, installation instructions, and GitHub repository

+0.05
Article 19 Freedom of Expression
Medium Advocacy Practice
Structural
+0.05
Context Modifier
0.00
SETL
+0.12

Blog format enables expression and sharing of technical ideas

-0.10
Article 12 Privacy
Medium Advocacy Framing
Structural
-0.10
Context Modifier
-0.05
SETL
+0.24

Google Analytics tracking present without user consent mechanism

ND
Preamble Preamble
Low Advocacy

No observable structural engagement with preamble concepts

ND
Article 1 Freedom, Equality, Brotherhood

No structural elements addressing dignity or equality

ND
Article 2 Non-Discrimination

No observable structural barriers or inclusive design elements

ND
Article 3 Life, Liberty, Security

No security or safety features discussed

ND
Article 4 No Slavery

No relevant structural elements

ND
Article 5 No Torture

No relevant structural elements

ND
Article 6 Legal Personhood

No relevant structural elements

ND
Article 7 Equality Before Law

No relevant structural elements

ND
Article 8 Right to Remedy

No relevant structural elements

ND
Article 9 No Arbitrary Detention

No relevant structural elements

ND
Article 10 Fair Hearing

No relevant structural elements

ND
Article 11 Presumption of Innocence

No relevant structural elements

ND
Article 13 Freedom of Movement

No relevant structural elements

ND
Article 14 Asylum

No relevant structural elements

ND
Article 15 Nationality

No relevant structural elements

ND
Article 16 Marriage & Family

No relevant structural elements

ND
Article 17 Property

No relevant structural elements

ND
Article 18 Freedom of Thought

No relevant structural elements

ND
Article 20 Assembly & Association

No structural support for association or assembly

ND
Article 21 Political Participation

No relevant structural elements

ND
Article 22 Social Security

No relevant structural elements

ND
Article 23 Work & Equal Pay
Low Advocacy

No structural support for labor rights or conditions

ND
Article 24 Rest & Leisure

No relevant structural elements

ND
Article 25 Standard of Living

No relevant structural elements

ND
Article 26 Education
Medium Advocacy

No structural educational elements

ND
Article 28 Social & International Order

No relevant structural elements

ND
Article 29 Duties to Community

No relevant structural elements

ND
Article 30 No Destruction of Rights

No relevant structural elements

Supplementary Signals
How this content communicates, beyond directional lean. Learn more
Epistemic Quality
How well-sourced and evidence-based is this content?
0.84 medium claims
Sources
0.9
Evidence
0.8
Uncertainty
0.7
Purpose
1.0
Propaganda Flags
No manipulative rhetoric detected
0 techniques detected
Emotional Tone
Emotional character: positive/negative, intensity, authority
measured
Valence
+0.4
Arousal
0.5
Dominance
0.7
Transparency
Does the content identify its author and disclose interests?
1.00
✓ Author
More signals: context, framing & audience
Solution Orientation
Does this content offer solutions or only describe problems?
0.96 solution oriented
Reader Agency
0.9
Stakeholder Voice
Whose perspectives are represented in this content?
0.50 2 perspectives
Speaks: individuals
About: corporation
Temporal Framing
Is this content looking backward, at the present, or forward?
present medium term
Geographic Scope
What geographic area does this content cover?
global
Complexity
How accessible is this content to a general audience?
technical high jargon domain specific
Longitudinal 514 HN snapshots · 62 evals
+1 0 −1 HN
Audit Trail 82 entries
2026-03-01 08:29 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 08:29 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 08:02 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 08:02 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 07:39 eval_success Evaluated: Mild positive (0.17) - -
2026-03-01 07:38 eval Evaluated by deepseek-v3.2: +0.17 (Mild positive) 14,580 tokens
2026-03-01 07:35 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 07:35 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 07:06 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 07:06 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 06:49 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 06:49 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 06:17 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 06:17 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 06:07 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 06:07 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 05:33 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 05:33 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 05:27 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 05:27 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 05:01 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 05:01 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 04:46 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 04:46 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 04:41 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 04:41 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 04:10 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 04:10 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 04:04 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 04:04 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 03:21 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 03:21 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 03:16 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 03:16 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 02:49 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 02:49 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 02:44 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 02:44 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 02:44 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-01 02:44 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 02:39 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 01:55 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 01:51 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 01:20 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 01:09 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 00:25 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-03-01 00:22 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-03-01 00:20 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 23:36 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 23:34 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 22:39 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 22:36 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 22:32 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 21:59 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 21:53 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 21:50 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 21:44 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 21:07 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 21:03 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 20:16 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 20:11 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 20:11 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 19:25 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 19:23 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 18:43 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 18:42 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 18:37 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 18:18 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 18:13 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 18:12 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 17:50 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 17:47 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 17:45 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 17:42 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 17:21 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 17:15 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 16:54 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 16:50 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED technical article on optimizing AI context usage
2026-02-28 16:49 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Tech blog with no rights stance
2026-02-28 16:18 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral)
reasoning
ED technical article on optimizing AI context usage
2026-02-28 16:17 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral)
reasoning
Tech blog with no rights stance
2026-02-28 13:54 eval Evaluated by claude-haiku-4-5-20251001: +0.19 (Mild positive)