Model Comparison
Model | Editorial | Structural | Class | Conf | SETL | Theme
claude-haiku-4-5 (lite) | 0.00 | ND | Neutral | 0.95 | 0.00 | Technology optimization
deepseek/deepseek-v3.2-20251201 | +0.15 | +0.10 | Mild positive | 0.01 | 0.07 | Technology & Information
@cf/meta/llama-4-scout-17b-16e-instruct (lite) | 0.00 | ND | Neutral | 1.00 | 0.00 | ND
Section | claude-haiku-4-5 (lite) | deepseek/deepseek-v3.2-20251201 | @cf/meta/llama-4-scout-17b-16e-instruct (lite)
Preamble | ND | ND | ND
Article 1 | ND | ND | ND
Article 2 | ND | ND | ND
Article 3 | ND | ND | ND
Article 4 | ND | ND | ND
Article 5 | ND | ND | ND
Article 6 | ND | ND | ND
Article 7 | ND | ND | ND
Article 8 | ND | ND | ND
Article 9 | ND | ND | ND
Article 10 | ND | ND | ND
Article 11 | ND | ND | ND
Article 12 | ND | ND | ND
Article 13 | ND | ND | ND
Article 14 | ND | ND | ND
Article 15 | ND | ND | ND
Article 16 | ND | ND | ND
Article 17 | ND | ND | ND
Article 18 | ND | ND | ND
Article 19 | ND | 0.15 | ND
Article 20 | ND | ND | ND
Article 21 | ND | ND | ND
Article 22 | ND | ND | ND
Article 23 | ND | ND | ND
Article 24 | ND | ND | ND
Article 25 | ND | ND | ND
Article 26 | ND | ND | ND
Article 27 | ND | 0.10 | ND
Article 28 | ND | ND | ND
Article 29 | ND | ND | ND
Article 30 | ND | ND | ND
LLaMA now goes faster on CPUs (justine.lol) | 0.00
1372 points by lawrencechen | 697 days ago | 451 comments on HN | Neutral | lite | vlight-1.3
Summary (lite) | Technology optimization | Neutral
Technical article on CPU matrix-multiplication kernel optimization.
EQ 0.00 | SO 0.00 | TD 0.00
Light evaluation by claude-haiku-4-5 · editorial channel only · no per-section breakdown available
Audit Trail (23 entries)
2026-02-28 01:41 | dlq | Dead-lettered after 1 attempts: LLaMA now goes faster on CPUs
2026-02-28 01:38 | rate_limit | OpenRouter rate limited (429) model=llama-3.3-70b
2026-02-28 01:37 | rate_limit | OpenRouter rate limited (429) model=llama-3.3-70b
2026-02-28 01:36 | dlq_replay | DLQ message 97690 replayed to LLAMA_QUEUE: LLaMA now goes faster on CPUs
2026-02-28 00:42 | dlq | Dead-lettered after 1 attempts: LLaMA now goes faster on CPUs
2026-02-28 00:42 | eval_failure | Evaluation failed: AiError: 3030: This model's maximum context length is 24000 tokens. However, you requested 31067 tokens (14683 in the messages, 16384 in the completion). Please reduce the length of the messages or co
2026-02-28 00:42 | eval_failure | Evaluation failed: AiError: 3030: This model's maximum context length is 24000 tokens. However, you requested 31067 tokens (14683 in the messages, 16384 in the completion). Please reduce the length of the messages or co
2026-02-28 00:27 | dlq | Dead-lettered after 1 attempts: LLaMA now goes faster on CPUs
2026-02-28 00:27 | eval_failure | Evaluation failed: AiError: 3030: This model's maximum context length is 24000 tokens. However, you requested 31067 tokens (14683 in the messages, 16384 in the completion). Please reduce the length of the messages or co
2026-02-28 00:27 | eval_failure | Evaluation failed: AiError: 3030: This model's maximum context length is 24000 tokens. However, you requested 31067 tokens (14683 in the messages, 16384 in the completion). Please reduce the length of the messages or co
2026-02-28 00:16 | dlq | Dead-lettered after 1 attempts: LLaMA now goes faster on CPUs
2026-02-28 00:16 | eval_failure | Evaluation failed: AiError: 3030: This model's maximum context length is 24000 tokens. However, you requested 31067 tokens (14683 in the messages, 16384 in the completion). Please reduce the length of the messages or co
2026-02-28 00:16 | eval_failure | Evaluation failed: AiError: 3030: This model's maximum context length is 24000 tokens. However, you requested 31067 tokens (14683 in the messages, 16384 in the completion). Please reduce the length of the messages or co
2026-02-28 00:03 | dlq | Dead-lettered after 1 attempts: LLaMA now goes faster on CPUs
2026-02-28 00:03 | eval_failure | Evaluation failed: AiError: 3030: This model's maximum context length is 24000 tokens. However, you requested 31067 tokens (14683 in the messages, 16384 in the completion). Please reduce the length of the messages or co
2026-02-28 00:03 | eval_failure | Evaluation failed: AiError: 3030: This model's maximum context length is 24000 tokens. However, you requested 31067 tokens (14683 in the messages, 16384 in the completion). Please reduce the length of the messages or co
2026-02-27 22:45 | eval_success | Evaluated: Mild positive (0.13)
2026-02-27 22:45 | eval | Evaluated by deepseek-v3.2: +0.13 (Mild positive) | 21,669 tokens
2026-02-27 22:25 | dlq | Dead-lettered after 1 attempts: LLaMA now goes faster on CPUs
2026-02-27 22:23 | rate_limit | OpenRouter rate limited (429) model=llama-3.3-70b
2026-02-27 22:22 | rate_limit | OpenRouter rate limited (429) model=llama-3.3-70b
2026-02-27 22:21 | eval | Evaluated by llama-4-scout-wai: 0.00 (Neutral)
2026-02-27 22:08 | eval | Evaluated by claude-haiku-4-5: 0.00 (Neutral)
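The repeated AiError 3030 entries above all fail the same way: the request asks for 14,683 prompt tokens plus a fixed 16,384-token completion budget, 31,067 tokens in total, against a 24,000-token context window, and replaying the message unchanged (as at 00:03, 00:16, 00:27, and 00:42) reproduces the same failure. A minimal sketch of one way to break that loop is to clamp the completion budget to whatever headroom the prompt leaves; the constant names and the completionBudget helper below are illustrative assumptions, not this pipeline's actual API.

```ts
// Illustrative sketch only: CONTEXT_WINDOW, PREFERRED_COMPLETION, MIN_COMPLETION and
// completionBudget are assumed names, not the evaluation pipeline's real code.
// It reproduces the arithmetic behind AiError 3030:
// 14,683 prompt tokens + 16,384 completion tokens = 31,067 > 24,000.

const CONTEXT_WINDOW = 24_000;        // model limit quoted in the error message
const PREFERRED_COMPLETION = 16_384;  // completion budget the failing requests asked for
const MIN_COMPLETION = 512;           // hypothetical floor below which the prompt must be trimmed instead

function completionBudget(promptTokens: number): number | null {
  const headroom = CONTEXT_WINDOW - promptTokens;
  if (headroom < MIN_COMPLETION) return null;       // not enough room: shorten the prompt rather than retry as-is
  return Math.min(PREFERRED_COMPLETION, headroom);  // otherwise clamp the completion to fit the window
}

// With the numbers from the audit trail: completionBudget(14_683) === 9_317,
// so the request would total exactly 24,000 tokens and fit, instead of being
// dead-lettered and replayed with the same oversized parameters.
```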