896 points by jart 513 days ago | 345 comments on HN
> The reason why Cosmopolitan Mutexes are so good is because I used a library called nsync. It only has 371 stars on GitHub, but it was written by a distinguished engineer at Google called Mike Burrows.
Indeed this is the first time I've heard of nsync, but Mike Burrows also wrote Google's production mutex implementation at https://github.com/abseil/abseil-cpp/blob/master/absl/synchr... I'm curious why this mutex implementation is absent from the author's benchmarks.
By the way, if the author farms out to __ulock on macOS, this could be achieved more simply by just using the wait() and notify_one() member functions in libc++'s atomic library.
A while ago there was also a giant thread related to improving Rust's mutex implementation at https://github.com/rust-lang/rust/issues/93740#issuecomment-.... What's interesting is that there's a detailed discussion of the inner workings of almost every popular mutex implementation.
I have to admit that I have an extremely visceral, negative feeling whenever I see a mutex, simply because I've had to debug enough code written by engineers who don't really know how to use them; a large part of my previous jobs has been removing locks from code and replacing them with some kind of queue or messaging abstraction [1].
It's only recently that I've been actively looking into different locking algorithms, just because I've been diving head-first into a lot of pure concurrency and distributed computing theory, much of which is about figuring out clever ways of doing mutexes with different tradeoffs.
I've gotten a lot better with them now, and while I still personally will gravitate towards messaging-based concurrency over locks, I do feel the need to start playing with some of the more efficient locking tools in C, like nsync (mentioned in this article).
[1] Before you give me shit over this, generally the replacement code runs at roughly the same speed, and I at least personally think that it's easier to reason about.
Hrm. Big fan of Justine and their work. However this is probably the least interesting benchmark test case for a Mutex. You should never have a bunch of threads constantly spamming the same mutex. So which mutex implementation best handles this case isn’t particularly interesting, imho.
Always cool to see new mutex implementations and shootouts between them, but I don’t like how this one is benchmarked. Looks like a microbenchmark.
Most of us who ship fast locks use very large multithreaded programs as our primary way of testing performance. The things that make a mutex fast or slow seem to be different for complex workloads with varied critical section length, varied numbers of threads contending, and varying levels of contention.
(Source: I wrote the fast locks that WebKit uses, I’m the person who invented the ParkingLot abstraction for lock impls (now also used in Rust and Unreal Engine), and I previously did research on fast locks for Java and have a paper about that.)
> It's still a new C library and it's a little rough around the edges. But it's getting so good, so fast, that I'm starting to view not using it in production as an abandonment of professional responsibility.
What an odd statement. I appreciate the Cosmopolitan project, but these exaggerated claims of superiority are usually a pretty bad red flag.
> In 2012, Tunney started working for Google as a software engineer.[4] In March 2014, Tunney petitioned the US government on We the People to hold a referendum asking for support to retire all government employees with full pensions, transfer administrative authority to the technology industry, and appoint the executive chairman of Google Eric Schmidt as CEO of America.
If it's so good, why haven't all C libraries adopted the same tricks?
My betting is that its tricks are only always-faster for certain architectures, or certain CPU models, or certain types of workload / access patterns... and a proper benchmarking of varied workloads on all supported hardware would not show the same benefits.
Alternatively, maybe the semantics of the pthread API (that cosmopolitan is meant to be implementing) are somehow subtly different and this implementation isn't strictly compliant to the spec?
I can't imagine it's that the various libc authors aren't keeping up in state-of-the-art research on OS primitives...
So on the one hand, all this Cosmo/ape/redbean stuff sounds incredible, and the comments on these articles are usually pretty positive and don’t generally debunk the concepts. But on the other hand, I never hear mention of anyone else using these things (I get that not everyone shares what they’re doing in a big way, but after so many years I’d expect to have seen a couple project writeups talk about them). Every mention of Cosmo/ape/redbean I’ve ever seen is from Justine’s site.
So I’ve gotta ask: Is there a catch? Are these tools doing something evil to achieve what they’re achieving? Is the whole thing a tom7-esque joke/troll that I don’t get because I’m not as deep into compilers/runtimes? Or are these really just ingenious tools that haven’t caught on yet?
> I've even had the opportunity to make upstream contributions. For example, I found and fixed a bug in his mutex unlock function that had gone undiscovered for years.
I see a stream of improvements to the vendored in nsync inside the cosmopolitan project [1]. Are you planning on upstreaming most of those too?
A separate question -- is using the upstream nsync as safe as using your fork?
I had the pleasure of reverse-engineering win32 SRWLOCKs, and based on the author's description of nsync, it is very close to how SRWLOCK works internally. I'm kind of surprised how much faster nsync is compared to SRWLOCK.
Production isn't about speed, efficiency, or obviously "clever hacks."
If I have to sacrifice 50% of my efficiency to ensure that I never get called on Sunday at 3am to fix a broken system, no kidding, I'll make that trade every time.
Production is about _reliability_. And writing reliable code is 10x harder than writing "fast" code.
Threads and mutexes are among the most complicated things in computer science. I am always skeptical of new implementations until they've been used for several years at scale. Bugs in these threading mechanisms often elude even the most intense scrutiny. When Java hit the scene in the mid 90s, it exposed all manner of thread and mutex bugs in Solaris. I don't want the fastest mutex implementation - I want a reliable one.
As a gamedev I came to love slow mutexes that do a lot of debug things in all 'developer' builds: they have debug names/IDs, track owners, report time spent in contention to the profiler, report ownership changes to the profiler...
People tend to structure concurrency differently, and games have arrived at patterns for avoiding locks. But those are hard to use and require the programmer to restructure things. Most code starts as 'let's slap a lock here and try to pass the milestone'. Even fast locks will be unpredictably slow and will destroy realtime guarantees if there were any. They can be fast on average, but the tail end is never going away. I don't want to be that guy who comes back to it chasing 'oh our game has hitches', but I am usually that guy.
Use slow locks, people. The ones that show up big red in the profiler. Refactor them out when you see them being hit.
I know it's a tall order. On a AAA production I can count the people who know how to use a profiler on the fingers of one hand. And it's always like that, no matter how many productions I see :)
ps. sorry for a rant. Please, continue good research into fast concurrency primitives and algorithms.
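The "slow debug mutex" idea above can be sketched as a thin wrapper that names the lock and records time spent blocked; the class name, reporting format, and fast-path choice here are invented for illustration, not taken from any shipping engine:

```cpp
#include <chrono>
#include <cstdio>
#include <mutex>
#include <string>
#include <utility>

// Wraps std::mutex with a debug name and contention accounting.
class DebugMutex {
public:
    explicit DebugMutex(std::string name) : name_(std::move(name)) {}

    void lock() {
        using clock = std::chrono::steady_clock;
        if (inner_.try_lock()) return;           // fast path: no contention
        auto start = clock::now();
        inner_.lock();                           // slow path: we actually waited
        auto waited_us = std::chrono::duration_cast<std::chrono::microseconds>(
            clock::now() - start).count();
        contended_us_ += waited_us;              // safe: we hold the lock here
        std::fprintf(stderr, "[lock %s] waited %lld us\n",
                     name_.c_str(), static_cast<long long>(waited_us));
    }
    void unlock() { inner_.unlock(); }
    long long contended_us() const { return contended_us_; }

private:
    std::mutex inner_;
    std::string name_;
    long long contended_us_ = 0;
};
```

A real engine version would hand the numbers to the profiler instead of stderr, and track the owning thread ID as well.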
I don't get it. Aren't most of these nsync tricks also implemented in glibc mutexes? The only thing in that list that's new to me, is long waiting. But even then I don't get it: futexes are already supposed to handle contended waiting.
If anyone's interested in this general subject matter, a while back I did some academic research on highly scalable locks where we came up with some very high performance reader-writer locks:
> It's still a new C library and it's a little rough around the edges. But it's getting so good, so fast, that I'm starting to view not using it in production as an abandonment of professional responsibility.
This code benchmarks mutex contention, not mutex lock performance. If you're locking like this, you should reevaluate your code. Each thread locks and unlocks the mutex for every increment of g_chores. This creates an overhead of acquiring and releasing the mutex frequently (100,000 times per thread). This overhead masks the real performance differences between locking mechanisms because the benchmark is dominated by lock contention rather than actual work. Benchmarks such as this one are useless.
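For concreteness, the benchmark shape being criticized is roughly the following; this is a paraphrase using the counter name g_chores from the article, not the article's exact code:

```cpp
#include <mutex>
#include <thread>
#include <vector>

long g_chores = 0;
std::mutex g_lock;

// Every iteration takes and releases the same mutex to do a trivial
// increment, so the measurement is dominated by lock contention rather
// than by any useful work inside the critical section.
long run_contended(int nthreads, int iters) {
    g_chores = 0;
    std::vector<std::thread> threads;
    for (int t = 0; t < nthreads; ++t) {
        threads.emplace_back([iters] {
            for (int i = 0; i < iters; ++i) {
                std::lock_guard<std::mutex> guard(g_lock);
                ++g_chores;  // almost no work while holding the lock
            }
        });
    }
    for (auto& th : threads) th.join();
    return g_chores;
}
```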
I feel similarly about C's "volatile" (when used in multithreaded code rather than device drivers). I've seen people scatter volatile around randomly until the problem goes away. Given that volatile significantly disturbs the timing of a program, any timing-sensitive bug can be masked by adding it around randomly.
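A minimal illustration of the point: volatile suppresses some compiler optimizations but does not make `++` atomic or order it across threads, whereas std::atomic does. This sketch shows only the correct atomic version (the racy volatile counterpart is left as a comment):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// volatile int racy = 0;  // ++racy from many threads is a data race:
//                         // it compiles to load-modify-store, not an atomic RMW.
std::atomic<int> safe{0};

int count_atomically(int nthreads, int iters) {
    safe = 0;
    std::vector<std::thread> ts;
    for (int t = 0; t < nthreads; ++t)
        ts.emplace_back([iters] {
            for (int i = 0; i < iters; ++i)
                safe.fetch_add(1, std::memory_order_relaxed);  // atomic RMW
        });
    for (auto& th : ts) th.join();
    return safe.load();
}
```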
Although it does get there eventually, that Rust thread is about Mara's work, which is why it eventually mentions her January 2023 book.
The current Rust mutex implementation (which that thread does talk about later) landed earlier this year and although if you're on Linux it's not (much?) different, on Windows and Mac I believe it's new work.
That said, Mara's descriptions of the guts of other people's implementations are still interesting; just make sure to check whether they're outdated for your situation.
What are some examples of people using mutexes wrong? I know one gotcha is you need to maintain a consistent hierarchy. Usually the easiest way to not get snagged by that, is to have critical sections be small and pure. Java's whole MO of letting people add a synchronized keyword to an entire method was probably not the greatest idea.
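One concrete form of the hierarchy gotcha: thread A locks m1 then m2 while thread B locks m2 then m1, and they deadlock. C++17's std::scoped_lock acquires multiple mutexes with a deadlock-avoidance algorithm, so acquisition order stops mattering. The transfer example below is hypothetical:

```cpp
#include <mutex>

std::mutex m1, m2;  // e.g., guarding two accounts
int balance1 = 100, balance2 = 0;

// Locks both mutexes atomically (via std::lock's deadlock-avoidance),
// so another thread calling with the mutexes in the opposite order
// cannot deadlock against us.
void transfer(int amount) {
    std::scoped_lock both(m1, m2);
    balance1 -= amount;
    balance2 += amount;
}
```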
what else would you measure? certainly the uncontended case is important and a baseline, but otherwise this is kind of the weak point for mutexes - that if you don't handle contention well then you have idle hardware, or lots of additional scheduler work, or kernel crossings.
[edit - I forgot to even mention one of the most important things: locks that perform poorly under contention can have really negative systemic effects, like hot-spotting the memory network, and that would show up here too]
> You should never have a bunch of threads constantly spamming the same mutex.
I'm not sure I agree with this assessment. I can think of a few cases where you might end up with a bunch of threads challenging the same mutex.
A simple example would be something like concurrently populating some data structure (list/dict/etc). Yes, you could accomplish this with message passing, but that uses more memory and would be slower than just having everything wait to write to a shared location.
I'd say the vast majority of cases where I use a lock/semaphore is around very expensive resources, where the utilization of that resource vastly outweighs any performance overhead of the lock.
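A minimal sketch of the concurrent-population case described above (an assumed shape for illustration, not anyone's real code): many threads appending to one shared container under a single mutex.

```cpp
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

std::vector<int> results;
std::mutex results_lock;

// Several threads race to append into the same vector; the mutex makes
// each push_back safe, at the cost of all threads contending on one lock.
std::size_t populate(int nthreads, int per_thread) {
    results.clear();
    std::vector<std::thread> ts;
    for (int t = 0; t < nthreads; ++t)
        ts.emplace_back([t, per_thread] {
            for (int i = 0; i < per_thread; ++i) {
                std::lock_guard<std::mutex> guard(results_lock);
                results.push_back(t * per_thread + i);
            }
        });
    for (auto& th : ts) th.join();
    return results.size();
}
```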
I'd like to point out that Justine's claims are usually correct. It's just her shtick (or personality?) to use hyperbole and ego-stroking wording. I can see why some might see it as abrasive (it has caused drama before, namely in llamacpp).
Message passing is just outsourcing the lock, right? For example a Go channel is internally synchronized, nothing magic about it.
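To illustrate the "outsourcing" claim, the core of a Go-style channel can be approximated in a few lines of C++ with a mutex and a condition variable; the lock is still there, just hidden behind the API. This is a simplified, unbounded sketch, not Go's actual implementation:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

template <typename T>
class Channel {
public:
    void send(T v) {
        {
            std::lock_guard<std::mutex> g(mu_);  // the hidden lock
            q_.push_back(std::move(v));
        }
        cv_.notify_one();
    }
    T recv() {
        std::unique_lock<std::mutex> g(mu_);
        cv_.wait(g, [&] { return !q_.empty(); });  // blocks, like <-ch
        T v = std::move(q_.front());
        q_.pop_front();
        return v;
    }
private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::deque<T> q_;
};
```

A real Go channel adds bounded capacity, select support, and closing, but the synchronization core is the same mutex-plus-condvar pattern.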
Most of the mutex tragedies I have seen in my career have been in C, a useless language without effective scopes. In C++ it's pretty easy to use a scoped lock. In fact I'd say I have had more trouble with people who are trying to avoid locks than with people who use them. The avoiders either think their program order is obviously correct (totally wrong on modern CPUs) or that their atomics are faster (wrong again on many CPUs).
> I can't imagine it's that the various libc authors aren't keeping up in state-of-the-art research on OS primitives...
is this sarcasm?
(I don't know any libc maintainers, but as a maintainer of a few thingies myself, I do not try to implement state-of-the-art research, I try to keep my thingies stable and ensure the performance is acceptable; implementing research is out of my budget for "maintenance")
APE works through cunning trickery that might get patched out any day now (and in OpenBSD, it has been).
Most people producing cross-platform software don't want a single executable that runs on every platform, they want a single codebase that works correctly on each platform they support.
With that in mind, languages like Go letting you cross-compile for all your targets (provided you avoid CGO) are delightful... but the 3-ways-executable magic trick of APE, while really clever, doesn't inspire confidence that it'll work forever, and for the most part it doesn't gain you anything. Each platform has its own packaging/signing requirements. You might as well compile a different target for each platform.
Mike was a legend by the time I got to AV. The myth was that any time the search engine needed to be faster, he came in and rewrote a few core functions and went back to whatever else he was doing. Might be true, I just can't verify it personally. Extremely smart engineer who cares about efficiency.
We did not, however, run on one server for any length of time.
To add to this, as the original/lead author of a desktop app that frequently runs with many tens of threads, I'd like to see numbers on performance in non-heavily contended cases. As a real-time (audio) programmer, I am more concerned with (for example) the cost to take the mutex even when it is not already locked (which is the overwhelming situation in our app). Likewise, I want to know the cost of a try-lock operation that will fail, not what happens when N threads are contending.
Of course, with Cosmopolitan being open source and all, I could do these measurements myself, but still ...
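The two measurements the comment asks for (acquiring a free mutex, and a try-lock that fails) are easy to harness with std::mutex. A sketch only; it makes no claims about what cosmo's numbers would be:

```cpp
#include <chrono>
#include <mutex>
#include <thread>

// Average cost of lock/unlock when the mutex is never held by anyone else.
long long avg_uncontended_ns(int iters) {
    std::mutex m;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) { m.lock(); m.unlock(); }
    auto total = std::chrono::duration_cast<std::chrono::nanoseconds>(
        std::chrono::steady_clock::now() - start).count();
    return total / iters;
}

// try_lock on a mutex held by another thread must fail without blocking.
// (The probe runs on a second thread: try_lock on a mutex the *same*
// thread already holds is undefined behavior for std::mutex.)
bool try_lock_on_held_mutex() {
    std::mutex m;
    m.lock();
    bool got = false;
    std::thread t([&] { got = m.try_lock(); });
    t.join();
    m.unlock();
    return got;  // expected: false
}
```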
> remove locks from code and replace with some kind of queue or messaging abstraction
Shared-nothing message passing reflects the underlying (modern) computer architecture more closely, so I'd call the above a good move. Shared memory / symmetric multiprocessing is an abstraction that leaks like a sieve; it no longer reflects how modern computers are built (multiple levels of CPU caches, cores, sockets, NUMA, etc).
I feel that there’s a certain amount of hubris that comes along with spending long periods of time solo-coding on a computer, and perhaps unwittingly starved of social contact. Without any checks on you or your work’s importance (normally provided by your bog-standard “job”), your achievements take on a grandeur that they might not have broadly earned, as impressive as they might be.
An example is APE (which I otherwise feel is a very impressive hack). One criticism might be “oh, so I not only get to be insecure on one platform, I can be insecure on many all at once?”
The longer you spend in technology, the more you realize that there are extremely few win-wins and a very many win-somes, lose-somes (tradeoffs)
If Burrows wants my C11 atomics refactoring then he shall have it. Beyond that, my changes mostly concern libc integration, systems integration, and portability. Our projects have different goals in those areas, so I'm not sure he needs them.
reposting my comment from another time this discussion came up:
"Cosmopolitan has basically always felt like the interesting sort of technical loophole that makes for a fun blog post which is almost guaranteed to make it to the front page of HN (or similar) purely based in ingenuity & dedication to the bit.
as a piece of foundational technology, in the way that `libc` necessarily is, it seems primarily useful for fun toys and small personal projects.
with that context, it always feels a little strange to see it presented as a serious alternative to something like `glibc`, `musl`, or `msvcrt`; it’s a very cute hack, but if i were to find it in something i seriously depend on i think i’d be a little taken aback."
This style of mutex will also power PyMutex in Python 3.13. I have real-world benchmarks showing how much faster PyMutex is than the old PyThread_type_lock that was available before 3.13.
Politics, not-invented-here syndrome, old maintainers.
It takes forever to change something in glibc, or the C++ equivalent.
There are many kinds of synchronization primitives. pthreads only supports a subset. If you are limiting yourself to them, you are most likely leaving performance on the table, but you gain portability.
I am only speaking for myself here. While cosmo and ape do seem very clever, I do not need this type of clever stuff in my work if the ordinary stuff already works fine.
Like for example if I can already cross-compile my project to other OSes and platforms or if I've got the infra to build my project for other OSes and platforms, I've no reason to look for a solution that lets me build one binary that works everywhere.
There's also the thing that ape uses clever hacks to be able to run on multiple OSes. What if those hacks break someday due to how executable formats evolve? What if nobody has the time to patch APE to make it compatible with those changes?
But my boring tools like gcc, clang, go, rust, etc. will continue to get updated and they will continue to work with evolving OSes. So I just tend to stick with the boring. That's why I don't bother with the clever because the boring just works for me.
Mozilla llamafile uses it. Bundles model weights and an executable into a single file, that can be run from any cosmo/ape platform, and spawns a redbean http server for you to interact with the LLM. Can also run it without the integrated weights, and read weights from the filesystem. It's the easiest "get up and go" for local LLMs you could possibly create.
Yeah, that specific benchmark is actually likely to prefer undesirable behaviors, for example pathological unfairness: clearly the optimal scheduling of those threads runs all the increments from the first thread first, then all of the second thread's, etc., because this minimizes inter-processor traffic.
A mutex that sleeps for a fixed amount (for example 100us) on lock acquisition failure will probably get very close to that behavior (since it almost always bunches), and "win" the benchmark. Still, that would be a terrible mutex for any practical application where there is any contention.
This is not to say that this mutex is not good (or that pthread mutexes are not bad), just that the microbenchmark in question does not measure anything that predicts performance in a real application.
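The pathological lock described above fits in a few lines: on failure to acquire, nap for a fixed interval. Under the spam benchmark this bunches each thread's increments together (good throughput, terrible fairness and latency). Illustration only:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Deliberately bad lock: naps 100us on every failed acquisition attempt.
// It tends to let the current owner re-acquire many times in a row,
// which is exactly the unfairness that "wins" the microbenchmark.
class NapLock {
public:
    void lock() {
        while (flag_.exchange(true, std::memory_order_acquire))
            std::this_thread::sleep_for(std::chrono::microseconds(100));
    }
    void unlock() { flag_.store(false, std::memory_order_release); }
private:
    std::atomic<bool> flag_{false};
};
```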
Wild. Then again, in 2012 I was on a grippy sock vacation.