746 points by pjmlp 142 days ago | 557 comments on HN
Mild positive · Editorial · v3.7 · 2026-02-28 13:12:23
Summary: Technical Transparency & Open Science
A technical blog post benchmarking Python 3.14 performance. The article demonstrates exemplary practices in open communication, education, and reproducible research through transparent methodology, published source code, and explicit acknowledgment of limitations. It implicitly supports UDHR principles of freedom of expression, access to knowledge, and open intellectual contribution, though engagement is primarily technical rather than human-rights-focused.
I'm thankful they included a compiled language for comparison, because most Python benchmarks I see only measure against other versions of Python. But "fast Python" is an oxymoron, and 3.14 doesn't really change that, which I suspect most people expected, given that the language hasn't fundamentally changed.
This isn't a bad thing; I don't think Python has to be, or should be, the fastest language in the world. But it's interesting to see Python being adopted for a purpose it wasn't suited for (high-performance AI computing). Given how slow it is, people seem to think there's a lot of room for performance improvements. Take this line, for instance:
> The free-threading interpreter disables the global interpreter lock (GIL), a change that promises to unlock great speed gains in multi-threaded applications.
No, not really. You might get some speed gains, but the chart shows that if you want "great" speed gains you have two options: 1) JIT compilation, which gets you an order of magnitude, or 2) switching to a statically compiled language, which gets you two orders of magnitude.
But there doesn't seem to be a world where they can tinker with the GIL or optimize Python such that you'll approach JIT or compiled performance. If performance is a top priority, Python is not the language for you. And this matters because if they change Python into a language that's faster to execute, they'll probably have to shift it away from what people like about it: that it's a dynamic, interpreted language good for prototyping and gluing systems together.
Tangential, but I practically owe my life to this guy. He wrote the Flask mega tutorial, which I followed religiously to launch my first website. Then, right before launch, I got stuck on the most critical part of my entire application: piping a fragged file in Flask. He answered my Stack Overflow question, I put his fix live, and the site went viral. Here's the link for posterity's sake: https://stackoverflow.com/a/34391304/4180276
That >2x performance increase over 3.9 in the first test is pretty impressive. A narrow use case for sure, but assuming you can leave your code completely alone and just have it run on a different interpreter via a few CLI commands, that's a nice bump.
It seems well-loved languages such as Python and Ruby (ZJIT, TruffleRuby) have been getting a lot of performance improvements lately. Of course, JS with V8 kickstarted this, followed by PHP.
So for the majority of us: use what you love, and the performance will come.
I don't know how realistic a benchmark is when it only uses tight loops and integer operations. Something with hashmaps and strings more realistically represents everyday CPU-bound Python code; most Python users offload numeric work to external calls.
Please don't build benchmarks that read the clock inside the loop and accumulate a sum of deltas. Just time the whole loop and divide by the iteration count. Reading the time has its own overhead, and its jitter can skew the results.
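A minimal sketch of the suggested pattern, with a made-up stand-in workload (`work` is illustrative, not from the article):

```python
import time

def work():
    # Stand-in for the code under test.
    return sum(range(1000))

N = 10_000

# Noisy: the clock is read twice per iteration, so timer overhead and
# jitter are folded into every sample.
noisy_total = 0.0
for _ in range(N):
    t0 = time.perf_counter()
    work()
    noisy_total += time.perf_counter() - t0

# Better: read the clock once around the whole loop and divide.
t0 = time.perf_counter()
for _ in range(N):
    work()
per_call = (time.perf_counter() - t0) / N

print(f"noisy: {noisy_total / N * 1e6:.2f}us  clean: {per_call * 1e6:.2f}us per call")
```

The stdlib `timeit` module does essentially the second version for you, including picking a sensible iteration count.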
Really pleasing to see how smooth the non-GIL transition was. Compared to the Python 2-to-3 transition, this was positively glorious.
And that it gets within spitting distance of the standard build so fast is really promising too. Hopefully that means the parts not compatible with it get flushed out soon-ish.
I feel like Python should be much faster already. With all the big companies using Python, and its huge popularity, I would have expected a lot of money, work, and research to be put into making Python faster and better.
More than 300 comments here and still no convincing answer: why does the community spend time trying to make CPython faster when PyPy is already much faster? I understand PyPy lacks libraries and feature parity with up-to-date CPython. But can't everyone refocus their efforts, move to PyPy, add all the missing bits, and then continue with PyPy as the "official" Python? Are there any serious technical reasons not to?
For me the "criminal" thing is that PyPy exists on a shoestring and yet delivers the performance and multithreading that others are gradually trying to add to CPython.
Its problem is, IMO, compatibility. Long ago I wanted to run it on Yocto but something or other didn't work. I think this problem is gradually disappearing, but it could probably be solved far more rapidly with a bit of money and effort.
I agree. Unless they make it like 10x faster it doesn't really change anything. It's still a language you only use if you absolutely don't care whatsoever about performance and can guarantee that you never will.
I've been writing Python professionally for a couple of decades, and there've only been 2-3 times where its performance actually mattered. When writing a Flask API, the timing usually looks like: process the request for .1ms, make a DB call for 300ms, generate a response for .1ms. Or writing some data science stuff, it might be like: load data from disk or network for 6 seconds, run Numpy on it for 3 hours, write it back out for 3 seconds.
You could rewrite that in Rust and it wouldn't be any faster. In fact, a huge chunk of the common CPU-expensive stuff is already a thin wrapper around C or Rust, etc. Yeah, it'd be really cool if Python itself were faster. I'd enjoy that! It'd be nice to unlock even more things that were practical to run directly in Python code instead of swapping in a native code backend to do the heavy lifting! And yet, in practice, its speed has almost never been an issue for me or my employers.
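To illustrate the "thin wrapper around C" point without pulling in NumPy, here is a hedged comparison using the builtin `sum`, whose loop runs in the interpreter's C implementation rather than in Python bytecode (timings will vary by machine):

```python
import time

def py_loop_sum(n):
    # One interpreted bytecode iteration per element.
    total = 0
    for i in range(n):
        total += i
    return total

def c_backed_sum(n):
    # The loop runs inside the C implementation of the builtin sum().
    return sum(range(n))

n = 1_000_000
t0 = time.perf_counter(); py_loop_sum(n); t_py = time.perf_counter() - t0
t0 = time.perf_counter(); c_backed_sum(n); t_c = time.perf_counter() - t0
print(f"python loop: {t_py:.4f}s  builtin sum: {t_c:.4f}s")
```

On typical CPython builds the builtin is several times faster, for the same reason NumPy and friends are fast: the hot loop never touches the interpreter.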
BTW, I usually do the Advent of Code in Python. Sometimes I've rewritten my solution in Rust or whatever just for comparison's sake. In almost all cases, choice of algorithm is vastly more important than choice of language, where you might have:
* Naive Python algorithm: 43 quadrillion years
* Optimal Python algorithm: 8 seconds
* Rust equivalent: 2 seconds
Faster's better, but the code pattern is a lot more important than the specific implementation.
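The "quadrillion years vs. seconds" gap usually comes from asymptotics, not from the interpreter. A small illustrative sketch (not one of the actual AoC problems):

```python
import time
from functools import lru_cache

def fib_naive(n):
    # Exponential time: the same subproblems are recomputed over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Linear time: each subproblem is computed once and cached.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

t0 = time.perf_counter(); fib_naive(28); naive_t = time.perf_counter() - t0
t0 = time.perf_counter(); fib_memo(28); memo_t = time.perf_counter() - t0
print(f"naive: {naive_t:.4f}s  memoized: {memo_t:.6f}s")
```

Rewriting the naive version in Rust buys a constant factor; fixing the algorithm changes the complexity class.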
> That said, I wonder if GIL-less Python will one day enable GIL-less C FFI? That would be a big win that Python needs.
I'm pretty sure that's what free-threading is today. That's why it can't be enabled by default, AFAIK: several C FFI libraries haven't gone "GIL-less" yet.
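If you want to check what your own interpreter is doing, here is a small sketch (the `sys._is_gil_enabled` probe exists only on CPython 3.13+, hence the `hasattr` guard; on older versions the GIL is simply always on):

```python
import sys
import sysconfig

def gil_status():
    """Return (built_for_free_threading, gil_enabled_at_runtime)."""
    # Py_GIL_DISABLED is set for free-threading builds of CPython.
    built_ft = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
    # Even on a free-threading build, the GIL can be re-enabled at
    # runtime, e.g. when an extension module that requires it loads.
    gil_on = sys._is_gil_enabled() if hasattr(sys, "_is_gil_enabled") else True
    return built_ft, gil_on

print(gil_status())
```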
Keep in mind that the two scripts I used in my benchmark are written in pure Python, without any dependencies. This is the sweet spot for PyPy. Once you start including dependencies that have native code, the JIT is less effective. Nevertheless, the performance for pure Python code is out of this world, so I definitely intend to play more with it!
Because in the real world, for code where performance is needed, you run the profiler and find either that the time is spent on I/O or that it is spent inside native code.
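That workflow is easy to sketch with the stdlib profiler; here the "I/O" is a hypothetical `time.sleep` stand-in for a database or network call:

```python
import cProfile
import io
import pstats
import time

def fake_db_call():
    # Stand-in for blocking I/O (DB query, HTTP request, disk read).
    time.sleep(0.05)

def handle_request():
    fake_db_call()
    return sum(i * i for i in range(1000))  # the pure-Python part

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())  # the sleep dominates; the Python arithmetic is noise
```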
It doesn't play nice with a lot of popular Python libraries. NumPy, Pandas, TensorFlow, and others rely on CPython's C API, which can cause issues under PyPy.
> [Donald Knuth] firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features
This is such a breath of fresh air in a world where everything is considered obsolete after like 3 years. Our industry has a disease, an insatiable hunger for newness over completeness or correctness.
There's no reason we can't be writing code that lasts 100 years. Code is just math. Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago? LOL dude get with the times, everyone uses Equately for their equations now. It was made by 3 interns at Facebook, so it's pretty much the new hotness." No, I don't think I will use "Equately", I think I'll stick to the tried-and-true idea that has been around for 3000 years.
Forget new versions of everything all the time. The people who can write code that doesn't need to change might be the only people who are really contributing to this industry.
Speaking only for myself, and in all sincerity: every year, there is some feature of the latest CPython version that makes a bigger difference to my work than faster execution would. This year I am looking forward to template strings, zstd, and deferred evaluation of annotations.
It's pretty simple. Nobody wants to do ML R&D in C++.
TensorFlow is a C++ library with Python bindings. PyTorch has supported a C++ interface for some time now, yet virtually nobody uses C++ for ML R&D.
The relationship between Python and C/C++ is the inverse of the usual backend/wrapper cases. C++ is the replaceable part of the equation; it's a means to an end. It's just there because Python isn't fast enough. Nobody would really care if some other high-performance language took its place.
Speed is important, but C++ is even less suited for ML R&D.
Off-topic, but I absolutely loathe the new Flask logo. The old one[0] has this vintage, crafty feel, and the new one[1] looks like it was made by a starving high schooler experimenting with WordArt.
For anyone else wondering whether to click to find out what "fragged file" means: no, it's not about Quake, and the linked page doesn't mention "frag" at all. The question asks how to stream a file to the client in Flask, as opposed to reading it all into memory at once and then sending it on. I figured as much (also because of e.g. IP fragmentation), but it's the first time I've heard this as an alternative term for streaming.
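For reference, the usual Flask streaming pattern is to return an iterator instead of a fully-read body. The generator half is plain Python (the file name and chunk size here are illustrative); on the Flask side you would return something like `Response(read_in_chunks(path))` rather than `file.read()`:

```python
def read_in_chunks(path, chunk_size=64 * 1024):
    """Yield a file piece by piece so it never sits in memory whole.
    WSGI servers, Flask included, can send any iterator of bytes."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk
```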
A lot of Python use cases don't care about CPU performance at all.
In most cases where you do care about CPU performance, you're using NumPy or scikit-learn or Pandas or PyTorch or TensorFlow or NLTK or some other Python library that's more or less just a wrapper around fast C, C++, or Fortran code. The performance of the interpreter almost doesn't matter for these use cases.
Also, those native libraries are a hassle to get to work with PyPy in my experience. So if any part of your program uses those libraries, it's way easier to just use CPython.
There are cases where the Python interpreter's bad performance does matter and where PyPy is a practical choice, and PyPy is absolutely excellent in those cases. They just sadly aren't common and convenient enough for PyPy to be that popular. (Though it's still not exactly unpopular.)
Or have it run some super common use case, like a FastAPI endpoint or a NumPy calculation. Yes, they're not all Python, but it's what most people use Python for.
As someone who was a hardcore python fanboy for a long time, no, no it won't. There are classes of things that you can only reasonably do in a language like rust, or where go/kotlin will save you a crazy amount of pain. Python is fine for orchestration and prototyping, but if it's the only arrow you have in your quiver you're in trouble.
The build of Python that I used has tail calls enabled (option --with-tail-call-interp). So that was in place for the results I published. I'm not sure if this optimization applies to recursive tail calls, but if it does, my Fibonacci test should have taken advantage of the optimization.
There is no "realistic" benchmark, all benchmarks are designed to measure in a specific way. I explain what my goals were in the article, in case you are curious and want to read it.
I agree with you, this is not an in-depth look; it could have been much more rigorous.
But then I think in some ways it's a much more accurate depiction of my use case. I mainly write monte-carlo simulations or simple scientific calculations for a diverse set of problems every day. And I'm not going to write a fast algorithm or use an unfamiliar library for a one-off simulation, even if the sim is going to take 10 minutes to run (yes I use scipy and numpy, but often those aren't the bottlenecks). This is for the sake of simplicity as I might iterate over the assumptions a few times, and optimized algorithms or library impls are not as trivial to work on or modify on the go. My code often looks super ugly, and is as laughably unoptimized as the bubble sort or fib(40) examples (tail calls and nested for loops). And then if I really need the speed I will take my time to write some clean cpp with zmq or pybind or numba.
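As a concrete, made-up example of the kind of throwaway simulation described above, here is a pure-Python Monte Carlo estimate of pi; nothing in it needs NumPy, and it is exactly the sort of tight loop these benchmarks exercise:

```python
import random
import time

def pi_estimate(samples, seed=42):
    """Fraction of random points in the unit square that land inside
    the quarter circle, times four."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

t0 = time.perf_counter()
estimate = pi_estimate(200_000)
print(f"pi ~ {estimate} ({time.perf_counter() - t0:.3f}s)")
```

Fixing the seed makes iterations on the assumptions comparable run to run, which matters more here than raw speed.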
Well, they added an experimental JIT, so that is one step closer to PyPy? Though I would assume the trajectory is to build a new JIT rather than merge in PyPy; hopefully people learned a lot from PyPy.
Article provides transparent educational content on benchmarking methodology, with concrete code examples, explicit reasoning about test design, repeated cautions about limitations, and encouragement for reader learning. FW Ratio: 60%
Observable facts:
- Article includes code examples (fibo.py, bubble.py) with explanations: 'The framework that I built for running this benchmark executes each test function three times and reports the average time.'
- Limitations stated for educational clarity: 'it is really impossible to build an accurate performance profile of something as complex as the Python interpreter.'
- Article explains test design rationale: 'I have chosen these functions mainly because one is recursive and the other is not, so that I have two different coding styles.'
Inferences:
- Transparent methodology with concrete examples and repeated cautions supports knowledge transfer and critical thinking.
- Accessibility to intermediate programmers while teaching advanced principles demonstrates commitment to education.
Article 19: Freedom of Expression · Medium Framing Practice · Editorial +0.30 · SETL +0.17 · FW Ratio: 60%
Content demonstrates transparent communication through explicit methodology disclosure, public code sharing, and invitation for reader feedback. Exemplifies freedom to express technical ideas and engage in open dialogue.
Observable facts:
- Page states: 'have a look at my benchmark, but consider it just one data point and not the last word on Python performance!'
- Page discloses: 'The complete test scripts along with the benchmark scripts are available on the GitHub repository.'
- Article invites reader input: 'Let me know in the comments if you have results that are different than mine.'
Inferences:
- Explicit methodology and source code availability demonstrate transparency consistent with freedom of expression principles.
- Invitation for reader feedback creates space for dialogue and diverse perspectives on benchmark interpretation.
Article 27: Cultural Participation · Medium Advocacy Practice · Editorial +0.30 · SETL +0.17 · FW Ratio: 67%
Content advocates for reproducible research by publishing benchmark scripts without proprietary claims, supporting transparent intellectual contribution and open methodology.
Observable facts:
- Page discloses: 'The complete test scripts along with the benchmark scripts are available on the GitHub repository.'
- Author does not claim proprietary ownership of the benchmarking approach or restrict access to the testing code.
Inferences:
- Public availability of methodology and reproducible scripts demonstrates alignment with open-science principles.
Article 29: Duties to Community · Medium Practice · Editorial +0.20 · SETL +0.14 · FW Ratio: 67%
Article demonstrates intellectual responsibility through repeated cautions about benchmark limitations and invitations for reader correction, acknowledging the community dimension of knowledge.
Observable facts:
- Article states: 'Yes, even though I'm going to share the results of my benchmark, I feel I have to warn you again...that generic benchmarks like this one are not really very useful.'
- Repeated caveat: 'have a look at my benchmark, but consider it just one data point.'
- Author invites correction: 'Let me know in the comments if you have results that are different than mine.'
- Article acknowledges limits: 'it is really impossible to build an accurate performance profile of something as complex as the Python interpreter.'
Inferences:
- Repeated cautions and explicit acknowledgment of limitations demonstrate intellectual responsibility and accountability.
- Invitation for reader feedback and correction reflects commitment to communal learning and knowledge refinement.
Article 23: Work & Equal Pay · Low Advocacy · Editorial -0.10 · SETL ND · FW Ratio: 67%
Content discusses professional software development workloads without reference to labor rights, fair wages, worker protections, or unionization. The work context is treated purely as a technical optimization problem.
Observable facts:
- Article discusses 'real-world applications' and 'multi-threaded applications' in professional software development contexts.
- No discussion of labor rights, wages, worker protection, or worker welfare appears in the content.
Inferences:
- Focus on technical optimization without reference to labor dimensions represents an absence of labor-rights framing in a work context.
All other articles scored ND in the editorial channel (Preamble; Articles 1-18, 20-22, 24, 25, 28, 30): no content addressing those UDHR themes.
Structural Channel
What the site does
Article 26: Education · Medium Advocacy Practice · Structural +0.30 · Context Modifier ND · SETL +0.20
Blog structure supports learning through a public code repository, GitHub links for reproducibility, and open comments for peer discussion. Free access to educational content.
Article 19: Freedom of Expression · Medium Framing Practice · Structural +0.20 · Context Modifier ND · SETL +0.17
Blog structure enables open communication via a public comments section and GitHub repository access. Content is freely accessible without restrictions.
Article 27: Cultural Participation · Medium Advocacy Practice · Structural +0.20 · Context Modifier ND · SETL +0.17
The GitHub repository provides public access to benchmark code and methodology without proprietary restrictions or paywalls.
Article 29: Duties to Community · Medium Practice · Structural +0.10 · Context Modifier ND · SETL +0.14
Blog structure enables community feedback via the comments section; the author participates in clarifying responses.
All other articles scored ND in the structural channel (Preamble; Articles 1-18, 20-22, 24, 25, 28, 30): no structural signals regarding those UDHR themes. Article 23 (Work & Equal Pay, Low Advocacy): no structural signals regarding labor rights or worker welfare.
Supplementary Signals
How this content communicates, beyond directional lean.
build 6157e1d+ai0o · deployed 2026-02-28 16:55 UTC · evaluated 2026-02-28 16:29:11 UTC