672 points by pjmlp 1032 days ago | 238 comments on HN
The article has a humorous history of graphics APIs that I very much enjoyed. I did the Vulkan tutorial for kicks one month on Windows (with an Nvidia GPU) and it was no joke, super fiddly to do things in. I look forward to trying WebGPU in anything that isn't JavaScript (some language that compiles to WebASM or transpiles, I guess).
> ¹² If I were a cynical, paranoid conspiracy theorist, I would float the theory here that Apple at some point decided they wanted to leave open the capability to sue the other video card developers on the Khronos board, so they are aggressively refusing to let their code touch anything that has touched the Vulkan patent pool to insulate themselves from counter-suits. Or that is what I would say if I were a cynical, paranoid conspiracy theorist. Hypothetically.
Such a shame that they lobbied against SPIR-V on the web. Textual formats are evil.
> The middleware developers were also in the room at the time, the Unity and Epic and Valve guys. They were beaming as the Khronos guy explained this. Their lives were about to get much, much easier.
Lol, I wonder what their opinion is now 8 years later (at least those who haven't been burned out by Vulkan).
> In fact it is so good I think it will replace Vulkan as well as normal OpenGL, and become just the standard way to draw, in any kind of software, from any programming language.
I fully agree with that. For a lot of use cases WebGPU has everything you need, which means it has the potential to become the cross-platform graphics API OpenGL dreamed of being. And as a bonus you have a realistic way to run whatever app you are writing in the browser with WASM+WebGPU.
I just think for AAA games Vulkan, Metal and DirectX12 will probably still be the way to go. But GUI libraries? Less-than-highest-end games? There is just no point once you can use WebGPU everywhere. And then if you want a browser demo you have a realistic chance of getting one.
I suspect this article may even be underestimating the impact of WebGPU. I'll make two observations.
First, for AI and machine learning type workloads, the infrastructure situation is a big mess right now unless you buy into the Nvidia / CUDA ecosystem. If you're a researcher, you pretty much have to, but increasingly people will just want to run models that have already been trained. Fairly soon, WebGPU will be an alternative that more or less Just Works, although I do expect things to be rough in the early days. There's also a performance gap, but I can see it closing.
Second, for compute shaders in general (potentially accelerating a large variety of tasks), the barrier to entry falls dramatically. That's especially true on web deployments, where running your own compute shader costs somewhere around 100 lines of code. But it becomes practical on native too, especially Rust where you can just pull in a wgpu dependency.
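The "somewhere around 100 lines" claim is easy to believe; a minimal sketch of a WebGPU compute dispatch is shown below. This assumes a runtime exposing `navigator.gpu` (i.e. a browser); the doubling kernel and buffer handling are illustrative, not from the article.

```javascript
// WGSL kernel: double every element of a storage buffer.
const shaderSrc = /* wgsl */ `
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
  data[id.x] = data[id.x] * 2.0;
}`;

// Hypothetical helper: run the kernel over a Float32Array and read back
// the result. Only callable where navigator.gpu exists.
async function doubleOnGpu(input) {
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // Storage buffer the shader reads and writes, seeded with the input.
  const buf = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(buf.getMappedRange()).set(input);
  buf.unmap();

  // Staging buffer for copying the result back to the CPU.
  const readback = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });

  const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: {
      module: device.createShaderModule({ code: shaderSrc }),
      entryPoint: 'main',
    },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: buf } }],
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(buf, 0, readback, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  return new Float32Array(readback.getMappedRange().slice(0));
}
```

That really is the whole pipeline: shader, buffers, pipeline, bind group, dispatch, readback — compare that with the boilerplate a raw Vulkan compute setup needs.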
As for text being one of the missing pieces, I'm hoping Vello and supporting infrastructure will become one of the things people routinely reach for. That'll get you not just text but nice 2D vector graphics with fills, strokes, gradients, blend modes, and so on. It's not production-ready yet, but I'm excited about the roadmap.
[Note: very lightly adapted from a comment at cohost; one interesting response was by Tom Forsyth, suggesting I look into SYCL]
I believe the main use of WebGPU, as well as WebGL, will be fingerprinting; therefore they should be put behind a permission. It is ridiculous that browser developers do nothing to reduce the API surface usable for fingerprinting and instead just extend it.
> so reportedly Apple just got absolutely everything they asked for and WebGPU really looks a lot like Metal
...tbh, I wish WebGPU would look even more like Metal, because the few parts that are not inspired by Metal kinda suck (for instance the baked BindGroup objects, which require you to know upfront what resource combinations will be needed at draw time, or otherwise to create and discard BindGroup objects — which are actual 'heavyweight' JavaScript objects — on the fly).
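The usual mitigation for that churn is to cache bind groups per resource rather than recreate them every draw. A sketch of the pattern (the helper name and single-binding layout are hypothetical):

```javascript
// Since a bind group bakes in specific resources, a draw loop that swaps
// buffers must either enumerate all combinations upfront or create bind
// groups on the fly. A per-resource cache pays the creation cost once per
// unique buffer instead of once per draw call.
const bindGroupCache = new Map();

function bindGroupFor(device, layout, buffer) {
  let bg = bindGroupCache.get(buffer);
  if (!bg) {
    // Cache miss: create and remember the bind group for this buffer.
    bg = device.createBindGroup({
      layout,
      entries: [{ binding: 0, resource: { buffer } }],
    });
    bindGroupCache.set(buffer, bg);
  }
  return bg;
}
```

In Metal you would just call `setBuffer` per draw; here the cache is doing the work the API declines to do for you.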
As is often the case, the guesses about historical reasoning could be all over the place. And not just about WebGPU (oh, that never-ending "why not SPIR-V?" discussion). Claiming that Vulkan and D3D12 were kicked off by a GDC talk about AZDO sounds ridiculous to me. These APIs are about explicit control; they let you talk to the driver more, and more directly, which is in a way the opposite of the AZDO approach.
Anyway, congratulations on the WebGPU release in stable Chrome on some of the platforms! Looking forward to seeing it widely available.
>a lot of the Linux devices I've been checking out don't support Vulkan even now after it's been out seven years. So really the only platform where Vulkan runs natively is Android
I got really curious about this. To my understanding, I have been using Vulkan on my Linux desktop computers for quite some time now. What Linux devices could the author mean?
Thanks for writing this article! I am super excited about WebGPU. One not-so-fancy prospect worth commenting about on HN, though, is replacing Electron.
With WASM-focused initiatives to create hardware accelerated UI in the browser, we may soon see a toolchain that deploys to a WebGPU canvas and WASM in the browser, deploys native code linked to WGPU outside the browser, and gives the industry a roadmap to Electron-style app development without the Chromium overhead.
I love the level of detail in this post, really helpful for filling in a lot of the gaps in my knowledge about the history and motivations at play. This is exactly the kind of post I love to see on HN.
I haven't played with WebGPU much (*cough* Linux support *cough*) but I'm looking forward to it. And I generally agree that even more than the API itself, having GPU code that's easily portable between the web and native languages is a pretty big deal to me.
If the browser ever becomes a popular platform for mainstream games then I suspect the quality of web applications will also boom just because of all the talent pouring into the browser as a platform.
This was the most well-written technical piece I've read in a long time. Kudos Andi. The conversational tone, imagery, well-sourced history - it was perfect.
It was interesting to read that Rust has a WebGPU implementation [0] outside of the browser. I wonder if this will become a more general standard.
Anyone know how WebGPU compares to CUDA in terms of performance and functionality?
I do not expect this to happen at all. WebGPU (including its shading language) is a subset of Vulkan. Furthermore, it is up to the runtime to expose vendor extensions to the code (as one example, Node supports ray tracing, but nothing else does). This means that WebGPU will be perpetually behind Vulkan.
That being said, if WebGPU does what you need then don't bother with Vulkan.
It's the usual story with web APIs, the implementation probably has resource quotas to keep applications from accidentally or deliberately doing denial of service, but you the developer targeting that implementation aren't allowed to know what the quotas are. You just have to make an educated guess at how much you can get away with and hope for the best. WebAssembly has a similar issue where consuming too much memory will get your app killed, but there's no reliable way to know how much is "too much" until it's too late to recover.
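The WebAssembly side of this is easy to demonstrate: there is no API to ask the quota, so the only option is to probe. A sketch of a best-effort probe (the page limit parameter is illustrative):

```javascript
// Grow a WebAssembly.Memory one page (64 KiB) at a time until the runtime
// refuses, to estimate how much it will actually grant. There is no way to
// query this upfront; grow() just throws RangeError at the limit, and the
// answer is only valid at the moment you measure it.
function probeMaxPages(limitPages = 1024) {
  const mem = new WebAssembly.Memory({ initial: 1 });
  let pages = 1;
  while (pages < limitPages) {
    try {
      mem.grow(1); // throws when the runtime refuses more memory
      pages += 1;
    } catch {
      break;
    }
  }
  return pages;
}
```

A real application can't even do this safely, since the probe itself consumes the memory it is measuring — which is exactly the "educated guess and hope" situation the comment describes.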
That's probably already possible with just a huge HTML page? At least on my system, if I create such a page and open it via a file:// URL, firefox will happily gobble up memory.
So wait... do we now have a situation where the browser engine has converged with where the Java Virtual Machine was and provided a container to run write-once-run-anywhere desktop apps compiled to WASM?
All we need is the last mile -- progressive web apps -- to include better support for integration with desktop OSes and we have a way to take WASM apps and drag them to the desktop.
The up and coming languages for writing these WASM apps seem to be Go and Rust. Here's a Rust example:
I don’t know about WebGPU, but WebGL is missing some key performance features from OpenGL ES, like client-side buffers, pixel buffer objects, and MSAA render-to-texture frame buffers.
MS called all their multimedia/gaming APIs DirectSomething for a while, then decided to group it all together into DirectX. It was also a time when you had to put an X into everything because you were XTREME.
This is the discussion I hoped to find when clicking on the comments.
> Fairly soon, WebGPU will be an alternative...
So while the blog focused on the graphical utility of WebGPU, the current point of WebGPU's implementation is that websites/apps can now interface with the GPU in a more direct/advantageous way to render graphics.
But what you're suggesting is that in the future new functionality will likely be added to take advantage of your GPU in other ways, such as training ML models and then using them via an inference engine all powered by your local GPU?
Is the reason you can't accomplish that today bc APIs haven't been created or opened up to allow such workloads? Are there not lower level APIs available/exposed today in WebGPU that would allow developers to begin the design of browser based ML frameworks/libraries?
Was it possible to interact with the GPU before WebGPU via Web Assembly?
Other than ML and graphics/games (and someone is probably going to mention crypto), are there any other potentially novel uses for WebGPU?
While I don't disagree with the essence of what you're concerned about, I imagine that every new step/functionality in the browser evolution introduces new minutiae that can be used in fingerprinting.
Also, once/if the WebGPU interface expands into more direct GPU functionality, I imagine there will be many who will try to use your GPU for crypto mining/etc.
But surely a permission option/prompt will eventually be introduced — though probably not by Google in Chrome. Anyone know the roadmap/timeline for WebGPU on Firefox and other browsers?
People keep saying "WebGPU will just be used for fingerprinting" here on HN as a meme, but it's clearly not the case. There are obvious use cases not involving fingerprinting, and WebGPU has tons of conveniences that make it superior to WebGL, enabling things that are nearly impossible otherwise (atomics, to name just one).
Could someone explain what kinds of useful things will become possible with it?
I don't get it yet, but HN seems excited about it, so I'd like to understand it.
What I get so far - running models that can be run on consumer sized GPUs will become easier, because users won't need to download a desktop app to do so. This is limited for now by the lack of useful models that can be run on consumer GPUs, but we'll get smaller models in the future. And it'll be easier to make visualizations, games, and VR apps in the browser. Is that right and what other popular use cases where people currently have to resort to WebGL or building a desktop app will get easier that I'm missing?
WebGPU has no equivalent to tensor cores to my understanding; are there plans to add something like this? Or would this be "implementation sees matmul-like code; replaces with tensor core instruction". For optimal performance, my understanding is that you need tight control of e.g. shared memory as well -- is that possible with WebGPU?
On NVIDIA GPUs, flops without tensor cores are ~1/10th flops with tensor cores, so this is a pretty big deal for inference and definitely for training.
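On the shared-memory half of the question: WGSL does give you explicit workgroup-shared memory, which is what a tiled matmul needs, even though nothing in the spec targets tensor cores directly. A skeleton of the tiling pattern, with the loads and loop bounds elided for brevity (bindings and tile size are illustrative):

```javascript
// WGSL source for a tiled matmul skeleton. var<workgroup> is the analogue
// of CUDA's __shared__, and workgroupBarrier() synchronizes the workgroup
// between the cooperative-load and accumulate phases.
const tiledMatmulWgsl = /* wgsl */ `
@group(0) @binding(0) var<storage, read> a: array<f32>;
@group(0) @binding(1) var<storage, read> b: array<f32>;
@group(0) @binding(2) var<storage, read_write> c: array<f32>;

// Tiles staged in fast workgroup-shared memory.
var<workgroup> tileA: array<array<f32, 16>, 16>;
var<workgroup> tileB: array<array<f32, 16>, 16>;

@compute @workgroup_size(16, 16)
fn main(@builtin(local_invocation_id) lid: vec3<u32>,
        @builtin(workgroup_id) wid: vec3<u32>) {
  // Per tile of the K dimension: cooperatively load one 16x16 tile of a
  // and b into shared memory, barrier, accumulate partial dot products,
  // barrier again before overwriting the tiles. (Loads and the K loop
  // omitted here.)
  workgroupBarrier();
  // ... accumulate ...
  workgroupBarrier();
}`;
```

So the tiling-level control exists; what's missing is any way to reach the dedicated matrix units, so a WGSL matmul runs on the plain FP32 pipeline unless the driver happens to pattern-match it.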
Coming at it from a graphics processing perspective, working on a lot of video editing, it's annoying that just as GPUs start to become affordable as people turn their back on cryptobro idiocy and stop chasing the Dunning-Krugerrand, they've started to get expensive again because people want hardware-accelerated Eliza chatbots.
Anyway your choices for GPU computing are OpenCL and CUDA.
If you write your project in CUDA, you'll wish you'd used OpenCL.
If you write your project in OpenCL, you'll wish you'd used CUDA.
This is also my fear, but... I don't know, I think the potential outweighs the risks.
In some ways the problem with everyone trying to render to canvases and skip the DOM starts from education and a lack of native equivalents to the DOM that really genuinely showcase the strengths beyond having a similar API. I think developers come into the web and they have a particular mindset about graphics that pushes them away from "there should be a universal semantic layer that I talk to and also other things might talk to it", and instead the mindset is "I just want to put very specific pixels on the screen, and people shouldn't be using my application on weird screen configurations or with random extensions/customizations anyway."
And I vaguely think that's something that needs to be solved more by just educating developers. It'll be a problem until something happens and native platforms either get support for a universal semantic application layer that's accessible to the user and developers start seeing the benefits of that, or... I don't know. That's maybe a long conversation. But there has to be a shift, I don't think it's feasible to just hold off on GPU features. At some point native developers need to figure out why the DOM matters or we'll just keep on having this debate.
People wanting to write code that runs on both native and the web is good, it's a reasonable instinct. Electron-style app development isn't a bad goal. It's just how those apps get developed and what parts of the web get thrown out because developers aren't considering them to be important.
The only possible application I can imagine for that would be videogames, though.
Because HTML+CSS+JS provides a fantastic cross-platform UI toolkit that everybody knows how to use.
Videogames create their own UI in order to have lots of shiny effects and a crazy immersive audio-filled controller-driven experience... but non-videogames don't need or want that.
Heck, I'm actually expecting the opposite -- for the entire OS interface to become based on Chromium HTML+CSS+JS, and eventually Electron apps don't bundle a runtime, because they're just apps. My Synology DSM is an entire operating system whose user interface runs in the browser and it just... makes sense.
So much this. Metal is so elegant to use. I've tried reading through Vulkan docs and tutorials, and it's so confusing.
Also, this seems like some major revisionist history:
>This leads us to the other problem, the one Vulkan developed after the fact. The Apple problem. The theory on Vulkan was it would change the balance of power where Microsoft continually released a high-quality cutting-edge graphics API and OpenGL was the sloppy open-source catch up. Instead, the GPU vendors themselves would provide the API, and Vulkan would be the universal standard while DirectX would be reduced to a platform-specific oddity. But then Apple said no. Apple (who had already launched their own thing, Metal) announced not only would they never support Vulkan, they would not support OpenGL, anymore.
What I remember happening was that Apple was all-in on helping Khronos come up with what would eventually become Vulkan, but Khronos kept dragging their feet on getting something released. Apple finally got fed up and said, "We need something shipping and we need it now." So they just went off and did it themselves. DirectX 12 seemed like a similar response from Microsoft. It always seemed to me that Vulkan had nobody but themselves to blame for these other proprietary libraries being adopted.
Yes. Kvark pushed WGPU as a cross-platform graphics base for Rust, and that worked out quite well.
It's actually better in an application than in the browser. In an application, you get to use real threads and utilize the computer's full resources, both CPUs and GPUs. In browsers, the Main Thread is special, you usually can't have threads at different priorities, there's much resource limiting, and the JavaScript callback mindset gets in the way.
Here's video from my metaverse viewer, which uses WGPU.[1] This seems to be, much to my surprise, the most photo-realistic game-type 3D graphics application yet written in Rust.
The stack for that is Egui (for 2D menus), Rend3 (for a lightweight scene graph and memory management), WGPU (as discussed above), Winit (cross-platform window event management), and Vulkan. Runs on both Linux and Windows. Should work on MacOS, but hasn't been debugged there. Android has a browser-like thread model, so, although WGPU supports those targets, this program won't work on Android or WASM. All this is in Rust.
It's been a painful two year experience getting that stack to work. It suffers from the usual open source problem - everything is stuck at version 0.x, sort of working, and no longer exciting to work on. The APIs at each level are not stable, and so the versions of everything have to be carefully matched. When someone changes one level, the other levels have to adapt, which takes time. Here's a more detailed discussion of the problems.[2] The right stuff is there, but it does not Just Work yet. Which is why we're not seeing AAA game titles written in Rust. You can't bet a project with a deadline on this stack yet. As you can see from the video, it does do a nice job.
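The version-matching problem is concrete enough to sketch as a dependency manifest. A hypothetical `Cargo.toml` fragment (the version numbers are made up for illustration — the point is the lockstep constraint, not the specific releases):

```toml
# Each layer only works against a specific 0.x release of the one below it,
# so any upgrade has to move through the whole stack in lockstep.
[dependencies]
egui  = "0.21"   # 2D menus
rend3 = "0.3"    # scene graph; pins a specific wgpu 0.x internally
wgpu  = "0.15"   # must match the version rend3 was built against
winit = "0.27"   # must match the version wgpu's surface support expects
```

When any one crate bumps its wgpu or winit dependency, everything above it is blocked until the whole column agrees again.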
It's encouraging that WGPU is seen as a long-term solution, because it improves the odds of that work getting completed.
Still, I was chatting with one of the few people to ship a good 3D game (a yacht racing simulator) in Rust, and he admits that he simply used "good old DX11".
Kinda doubt it's that great for fingerprinting. Look at something like WebGL report and any semi-modern system will support everything. "Computer was made in the last 10 years" isn't that useful. Even knowing the exact GPU... we're talking maybe 5 major vendors? Seems like if you really care for anonymity you need a vpn and a special browser at this point anyway.