Hacker News

root_axis · today at 5:01 AM

I'm surprised to see it's that much faster than SWC. Does anyone have any general details on how that performance is achieved?


Replies

snowhale · today at 1:23 PM

Arena allocation is a big part of it, but oxc also benefits from not having to support the breadth of legacy transforms that SWC accumulated over time. SWC carries a lot of surface area from being the go-to Babel replacement, whereas oxc could design its AST shape from scratch with allocation patterns in mind. The self-hosting trap (writing JS tooling in JS) set a performance ceiling for so long that when you finally drop down to Rust and rethink the data layout, the gains feel almost unfair.
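To illustrate the pattern being described (not oxc's actual API — the `Arena`/`Node` types below are hypothetical), here is a minimal index-based arena: every AST node lives in one contiguous `Vec`, children are referenced by index instead of `Box` pointers, and dropping the arena frees the whole tree in a single deallocation:

```rust
// Hypothetical sketch of arena-style AST allocation, not oxc's real types.
#[derive(Debug)]
enum Node {
    Num(f64),
    Add(NodeId, NodeId), // children are indices into the arena, not Boxes
}

#[derive(Copy, Clone, Debug)]
struct NodeId(usize);

struct Arena {
    nodes: Vec<Node>, // one contiguous allocation holds every node
}

impl Arena {
    fn new() -> Self {
        Arena { nodes: Vec::new() }
    }
    // Allocating a node is just a push; no per-node heap allocation.
    fn alloc(&mut self, n: Node) -> NodeId {
        self.nodes.push(n);
        NodeId(self.nodes.len() - 1)
    }
    fn get(&self, id: NodeId) -> &Node {
        &self.nodes[id.0]
    }
}

fn eval(arena: &Arena, id: NodeId) -> f64 {
    match arena.get(id) {
        Node::Num(v) => *v,
        Node::Add(l, r) => eval(arena, *l) + eval(arena, *r),
    }
}

fn main() {
    let mut arena = Arena::new();
    // Build the AST for `1 + 2`: three nodes, one Vec, zero Boxes.
    let one = arena.alloc(Node::Num(1.0));
    let two = arena.alloc(Node::Num(2.0));
    let sum = arena.alloc(Node::Add(one, two));
    println!("{}", eval(&arena, sum)); // prints 3
    // `arena` drops here: every node is freed in one deallocation.
}
```

A pointer-chasing `Box`-per-node tree pays one allocation and one deallocation per node; the arena version pays for the `Vec` growth amortized, and teardown is a single free — which is where the GC-pressure comparison to JS tooling comes from.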

grabshot_dev · today at 11:12 AM

One thing worth noting: beyond raw parse speed, oxc's AST is designed around arena allocation, while SWC takes a more traditional per-node approach. In practice this means oxc scales better when you run multiple passes (lint + transform + codegen) over the same file, because you avoid a ton of intermediate allocations.
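The multi-pass point can be sketched like this: once the nodes sit in one flat buffer, several passes can borrow it immutably and allocate nothing in between (again a hypothetical `Node` layout, not either tool's real AST):

```rust
// Hypothetical flat AST; two read-only passes share one buffer.
#[derive(Debug)]
enum Node {
    Num(f64),
    Add(usize, usize), // child indices into the same slice
}

// "Lint-like" pass: walks the buffer, allocates nothing.
fn count_nodes(nodes: &[Node]) -> usize {
    nodes.len()
}

// "Analysis-like" pass over the same buffer, still allocation-free.
fn max_depth(nodes: &[Node], id: usize) -> usize {
    match &nodes[id] {
        Node::Num(_) => 1,
        Node::Add(l, r) => 1 + max_depth(nodes, *l).max(max_depth(nodes, *r)),
    }
}

fn main() {
    // `1 + (2 + 3)` stored in one Vec; the root is index 4.
    let nodes = vec![
        Node::Num(1.0),  // 0
        Node::Num(2.0),  // 1
        Node::Num(3.0),  // 2
        Node::Add(1, 2), // 3
        Node::Add(0, 3), // 4
    ];
    // Both passes borrow `nodes`; no intermediate tree is built.
    println!("{} {}", count_nodes(&nodes), max_depth(&nodes, 4)); // prints "5 3"
}
```

In a GC'd JS toolchain, each pass over thousands of files tends to churn short-lived node objects; with this layout the buffer is built once per file and only read afterwards.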

We switched a CI pipeline from Babel to SWC last year and got roughly an 8x improvement. Tried oxc's transformer more recently on the same codebase and it shaved another 30-40% off on top of SWC. The wins compound when you have thousands of files and the GC pressure from all those AST nodes starts to matter.

ameliaquining · today at 5:46 AM

They wrote a post (https://oxc.rs/docs/learn/performance) but it doesn't include direct comparisons to SWC.
