
hbogert | yesterday at 9:07 AM

I always have the unfounded feeling that the Go compiler/linker does not remove dead code. Go binaries have a large minimal size; TinyGo, in contrast, can produce awesomely small binaries.
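
For a concrete sense of the gap, here's the same hello-world built with both toolchains. This is a sketch; the size figures in the comments are ballpark numbers from memory and vary a lot by Go/TinyGo version and target, so treat them as assumptions.

    // hello.go - identical source for both builds
    package main

    import "fmt"

    func main() {
        fmt.Println("hello")
    }

    // Ballpark comparison (figures are assumptions, not measurements):
    //
    //   go build -o hello-go hello.go                      # typically a couple of MB
    //   go build -ldflags="-s -w" -o hello-go hello.go     # smaller: strips symbol table and DWARF
    //   tinygo build -o hello-tiny hello.go                # often an order of magnitude smaller,
    //                                                      # especially on embedded targets
    //
    // Most of the difference is runtime and type metadata, not user code that
    // escaped dead code elimination.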


Replies

clktmr | yesterday at 9:55 AM

It's pretty good at dead code elimination. The size of Go binaries is in large part due to the runtime implementation. Remove a bunch of the runtime's features (profiling, stack traces, sysmon, optimizations that avoid allocations, maybe even multithreading...) and you'll end up with much smaller binaries. I would love it if there were a build tag like "runtime_tiny" that provided such an implementation.
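
You can check both halves of that claim yourself. A quick sketch (my usual approach, nothing rigorous): build a program with an obviously unreachable function and confirm the linker drops it, then look at which symbols actually fill the binary.

    // dce.go - quick check that the linker drops unreachable functions.
    package main

    import "fmt"

    // unusedHelper is never called from anywhere, so the linker should drop it.
    func unusedHelper() string {
        return fmt.Sprintf("%d", 42)
    }

    func main() {
        fmt.Println("hello")
    }

    // Inspection commands (output varies by Go version):
    //
    //   go build -o dce dce.go
    //   go tool nm dce | grep unusedHelper      # no output: the symbol was eliminated
    //   go tool nm -size -sort size dce | head  # the largest symbols are mostly
    //                                           # runtime/reflect machinery
    //
    // Which is why the "minimal size" is large even though dead code elimination works.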

jrockway | yesterday at 5:37 PM

I think it depends on the codebase. There are some reflection calls that can cause dead code elimination to fail, though I believe it's less easy to run into than it was a few years ago. One common dependency, at least in my line of work, is the Kubernetes API, and it manages to be both gigantic and to trigger this edge case (last I looked), so yeah, the binaries end up pretty big.
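
The usual culprit is reflect.Value.MethodByName with a name that isn't a constant; my understanding (worth re-checking against your toolchain version) is that the linker then can't prune methods it would otherwise drop, so everything those methods pull in stays. A minimal sketch of the pattern:

    package main

    import (
        "fmt"
        "os"
        "reflect"
    )

    type API struct{}

    // Neither method is called directly anywhere, so normally the linker
    // could drop both of them.
    func (API) Healthz() string { return "ok" }
    func (API) Metrics() string { return "lots of numbers" }

    func main() {
        // The method name comes from user input, so the linker can't prove
        // which methods are actually needed and keeps them conservatively.
        // (A constant string argument is the case newer toolchains handle better.)
        name := os.Args[1]
        m := reflect.ValueOf(API{}).MethodByName(name)
        fmt.Println(m.Call(nil)[0].String())
    }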

Another thing people run into is that big binaries mean slow container startup times, and that time is mostly spent in gzip. If you use Zstandard layers instead of gzip layers, startup time improves: gzip decompression is actually very slow, and the OCI spec no longer mandates it.
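
If you want to sanity-check the decompression gap yourself, here's a rough sketch. The file names are hypothetical (a layer compressed both ways out of band), and klauspost/compress is a third-party package choice on my part, not something the OCI spec prescribes.

    // Times decompressing the same layer blob in both formats.
    package main

    import (
        "compress/gzip"
        "fmt"
        "io"
        "os"
        "time"

        "github.com/klauspost/compress/zstd" // third-party; assumed choice
    )

    func timeIt(label string, open func() (io.Reader, error)) {
        start := time.Now()
        r, err := open()
        if err != nil {
            panic(err)
        }
        n, err := io.Copy(io.Discard, r) // decompress and throw the bytes away
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s: %d bytes in %v\n", label, n, time.Since(start))
    }

    func main() {
        timeIt("gzip", func() (io.Reader, error) {
            f, err := os.Open("layer.tar.gz")
            if err != nil {
                return nil, err
            }
            return gzip.NewReader(f)
        })
        timeIt("zstd", func() (io.Reader, error) {
            f, err := os.Open("layer.tar.zst")
            if err != nil {
                return nil, err
            }
            return zstd.NewReader(f)
        })
    }

If memory serves, BuildKit and containerd can both produce/consume zstd layers these days, but check what your builder, registry, and runtime actually support.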

gethly | yesterday at 9:15 AM

Go has a runtime. That alone is over a megabyte. TinyGo, on the other hand, has a very limited (smaller) runtime. In other words, you don't know what you're talking about.