Hacker News

floating-io · today at 7:38 PM

You can have this problem with any kind of thread -- including OS threads -- if you do an unbounded spawn loop. Go is hardly unique in this.

Goroutines are actually better, AFAIK, because they're multiplexed onto a thread pool that can be much smaller than the number of active goroutines.

If my quick skim gave me a correct understanding, then the problem here looks more like architecture. Put simply: does the memcached client really require a new TCP connection for every lookup? I would expect those connections to be pooled, just like connections to a typical database, and kept around approximately forever. Then they wouldn't have spammed memcached with so many connections in the first place...

(edit: ah, it looks like they do use a pool, but perhaps the pool does not have a bounded upper size, which is its own kind of fail.)


Replies

slopinthebag · today at 8:17 PM

Rust's async doesn't have this issue. Or at least, it's the same issue as calling malloc in an unbounded loop, which is a more general problem not specific to async or threading.

15-20 thousand futures would be trivial. 15-20 thousand goroutines, definitely not.
