Hacker News

Horos · yesterday at 9:39 PM · 0 replies

this makes sense for your workload, but might the right primitive be a function of your payload profile and business constraints?

in my case the problem doesn't arise because the control plane and data plane are separated by design: metadata and signals never share a concurrency primitive with chunk writes. the data plane only sees chunks of a similar order of magnitude, so a fixed worker pool doesn't overprovision on small payloads or stall on large ones.

curious whether your control and data plane are mixed on the same path, or whether the variance is purely in the blob sizes themselves.

if it's the latter: I wonder if batching sub-1MB payloads upstream would have given you the same result without changing the concurrency primitive. did you have constraints that made that impractical?