Hacker News

What async promised and what it delivered

210 points by zdw last Wednesday at 5:28 AM | 234 comments

Comments

kibwen yesterday at 8:48 PM

> Language designers who studied the async/await experience in other ecosystems concluded that the costs of function coloring outweigh the benefits and chose different paths.

Not really. The author provides Go as evidence, but Go's CSP-based approach far predates the popularity of async/await. Meanwhile, Zig's approach still has function coloring; it's just that one color is "I/O function" and the other is "non-I/O function". And this isn't a problem! Function coloring is fine in many contexts, especially in languages that seek to give the user low-level control! I feel like I'm taking crazy pills every time people harp on function coloring as though it were something deplorable. It's just a bad way of talking about effect systems, which are extremely useful. And sure, if you want a high-level managed language like Go with an intrusive runtime, then you can build an abstraction that dynamically papers over the difference at some runtime cost. This is probably the uniformly correct choice for high-level languages, like dynamic or scripting languages, although it must be said that Go's approach to concurrency in general leaves much to be desired (I'm begging people to learn about structured concurrency).

shortercode yesterday at 10:15 PM

Having lived through the changes from callback hell to early promises and then async/await, I only ever found each step an improvement, and the negatives are very minor when actually working with them.

Now function colouring is interesting, but not for the reasons these articles get excited about. Recolouring is easy and has basically no impact on code maintenance. BUT if you need that code path to really fly, then marking it as async is a killer, as all those tiny little promises add tiny delays in the form of many tasks, which add up to performance problems on hot code paths. This is particularly frustrating for functions that are sometimes async, like lazy loaders or similar cache things. To get around this you can either use callbacks instead or use selective promise chaining, so you only go through a promise when you actually got a promise. Both strategies can be messy and trip up people who don't understand these careful design decisions.
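A minimal sketch of that "selective promise chaining" idea (all names here are hypothetical, for illustration only): a sometimes-async loader that takes a synchronous fast path when the value is cached, and creates a promise only on a miss, so the hot path never pays the microtask cost.

```javascript
// A sometimes-async cache lookup: synchronous fast path when cached,
// promise only on a miss. Names are illustrative.
const cache = new Map();

function loadValue(key, fetcher) {
  if (cache.has(key)) return cache.get(key); // no promise, no extra task
  return Promise.resolve(fetcher(key)).then((v) => {
    cache.set(key, v);
    return v;
  });
}

// Callers chain only when they actually received a promise.
function withValue(key, fetcher, cb) {
  const v = loadValue(key, fetcher);
  if (v instanceof Promise) v.then(cb); // slow path: wait for the fetch
  else cb(v);                           // fast path: stays synchronous
}
```

As the comment says, this is exactly the kind of careful design decision that trips up readers who expect `withValue` to behave uniformly.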

One other fun thing is that IndexedDB plays terribly with promises, as it uses a "transactions close at end of task" mechanism, making certain common patterns impossible with promises due to how they behave with the task system. Although some API designers have come up with ways around this to give you promise interfaces for databases, normally by using callbacks internally and only doing one operation per transaction.

dcan yesterday at 7:15 PM

I will agree - async Rust on an operating system isn't all that impressive - it's a lot easier to just have well-defined tasks and manually spawn threads to do the work.

However, in embedded rust async functions are amazing! Combine it with a scheduler like rtic or embassy, and now hardware abstractions are completely taken care of. Serial port? Just two layers of abstraction and you have a DMA system that shoves bytes out UART as fast as you can create them. And your terminal thread will only occupy as much time as it needs to generate the bytes and spit them out, no spin locking or waiting for a status register to report ready.

SebastianKra yesterday at 7:00 PM

The discussion around async await always focuses on asynchronous use-cases, but I see the biggest benefits when writing synchronous code. In JS, not having await in front of a statement means that nothing will interfere with your computation. This simplifies access to shared state without race conditions.
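That guarantee can be made concrete with a toy example (not from the article): in JS, code between two awaits runs to completion without interleaving, so a check-then-update on shared state is only racy if an await sits between the check and the update.

```javascript
let balance = 100;

async function withdraw(amount) {
  // No await between check and update: atomic w.r.t. the event loop.
  if (balance >= amount) {
    balance -= amount;
  }
}

async function withdrawRacy(amount) {
  if (balance >= amount) {
    await Promise.resolve(); // yields: another withdrawal can run here
    balance -= amount;
  }
}
```

Two concurrent `withdraw(100)` calls leave the balance at 0, while two concurrent `withdrawRacy(100)` calls drive it to -100, because both checks pass before either update runs.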

The other advantage is a rough classification in the type system. Not marking a function as async means that the author believes it can be run in a reasonable amount of time and is safe to run, e.g., on a UI main thread. In that sense, the propagation through the call hierarchy is a feature, not a bug.

I can see that maintaining multiple versions of a function is annoying for library authors, but on the other hand, functions like fs.readSync shouldn’t even exist. Other code could be running on this thread, so it's not acceptable to just freeze it arbitrarily.

spacechild1 today at 10:47 AM

Unfortunately, asynchronous programming is almost always discussed in terms of network I/O. However, there are many more use cases for concurrency. Coroutines can be extremely useful for modelling state machines or any kind of process that happens over time. Every time you need to do X, then wait for N (milli)seconds, then do Y, etc., coroutines provide a very ergonomic solution. If your language supports stackful coroutines (e.g. Lua or Ruby), you don't even need to color your functions: you can just write regular functions that yield back to the scheduler anywhere in the call stack.

To give a concrete example: computer music languages, such as SuperCollider, need concurrency to implement musical scheduling. Imagine a musical sequence where you play a note, wait N beats, play another note, etc. Often you want to play many such sequences simultaneously. Stackful coroutines provide a very elegant solution to this problem. Every independent musical sequence can be modelled by a coroutine that yields every time it needs to wait. The yielded value is interpreted by the scheduler as a delta time after which the function should be resumed. In this sense, SuperCollider users have been doing async programming since the early 2000s, long before it became mainstream.
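The yield-a-delta-time pattern described above can be sketched in JS with generators (which are stackless, unlike Lua's coroutines, but enough to show the idea; all names are illustrative):

```javascript
// Each "musical sequence" is a generator that yields the number of
// time units to wait before it should be resumed.
function* melody(events, notes, beat) {
  for (const note of notes) {
    events.push(note); // "play" the note
    yield beat;        // wait `beat` time units before the next one
  }
}

// Cooperative scheduler over virtual time: always resume whichever
// coroutine has the earliest wake-up time.
function run(coroutines) {
  const queue = coroutines.map((g) => ({ time: 0, g }));
  while (queue.length) {
    queue.sort((a, b) => a.time - b.time); // stable sort in modern JS
    const next = queue.shift();
    const { value, done } = next.g.next();
    if (!done) queue.push({ time: next.time + value, g: next.g });
  }
}

const events = [];
run([
  melody(events, ['C', 'E', 'G'], 2), // resumed at t = 0, 2, 4
  melody(events, ['c', 'e'], 3),      // resumed at t = 0, 3
]);
// events is now ['C', 'c', 'E', 'e', 'G']
```

The two "sequences" interleave by wake-up time, exactly as independent musical lines would.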

jemfinch today at 3:01 AM

> OS threads are expensive: an operating system thread typically reserves a megabyte of stack space and takes roughly a millisecond to create.

It's typically less than a hundred kilobytes and (on the systems I've benchmarked using std::thread) it takes 60usec (wall time in userspace) to create and destroy a thread.

Threads have gotten so fast that paying the async function coloring price makes very little sense for most software.

ibraheemdev yesterday at 8:24 PM

> OS threads are expensive: an operating system thread typically reserves a megabyte of stack space

Why is reserving a megabyte of stack space "expensive"?

> and takes roughly a millisecond to create

I'm not sure where this number is from, it seems off by a few orders of magnitude. On Linux, thread creation is closer to 10 microseconds.

mbid yesterday at 6:32 PM

How many systems are there that can't just spawn a thread for each task they have to work on concurrently? This has to be a system that is A) CPU or memory bound (since async doesn't make disk or network IO faster) and B) must work on ~tens of thousands of tasks concurrently, i.e. can't just queue up tasks and work on only a small number concurrently. The only meaningful examples I can come up with are load balancers, embedded software and perhaps something like browsers. But e.g. an application server implementing a REST API that needs to talk to a database anyway to answer each request doesn't really qualify, since the database connection and the work the database itself does are likely much more resource intensive than the overhead of a thread.

joelwilliamson last Wednesday at 10:06 AM

Function colouring, deadlocks, silent exception swallowing, &c aren’t introduced by the higher levels, they are present in the earlier techniques too.

foreman_ today at 5:56 AM

The thread treats async/await as one design pattern. It hasn’t aged the same way across languages.

C# async/await on top of the TPL is doing different work from JavaScript’s promise model. The C# version composes with cancellation tokens, structured exception handling, and a real thread pool underneath. JavaScript coloured the language because it had to: single-threaded runtime, no alternative. Rust’s async is closer to C++’s coroutines: a state machine at the call site, no runtime by default, executors as libraries. Three different things wearing the same syntax.

C++ shipped without async for thirty years and got along on thread pools and condition variables. The async crowd is right that thread-per-connection breaks at c10k. The thread-per-connection crowd is right that almost nobody is at c10k. Both are right about different problems.

The question that matters isn’t whether async was a mistake. It’s whether each language imported the cost (function colouring, runtime overhead, debugger pain) for the workload it actually has. JavaScript had to. C# had a strong case. Rust’s case for embedded is excellent. Python’s case is the most contested of the four and the one that gets defended hardest.

rstuart4133 last Wednesday at 11:26 PM

Async is a JavaScript hack that inexplicably got ported to other languages that didn't need it.

The issue arose because JavaScript didn't have threads, and processing events from the DOM is naturally event driven. To be fair, it's a rare person who can deal with the concurrency issues threads introduce, but the separate stacks threads provide are a huge boon. They allow you to turn event-driven code into sequential code.

    window.on_keydown(foo);

    // Somewhere far away
    function foo(char_event) { process_the_character(char_event.key_pressed) };
becomes:

    while (char = read())
        process_the_character(char);
The latter is an easy-to-read, linear sequence of code that keeps all the concerns in one place; the former rapidly becomes a huge entangled mess of event-processing functions.

The history of JavaScript described in the article is just a series of attempts to replace the horror of event-driven code with something that looks like the sequential code found in a normal program. At any step in that sequence, the language could have introduced green threads and the job would have been done. And it would have been done without new syntax and without function colouring. But if you keep refining the original hacks they were using in the early days and don't take the somewhat drastic step of introducing a new concept to solve the problem (separate stacks), you end up where they did: at async and await. Mind you, async and await do create a separate stack of sorts, but it's implemented as a chain of objects allocated on the heap instead of the much more efficient stack structure.

I can see how the JavaScript community fell into that trap - it's the boiling frog scenario. But Python? Python already had threads, and had the examples of Go and Erlang to show how well they worked compared to async/await. And as for Rust - that's beyond inexplicable. Rust had green threads in its early days and abandoned them in favour of async/await. Granted, the original green thread implementation needed a bit of refinement - making every low-level call choose between event-driven and blocking on every invocation was a mistake. Rust now has a green thread implementation that fixes that mistake, which demonstrates it wasn't that hard to do. Yet they didn't do it at the time.

It sounds like Zig with its pluggable I/O interface finally got it right: I/O is a dependency injected at compile time. No "coloured" async keywords, and the compiler monomorphises the right code. Every library using I/O only has to be written once - what a novel concept! It's a pity it didn't happen in Rust.

mkj today at 12:48 AM

> Tokio’s dominance is function coloring at ecosystem scale

That isn't function colouring, but rather plain incompatible APIs/runtime. You could have the equivalent with non-async ecosystems.

andrewstuart last Wednesday at 9:33 AM

I like async and await.

I understand that some devs don’t want to learn async programming. It’s unintuitive and hard to learn.

On the other hand I feel like saying “go bloody learn async, it’s awesome and massively rewarding”.

jayd16 today at 12:32 AM

They get their sequential trap example wrong.

You can call async methods without immediately awaiting them. You can naively await as late as possible. They'll run in parallel, or at least however the call was configured.
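In JS terms (the comment is language-agnostic, but the same holds for promises), kicking both tasks off before awaiting lets their delays overlap instead of adding up. A toy sketch, with illustrative names:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function slowTask(label, ms, log) {
  log.push(`start ${label}`);
  await sleep(ms);
  log.push(`end ${label}`);
  return label;
}

// Start both tasks first, await afterwards: both are already running
// by the time the first await suspends.
async function eager(log) {
  const pa = slowTask('a', 20, log);
  const pb = slowTask('b', 20, log);
  return [await pa, await pb];
}
```

With the sequential-trap version (`await slowTask('a', ...)` before even calling `slowTask('b', ...)`), the log would read start a, end a, start b, end b; the eager version interleaves them.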

Waterluvian today at 1:39 AM

I’m not really smart on this subject, but I started during callback hell and now use async in Node and on the front-end, and I find it to be just superb. Sometimes I have to reason about queued tasks vs. microtasks and all that, but most of the time it just does what I expect and keeps the code very clean.

time4tea yesterday at 7:29 PM

No mention of the JVM, which is a bit odd, as recently it kinda solved this problem. Sure, not all use cases, but a lot.

It uses an N:M threading model, where N virtual threads are mapped to M system threads, and it's all hidden away from you.

All the other languages just leak their abstractions to you; Java quietly doesn't.

Sure, Java is a kinda ugly language; you can use a different JVM language, all good.

Don't get me wrong, I love Python, Rust, Dart etc., but the JVM is nice for this.

cdaringe last Wednesday at 5:41 AM

Surely by section 7 we'll be talking (or have talked) about effect systems.

oconnor663 yesterday at 9:38 PM

> async/await introduced entirely new categories of bugs that threads don’t have. O’Connor documents a class of async Rust deadlocks he calls “futurelocks”

I didn't coin that term, the Oxide folks did: https://rfd.shared.oxide.computer/rfd/0609. I want to emphasize that I don't think futurelocks represent a "fundamental mistake" or anything like that in Rust's async model. Instead, I believe they can be fixed reliably with a combination of some new lint rules and some replacement helper functions and macros that play nicely with the lints. The one part of async Rust that I think will need somewhat painful changes is Stream/AsyncIterator (https://github.com/rust-lang/rust/issues/79024#issuecomment-...), but those aren't yet stable, so hopefully some transition pain is tolerable there.

> The pattern scales poorly beyond small examples. In a real application with dozens of async calls, determining which operations are independent and can be parallelized requires the programmer to manually analyze dependencies and restructure the code accordingly.

I think Rust is in an interesting position here. On the one hand, running things concurrently absolutely does take deliberate effort on the programmer's part. (As it does with threads or goroutines.) But on the other hand, we have the borrow checker and its strict aliasing rules watching our back when we do choose to put in that effort. Writing any sort of Rust program comes with cognitive overhead to keep the aliasing and mutation details straight. But since we pay that overhead either way (for better or worse), the additional complexity of making things parallel or concurrent is actually a lot less.

> At the function level, adding a single i/o call to a previously synchronous function changes its signature, its return type, and its calling convention. Every caller must be updated, and their callers must be updated.

This is part of the original function coloring story in JS ("you can only call a red function from within another red function") that I think gets over-applied to other languages. You absolutely can call an async function from a regular function in Rust, by spinning up a runtime and using `block_on` or similar. You can also call a regular function from an async function by using `spawn_blocking` or similar. It's not wonderful style to cross back and forth across that boundary all the time, and it's not free either. (Tokio can also get mad at you if you nest runtimes within one another on the same thread.) But in general you don't need to refactor your whole codebase the first time you run into a mismatch here.

miiiiiike today at 3:04 AM

JavaScript developers don't like hearing this but RxJS solves, or gives you the tools to solve, most of these problems.

paulddraper last Wednesday at 10:38 AM

> This was bad enough that Node.js eventually changed unhandled rejections from a warning to a process crash, and browsers added unhandledrejection events. A feature designed to improve error handling managed to create an entirely new class of silent failures that didn’t exist with callbacks.

Java has this too.
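The failure mode under discussion, in miniature (Node example; since v15, without a listener installed, Node exits the process instead of just warning):

```javascript
// A rejected promise with no .catch() and no await is an "unhandled
// rejection". Node surfaces it via this event; installing a listener
// replaces the default crash-on-unhandled behaviour.
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason.message);
});

function fireAndForget() {
  // No handler is ever attached, so the error vanishes silently
  // until the runtime notices at the end of the turn.
  Promise.reject(new Error('oops'));
}

fireAndForget();
```

In the callback world the equivalent mistake (ignoring the `err` argument) was at least visible at the call site; here nothing in `fireAndForget`'s signature hints that an error path exists.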

nrds last Wednesday at 4:53 PM

Zig is just doing vtable-based effect programming. This is the way to go for far more than async, but it also needs aggressive compiler optimization to avoid actual runtime dispatch.

mirekrusin yesterday at 8:24 PM

No mention of Ruby, which is colorless.

fl0ki yesterday at 6:37 PM

Async ruined Rust for me, even though I write exactly the kind of highly concurrent servers to which it's supposed to be perfectly suited. It degrades API surfaces to the worst case, Send + Sync + 'static, because APIs have to be prepared to run on multithreaded executors, and this infects your other Rust types and APIs because each of these async edges is effectively a black hole for the borrow checker.

Don't get me started on how you need to move "blocking" work to separate thread pools, including any work that has the potential to take some CPU time, not even necessarily IO. I get it, but it's another significant papercut, and your tail latency can be destroyed if you missed even one CPU-bound algorithm.

These may have been the right choices for Rust specifically, but they impair quality of life way too much in the course of normal work. A few years ago, I had hope this would all trend down, but instead it seems to have asymptoted to a miserable plateau.

kmeisthax today at 4:42 AM

> This is a promise (JavaScript) or future (Java, Rust, etc). The concept dates to Baker and Hewitt in 1977, but it took the C10K pressure of the 2010s to push it into mainstream programming.

Almost. JavaScript adopted async because it was a programming language designed to slot into someone else's event loop. Other programming languages, at least on the server, that needed lightweight threading didn't bother with any of this; they just shipped their own managed stacks. But UI code practically demands to own its event loop and requires that everything else live as callbacks inside of it. And JavaScript, because it was designed to live in a browser, inherited those same semantics.

wesselbindt last Wednesday at 5:25 PM

I would really hate to work with a blue/red function system. I would have to label all my functions and get nothing in return. But, labelling my functions with some useful information that I care about, that can tell me interesting things about the function without me having to read the function itself and all the functions that it calls, I'd consider a win. I happen to care about whether my functions do IO or not, so the async label has been nothing short of a blessing.

shmerl today at 3:48 AM

So what is the next step in solving it that's better than the previous ones?

pyinstallwoes today at 1:41 AM

Erlang is a beautiful example of not having to deal with function coloring/creep. Any other language?

twoodfin yesterday at 10:47 PM

How did this article get back on the front page with all its comments time-shifted?

My trite slop bashing was days ago:

https://news.ycombinator.com/item?id=47862726

jen20 today at 1:08 AM

It seems unfair to spend so much time in this article talking about JavaScript and Java without mentioning that async/await first appeared in .NET, and _broadly speaking_ works pretty well there.

coolThingsFirst yesterday at 11:29 PM

Promises delivered exactly what they promised.

threethirtytwo yesterday at 9:12 PM

This was a hardware- and OS-level problem first. All of that had to be solved before higher-level abstractions in languages like Go and JavaScript could tackle it. The author skipped this entirely.

FpUser today at 3:29 AM

In real life, when a request handler calls something async/colored/whatnot, it lets the call proceed and is immediately ready to process the next request. The backend then has no problem creating an ever-growing number of asyncs in flight. In real life those asyncs would most likely end up calling a database. The end result is that the backend simply overwhelms the database and the other resources that have to maintain the state of those countless asyncs in flight.

This whole thing is basically snake oil. The best thing a backend can do instead is have a dedicated thread pool where each real thread has its own queue of limited size. Each element in the queue would contain the input and output state of a request and the code to deal with it. Once a queue grows over a certain size, the backend should simply return an error code (too busy) immediately. A much more sound strategy in my opinion.

There are more complex cases of course (like computationally expensive requests with no I/O that take a long time). Handling those would require some extra logic. Async stuff, however, will not help there either.
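A sketch of that strategy, shrunk to a single worker (names are illustrative; a real backend would run a pool of these, one per real thread): a bounded queue that sheds load with "too busy" when full, rather than admitting an unbounded number of in-flight requests.

```javascript
// One worker with a bounded queue. submit() rejects immediately when
// the queue is full, so load is shed at admission time.
class BoundedWorker {
  constructor(limit) {
    this.limit = limit;
    this.queue = [];
    this.running = false;
  }

  submit(job) {
    if (this.queue.length >= this.limit) {
      return Promise.reject(new Error('too busy')); // shed load early
    }
    return new Promise((resolve, reject) => {
      this.queue.push({ job, resolve, reject });
      this.drain();
    });
  }

  async drain() {
    if (this.running) return; // one request in flight at a time
    this.running = true;
    while (this.queue.length) {
      const { job, resolve, reject } = this.queue.shift();
      try { resolve(await job()); } catch (e) { reject(e); }
    }
    this.running = false;
  }
}
```

The key property is that rejection happens synchronously at submit time, before any state is accumulated for the request.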

worik today at 12:02 AM

I would point out two other shortcomings in the async/await paradigm:

1. It makes asynchronous programming look synchronous. I do not like things being other than they appear. The point was touched on with the:

        getOrders(user.id),
        getRecommendations(user.id)
example, but it is a serious thing when the mental model is wrong.

2. On a related issue, CPU-bound code can block the thread of execution and stop any concurrency in its tracks.

In Rust there is the added problem of shoehorning it into the memory model, which has led to a lot of hairy code and tortured paradigms (e.g. Pin).

worik yesterday at 11:55 PM

There is a small "straw man" bias here. Callbacks are not the only alternative to Promises. There exist state machines and event loops too.

I play around with real-time audio, and use a state machine/event loop. A very powerful, if verbose, method for real-time programming. I cannot see how async/await could achieve the same ends.
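For readers who haven't seen the pattern, a minimal event-driven state machine looks something like this (toy transport-control example, not actual audio code; all names are made up):

```javascript
// Transitions are plain data; the event loop just feeds events in.
// Unknown events leave the state unchanged.
const machine = {
  state: 'idle',
  transitions: {
    idle:    { play: 'playing' },
    playing: { pause: 'paused', stop: 'idle' },
    paused:  { play: 'playing', stop: 'idle' },
  },
  dispatch(event) {
    const next = this.transitions[this.state][event];
    if (next) this.state = next;
    return this.state;
  },
};
```

Because the transition table is explicit data rather than implicit suspension points, every legal state change is visible at a glance, which is part of what makes the approach attractive for real-time work.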

teaearlgraycold yesterday at 8:43 PM

Not a fan of async in other languages (I avoid it in Rust and Python like the plague), but it feels like a straight upgrade in JS. I’ve never once regretted its addition. In my experience it’s extremely rare for things to get more complicated than an await followed by a Promise.all().

Unhandled rejections are super obvious to a human, as performing a .then() chain is uncommon in the days of await. And linters will pick it up if you miss it. Function coloring isn’t an issue, as all of the Node stdlib that I’ve seen provides async functionality (back in the day you could accidentally call a synchronous file system operation and break the event loop).

You end up with everything returning a promise except for some business logic at the leaves of the dependency graph. A Node app is mostly I/O anyway, thus the functions mostly return Promises. The await keyword is homomorphic across promises and other values. And type checking (who isn’t using TypeScript?) will catch most API changes where something becomes async. I can’t say it’s perfect, but it’s really not a problem for me.


holybbbb yesterday at 8:31 PM

No mention of Novell Netware. This was a solved problem decades ago, and Windows has had it for almost as long.

The next decade will see a proliferation of hackers having fun with io_uring, coming up with all sorts of patterns.

littlestymaar last Wednesday at 2:40 PM

Because all HN needed was another piece of AI slop incorrectly quoting “what color is your function”…

It's 2026 and I'm starting to hate the internet.

bironran yesterday at 7:01 PM

It’s slop, alright. But it also missed the next mainstream iteration, which is Java virtual threads / goroutines. Those do away with coloring by attacking the root of the problem: that OS threads are expensive.

Sure, it comes with its own issues, like large stacks (10k copies of nearly the same stack?), and I predict memory coloring in the future (stack variables or even whole frames that can opt out of being copied across virtual threads).