I like the ubiquitous type inference. It reminds me a bit of ELSA for Emacs Lisp: https://github.com/emacs-elsa/Elsa. In particular, type-aware macros have been on my wishlist forever: there's no good reason I shouldn't be able to write, e.g., an elisp or CL/SBCL compiler macro that specializes an operation based on its inferred type. In normal lisps, it's hard to get at even the declared types.
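To be fair, SBCL specifically will let you scrape declared types through the nonstandard sb-cltl2 contrib. Roughly this kind of sketch (my-abs and %fixnum-abs are made-up names, and it only sees explicit declarations, not anything the compiler inferred):

    (require :sb-cltl2)

    (declaim (inline %fixnum-abs))
    (defun %fixnum-abs (x)
      ;; specialized path: fixnum-only arithmetic
      (declare (type fixnum x) (optimize speed))
      (if (minusp x) (- x) x))

    (defun my-abs (x)
      ;; generic fallback
      (abs x))

    (define-compiler-macro my-abs (&whole form x &environment env)
      ;; Ask the lexical environment for X's declared type (CLTL2 API).
      (if (and (symbolp x)
               (let* ((decls (nth-value 2 (sb-cltl2:variable-information x env)))
                      (ty (cdr (assoc 'type decls))))
                 (and ty (subtypep ty 'fixnum env))))
          `(%fixnum-abs ,x)   ; specialize
          form))              ; decline and leave the call alone

With (declare (type fixnum n)) in scope, a call like (my-abs n) gets rewritten to the specialized path; without the declaration the compiler macro declines. But that's one implementation's contrib API and only covers what you declared by hand, which is exactly why access to inferred types would be such a nice primitive for a new language to expose.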
That said, I wish that part of Loon were less coupled to the allocation model. What made you opt for mandatory manual memory management in an otherwise high-level language? And for effects?
There are two things common in language design that, honestly, strike me as unnecessary:
1. manual allocation and lifetime tracking, and
2. algebraic effects.
On 1: I think we often conflate the benefits of Rust-style mutability-xor-aliasing reference discipline with the benefits of using literal malloc and free. You can achieve the former without requiring the latter, and I think that leads to a nicer language experience.
It's just not true that GC "comes with latency spikes, higher memory usage, and unpredictable pauses" in any meaningful way with modern implementations. If anything, it leads to more consistent latency (no synchronous Drop of huge trees at unpredictable times) and better memory use (because good GCs use compressed pointers and compaction).
On 2: I get algebraic effects for delimited continuations. But lately I've seen people using non-flow-magical effects for everything. If you need to talk to a database, pick a database interface and pass an object implementing the interface to the code that needs it. Effects do basically the same thing, but implicitly.
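Concretely, the explicit version is just something like this (all the names are made up):

    ;; The interface the handler actually needs.
    (defgeneric find-user (db id))

    ;; One implementation; a test fake is just another class with the same method.
    (defclass in-memory-db ()
      ((users :initarg :users :reader users)))

    (defmethod find-user ((db in-memory-db) id)
      (gethash id (users db)))

    ;; The dependency is explicit: it arrives as an argument you can see.
    (defun handle-profile-request (db user-id)
      (let ((user (find-user db user-id)))
        (if user (format nil "hello, ~a" user) "not found")))

An effect handler buys you the same swappability (real DB in production, fake in tests), just routed through whatever handler happens to be in dynamic scope instead of through an argument in the signature.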
I always saw algebraic effects as a more ergonomic alternative to functor/applicative/monad for managing I/O and otherwise impure code. If you aren't particularly concerned with that level of purity, then yeah, it's "just" an indirect way to write an interface.