Hacker News

tmnvix · 05/03/2025 · 3 replies

I've been curious for a while now. One thing that gives me pause though is how Phoenix LiveView apps perform when you're dealing with high latency. I'm aware that many apps will be serving primarily the US market and so might not recognise this as much of an issue. I'm also aware that I could deploy 'at the edge' with something like fly.io. Still, when I run a ping test to 100 different locations around the world from NZ, the majority of results are 300ms+. That seems like it would have a pretty noticeable impact on a user's experience.

TL;DR: Are most Phoenix deployments focused on a local market or deployed 'at the edge', or are people ignoring the potentially janky experience for far-flung users?


Replies

cultofmetatron · 05/03/2025

While it's true that Phoenix LiveView's default is to keep all state on the server, there are hooks for running JavaScript behavior on the frontend for things like optimistic updates and transitions. That gives you plenty of ways to make the frontend feel responsive even when the round trip is 300 ms+.
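As a minimal sketch of how such a hook attaches (hypothetical hook name "LocalToggle" and event "toggle"; the matching JavaScript object would be registered with the LiveSocket and flip the UI immediately in its mounted()/updated() callbacks, with the server's eventual reply reconciling the state):

```elixir
# Server-rendered side only: phx-hook names a client-side JS hook,
# and phx-click still sends the event to the LiveView process.
# Elements with phx-hook must have a DOM id.
def render(assigns) do
  ~H"""
  <button id="fav-btn" phx-hook="LocalToggle" phx-click="toggle">
    <%= if @favorited?, do: "★", else: "☆" %>
  </button>
  """
end
```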

qq99 · 05/04/2025

That's a really good point.

I haven't done a lot of optimistic updates with LiveView yet, and I'm not sure how sanely you could achieve it, because it seems you'd lose the primary benefit: server-side rendering as the single source of truth.

However, there are a few mechanisms for signalling that the page is loading or processing a LiveView event, which helps the user understand the latency; at the very least, make the button show a spinner. I've experienced (in my own apps) the "huh, is the app dead?" factor under latency, which suggests I need to simulate latency more. If the socket is unstable or can't connect, the app is just entirely dead, though the fallback to long polling is satisfactory.
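A minimal sketch of that built-in loading feedback, assuming a hypothetical "save" event:

```elixir
# phx-disable-with swaps the button text while the round trip is in
# flight, and LiveView also adds a "phx-click-loading" CSS class to the
# clicked element, which you can style as a spinner.
~H"""
<button phx-click="save" phx-disable-with="Saving...">Save</button>
"""
```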

I think it would really shine for internal apps due to the sheer velocity and simplicity of developing and deploying it.

In the worst case, you could fall back to using regular controllers or API controllers, so I still see it being a "better version of Ruby" overall. However, if we're going back to that model, I would rather use SolidStart and do it all in TypeScript anyway.

At the end of the day, I'm very torn between the resilience/ease/speed of Elixir and the much better type system in TS. The ability to just async something and know it will work is kind of crazy for improving the performance of apps (check out assign_async).
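For context, assign_async is part of Phoenix LiveView; a sketch of the usual pattern (module, assign, and fetch function names here are hypothetical):

```elixir
defmodule MyAppWeb.OrgLive do
  use MyAppWeb, :live_view

  # assign_async runs the fetch in a separate task, so mount returns
  # and the page renders immediately; the assign is filled in (or
  # marked failed) when the task finishes.
  def mount(%{"slug" => slug}, _session, socket) do
    {:ok,
     socket
     |> assign(:slug, slug)
     |> assign_async(:org, fn -> {:ok, %{org: fetch_org!(slug)}} end)}
  end

  def render(assigns) do
    ~H"""
    <.async_result :let={org} assign={@org}>
      <:loading>Loading organization...</:loading>
      <:failed :let={_reason}>There was an error.</:failed>
      <%= org.name %>
    </.async_result>
    """
  end

  defp fetch_org!(slug) do
    # hypothetical: whatever slow DB or API call you need
    MyApp.Orgs.get_by_slug!(slug)
  end
end
```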

> the majority of results are 300ms+

Another thing to consider is that a lot of apps (SPAs powered by APIs) take 300–1000 ms to even return a JSON response these days. So if you can get by with making a button spin while you await the LiveView response (or are content with topbar.js), I think you can get roughly the same experience.

> deployed 'at the edge'

The nice part of Elixir is that you could probably build a global cluster quite easily (I've never done it, though), with app nodes close to users. You'd still need a way to keep database access fast, however, since the DB probably lives in a single region.
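Clustering Elixir nodes is commonly done with the third-party libcluster library (my addition, not something the comment names); a sketch of its config using the Gossip strategy, which discovers peers via UDP multicast on a shared network:

```elixir
# config/runtime.exs (sketch; add {:libcluster, "~> 3.3"} to deps and
# start Cluster.Supervisor with these topologies in your supervision tree)
config :libcluster,
  topologies: [
    my_cluster: [
      strategy: Cluster.Strategy.Gossip
    ]
  ]
```

For a geo-distributed deployment (e.g. Fly.io) you'd swap in a DNS-based discovery strategy, but the application code stays the same: once nodes are connected, Phoenix PubSub and process messaging work across the cluster.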

mike1o1 · 05/04/2025

Yes, unfortunately that is the big weakness of LiveView. It also suffers from what I call the elevator problem: LiveView apps are unusable over unstable connections and flat out stop working in an elevator or other areas with spotty coverage.

However, Elixir and Phoenix are more than just LiveView! There's also an Inertia plugin for Phoenix, and Ecto as an "ORM" is fantastic.
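To illustrate the Ecto point, a sketch of a composable query (the User schema and Repo module are hypothetical; Ecto builds queries as data rather than hiding SQL behind object state):

```elixir
import Ecto.Query

# Ten most recently inserted active users.
query =
  from u in User,
    where: u.active == true,
    order_by: [desc: u.inserted_at],
    limit: 10

Repo.all(query)
```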