Hacker News

lucb1e | today at 1:32 PM | 8 replies

The AWS console has similar RAM consumption. When I need to open more than one AWS tab in the work VM, I close Signal first to make sure it doesn't crash and corrupt the message history. After clicking through a few pages, one AWS tab was something like 1.4GB (edit: found it in message history; yes, it was "20% of 7GB" = 1.4GB precisely).

Does anyone else have the feeling they run into this sort of thing more often of late? Simple pages with just text on them that take gigabytes (AWS), or pages that look simple but take everything your browser has to render them at what looks like 22 fps? (Reddit's new UI and various blogs I've come across.) Or the page runs smoothly but your CPU lifts off while the tab is in the foreground? (e.g. DeepL's translator)

Every time, I wonder whether they had an LLM try to get some new feature or bugfix to work and it made poor choices performance-wise, but the unit tests pass, so the LLM considers it done, and it also looks good visually on their epic developer machines.


Replies

r_lee | today at 1:37 PM

I think a big problem is that many web frameworks let you write these kinds of complex apps that just "work", but performance is often not part of the equation

so it looks fine during basic testing but scales really badly.

like for example the Claude/OpenAI web UIs: at first they would literally lag badly because they used a simple update mechanism that re-rendered the entire conversation history every time the response text was updated

and with those console UIs, one thing that might be happening is that they're basically multiple webapps layered on top of each other (per team/component/product), all loading the same stuff multiple times etc...
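The full-history re-render pattern described above can be sketched without any framework. This is a hypothetical illustration (`buildNode`, the message count, and the token loop are all made up, not anyone's actual code); the point is just the cost gap between rebuilding every message per streamed token and patching only the message that changed.

```javascript
// Hypothetical sketch: `buildCalls` counts how many message nodes
// get (re)built per streamed token under each strategy.
let buildCalls = 0;
const buildNode = (msg) => { buildCalls++; return { text: msg }; };

// Naive: re-render the whole history on every token update.
function renderAllNaive(history) {
  return history.map(buildNode);
}

// Incremental: keep the prior nodes, rebuild only the last message.
function renderIncremental(history, prevNodes) {
  const nodes = prevNodes.slice(0, history.length - 1);
  nodes.push(buildNode(history[history.length - 1]));
  return nodes;
}

// Simulate streaming 100 tokens into the last of 50 messages.
const history = Array.from({ length: 50 }, (_, i) => `msg ${i}`);

buildCalls = 0;
for (let t = 0; t < 100; t++) {
  history[49] = `msg 49 + token ${t}`;
  renderAllNaive(history);
}
const naiveCost = buildCalls; // 50 messages x 100 tokens = 5000 builds

buildCalls = 0;
let nodes = history.map((m) => ({ text: m }));
for (let t = 0; t < 100; t++) {
  history[49] = `msg 49 + token ${t}`;
  nodes = renderIncremental(history, nodes);
}
const incrementalCost = buildCalls; // 1 build per token = 100 builds
```

In a real UI, memoized components (e.g. one `React.memo`-wrapped component per message) get you the incremental behavior without hand-rolling the diffing.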

christophilus | today at 5:58 PM

I was researching laptops at BestBuy and every page took ages to load, was choppy when scrolling, caused my iPhone 13 mini to get uncomfortably hot in my hand and drained my battery fast. It wouldn’t be noticeably different if they were crypto-mining on my iPhone as I browsed their inventory.

It’s astonishing how bad the experience was.

RunSet | today at 2:58 PM

> Does anyone else have the feeling they run into this sort of thing more often of late? Simple pages with just text on them that take gigabytes (AWS), or pages that look simple but take everything your browser has to render them at what looks like 22 fps?

It has to do with websites essentially baking their own browser, written in JavaScript, into the page to track as much user behavior as possible.

maccard | today at 1:45 PM

My company started using Slack in 2015, and at that time I filed a bug report with Slack that their desktop app was using more memory than my IDE on a 1M+ LOC C++ project. I used to stop Slack to compile…

m132 | today at 1:59 PM

I noticed that there's a developing trend of "who manages to use the most CSS filters" among web developers, and it was there even before LLMs. Now that most of the web is slop in one form or another, and LLMs seem to have been trained on the worst of the worst, every other website uses an obscene amount of CSS backdrop-filter blur, which slows down software renderers and systems with older GPUs to a crawl.

When it comes to DeepL specifically, I once opened their main page and left my laptop for an hour, only to come back to it being steaming hot. It turns out there's a video near the bottom of the page (the "DeepL AI Labs" section) that got stuck in a seeking state, repeatedly triggering a pile of Next.js/React crap that would seek the video back, firing the seeking event and thus triggering itself all over again.
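The feedback loop described above is easy to reproduce in miniature. `FakeVideo` below is a hypothetical stand-in for an HTMLVideoElement (not DeepL's actual code): setting `currentTime` fires the seeking handler synchronously, and a handler that itself seeks re-triggers the event indefinitely unless it guards against its own writes.

```javascript
// Minimal stand-in for an HTMLVideoElement: assigning currentTime
// fires the 'seeking' handler synchronously, like the real event.
// Capped at 100 events so the buggy version terminates in this sketch.
class FakeVideo {
  constructor() { this.onseeking = null; this._t = 0; this.seekEvents = 0; }
  set currentTime(t) {
    this._t = t;
    this.seekEvents++;
    if (this.onseeking && this.seekEvents < 100) this.onseeking();
  }
  get currentTime() { return this._t; }
}

// Buggy: the handler seeks back, which fires 'seeking' again, forever
// (here it just runs into the cap of 100 events).
const buggy = new FakeVideo();
buggy.onseeking = () => { buggy.currentTime = 0; };
buggy.currentTime = 5;
const buggyEvents = buggy.seekEvents; // hits the cap: 100 events

// Fixed: a guard flag ignores the 'seeking' events we caused ourselves.
const fixed = new FakeVideo();
let selfSeek = false;
fixed.onseeking = () => {
  if (selfSeek) return;       // ignore our own seek
  selfSeek = true;
  fixed.currentTime = 0;      // one corrective seek, no cascade
  selfSeek = false;
};
fixed.currentTime = 5;
const fixedEvents = fixed.seekEvents; // 2: the user seek + one correction
```

The same guard-flag idea applies to any DOM event a handler can itself trigger (scroll, resize observers, input events set programmatically).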

I wish Google would add client-side resource use to Web Vitals and start demoting poorly performing pages. I'm afraid this isn't going to change otherwise: with the first complaints dating back to the mid-2010s, browsers and Electron apps hogging RAM are far from new, and yet web developers have only grown increasingly disconnected from reality.

susupro1 | today at 1:53 PM

Hit this exact wall with desktop wrappers. I was shipping an 800MB Electron binary just to orchestrate a local video processing pipeline.

Moved the backend to Tauri v2 and decoupled the heavy dependencies (like ffmpeg) so the Rust side loads them at launch. The macOS payload dropped to 30MB, and idle RAM settled under 80MB.

Skipping the default Chromium bundle saves an absurd amount of overhead.

IG_Semmelweis | today at 1:38 PM

Yes, it's sometimes extreme. I often wondered if it was my FF browser, but then I'd switch to Opera or Brave and see the same pattern.

It's quite insane.

inaros | today at 1:50 PM

What is this AWS you talk about? :-)
