Hacker News

krasikra | yesterday at 4:49 PM | 3 replies

Fine-tuned Qwen models run surprisingly well on NVIDIA Jetson hardware. We've deployed several 7B variants for edge AI tasks where latency matters more than raw accuracy – think industrial inspection or retail analytics, where you can't rely on cloud connectivity. The key is that LoRA fine-tuning keeps the model small enough to fit in unified memory while still hitting production-grade inference speeds. The biggest surprise was power efficiency: a Jetson Orin can run continuous inference at under 15 W, while a cloud round-trip burns far more energy at scale.
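For scale, here's a rough memory sketch of why a LoRA-adapted 7B fits in Jetson unified memory. The specific numbers (4-bit base weights, LoRA rank 16 on the four attention projections, 32 layers, hidden size 4096) are my own assumptions for illustration, not from the parent comment:

```python
# Back-of-envelope memory estimate for a LoRA-adapted 7B model.
# All configuration numbers below are illustrative assumptions.

GIB = 1024 ** 3

def base_weights_bytes(n_params: float, bits_per_param: int) -> float:
    """Memory for the frozen base-model weights alone."""
    return n_params * bits_per_param / 8

def lora_params(n_layers: int, hidden: int, rank: int, n_targets: int) -> int:
    """Each targeted hidden x hidden projection gains two low-rank
    factors: (hidden x rank) and (rank x hidden)."""
    return n_layers * n_targets * 2 * hidden * rank

base = base_weights_bytes(7e9, 4)        # 4-bit quantized base weights
adapters = lora_params(32, 4096, 16, 4)  # rank-16 adapters, 4 projections/layer
adapter_bytes = adapters * 2             # adapters kept in fp16

print(f"base weights:  {base / GIB:.2f} GiB")   # ~3.26 GiB
print(f"LoRA adapters: {adapter_bytes / GIB:.3f} GiB")
```

Under these assumptions the adapters add only ~17M parameters (~0.03 GiB), so the footprint is dominated by the quantized base weights and sits comfortably inside even an 8 GB Orin's unified memory, leaving headroom for the KV cache.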


Replies

andai | yesterday at 4:52 PM

Very interesting. Could you give examples of industrial tasks where lower accuracy is acceptable?

w10-1 | yesterday at 9:04 PM

> NVIDIA Jetson hardware ... 15W

7B on 15 W could be any of the Orin line (TOPS in parentheses): Nano (40), NX (100), AGX (275).

Curious whether you've experimented with a larger model on the Thor (2070).

embedding-shape | yesterday at 5:36 PM

> where latency matters more than raw accuracy – think industrial inspection

Huh? Why would industrial inspection, in particular, benefit from lower latency in exchange for accuracy? Sounds a bit backwards, but maybe I'm missing something obvious.
