Hacker News

Havoc · yesterday at 1:31 PM

vllm-metal isn't GPU access but rather an OpenAI-compatible endpoint, which I can already get via an LM Studio endpoint over the network.

>podman libkrun

Haven't tried it, but research suggests it's still really shaky. podman libkrun exposes Vulkan, while torch expects MPS on Macs. Sounds like one can force Vulkan, but that's apparently slow and beta-ish?