
grim_io yesterday at 9:46 PM

What do you mean it's on Ollama and requires an H100? As a proprietary Google model, it runs on their own hardware, not Nvidia's.


Replies

KaiserPro yesterday at 10:03 PM

Sorry, a lack of context:

https://ollama.com/library/gemini-3-pro-preview

You can run it on your own infra. Anthropic and OpenAI are running off Nvidia, as are Meta (well, supposedly they had custom silicon; I'm not sure if it's capable of running big models) and Mistral.
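For what that looks like in practice, here's a minimal sketch using the Ollama Python client and the model tag from the link above. Whether that particular tag actually serves weights on your own hardware (rather than proxying to a hosted endpoint) is an assumption on my part:

    # Minimal sketch: chat with a model served by a local Ollama instance.
    # Assumes `pip install ollama`, a running Ollama server, and that the
    # gemini-3-pro-preview tag (from the link above) is available to it.
    import ollama

    response = ollama.chat(
        model='gemini-3-pro-preview',
        messages=[{'role': 'user', 'content': 'What hardware do you run on?'}],
    )
    print(response['message']['content'])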

However, if Google really is running its own inference hardware, then, as you say, the cost structure is different (developing silicon is not cheap...).
