Yes, but not exactly.
- A lot of people suggest llama-server's web UI, but that requires you to use local AI (llama.cpp), it persists chats in your browser rather than on the server (so you can lose them), and it doesn't support much functionality.
- There are some pure-browser chat interfaces that are like llama-server's UI but let you use remote LLMs. This is closer to what you want, but everything is stored in the browser, so backup is harder.
- There's LocalAI, which is like the llama-server option but with more built in, and it persists data to disk. It's flashy and very easy if all you want to do is local AI.
- There's LM Studio, which is similar to LocalAI but is a desktop app.
- There's OpenWebUI, which is like LocalAI except you use remote LLMs instead of local inference. Honestly, it sucks: it just stops working a lot of the time, the UX is terrible, and there are lots of weird bugs.
- There's OpenHands, which is more like the Codex/Claude Code web UI. You run it locally and connect to remote LLMs. Kinda clunky, limited, and poorly designed. Like most coding agents, it doesn't support all the features you'd want the way LocalAI/OpenWebUI do.
- There's OpenCode's web UI, which is like OpenHands but less crappy.
- There's Jan, which is probably what you want. It's a desktop app rather than a web UI.
I started using https://github.com/milisp/codexia/ (a desktop app or a web server) that wraps your regular codex-cli or Claude Code CLI, so you can see Codex/Claude threads in a web UI and access them remotely. I love it because you can work from the web UI or the terminal, and all conversations are preserved.
Unfortunately it's pretty buggy, so I'm maintaining a fork with bugfixes and a few extra features to match my personal needs.