Hacker News

jxmesth · yesterday at 6:09 PM · 8 replies

The only reason I'm stuck with Claude and ChatGPT is their tool calling. They also have some pretty useful features like Skills. I've tried using Qwen and DeepSeek, but they can't even output documents. How are you all handling documents and Excel files with these tools? I'd love to switch, tbh.


Replies

embedding-shape · yesterday at 6:22 PM

> I've tried using qwen and deepseek but they can't even output documents

What agent harness did you use? Usually "write_file", "shell_exec" or similar are two of the first tools you add to an agent harness, after read_file/list_files. If it doesn't have those tools, I'm not sure you could even call it an agent harness in the first place.
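For context, here's a minimal sketch of what such a write_file tool can look like: a JSON-schema declaration in the style most OpenAI-compatible chat APIs accept (which Qwen and DeepSeek endpoints also speak), plus the harness-side dispatch. The names and structure are illustrative, not from any particular framework:

```python
import json
from pathlib import Path

# Tool declaration sent to the model alongside the conversation.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "write_file",
            "description": "Write text content to a file, creating parent directories as needed.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "content": {"type": "string"},
                },
                "required": ["path", "content"],
            },
        },
    },
]

def write_file(path: str, content: str) -> str:
    """Harness-side implementation executed when the model calls the tool."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(content)
    return f"wrote {len(content)} bytes to {p}"

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call (OpenAI-style dict) to its implementation."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    if name == "write_file":
        return write_file(**args)
    raise ValueError(f"unknown tool: {name}")
```

The result string goes back to the model as a tool message, so any model that handles tool calling at all can "output documents" this way, regardless of vendor.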

ecocentrik · yesterday at 6:17 PM

When was the last time you used Qwen models? Their 3.5 and 3.6 models are excellent with tool calling.

sscaryterry · yesterday at 7:06 PM

You can use GLM-5.1 with Claude Code directly. I use ccs, with GLM-5.1 set up as the plan model, but it goes through an API key.

NobleLie · yesterday at 10:52 PM

Yep, the Claude Code CLI does A LOT (which has now been confirmed even further).

ycui1986 · today at 12:27 AM

qwen3.5 and qwen3.6 are both good at tool calling.

zrn900 · today at 9:44 AM

You can just use Cline in VSCode to get most of the tooling you need; it works with all models, including Xiaomi's new MiMo with a 1M-token context window and blazing-fast speed. It's much cheaper than Claude's biggest plan, with much, much more quota.

jwitthuhn · yesterday at 6:17 PM

I've been using qwen-code (the software, not to be confused with Qwen Code the service or Qwen Coder the model), which is a fork of gemini-cli, and tool use with Qwen models, at least, has been great.

estimator7292 · yesterday at 7:50 PM

You can use both codex and Claude CLI with local models. I used codex with Gemma4 and it did pretty well. I did get one weird session where the model got confused and couldn't decide which tools actually existed in its inventory, but usually it could use tools just fine.