Hacker News

Qwen3.5: Towards Native Multimodal Agents

344 points | by danielhanchen | today at 9:32 AM | 163 comments

Comments

dash2 | today at 1:11 PM

You'll be pleased to know that it chooses "drive the car to the wash" on today's latest embarrassing LLM question.

danielhanchen | today at 9:40 AM

For those interested, I made some MXFP4 GGUFs at https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF and a guide to run them: https://unsloth.ai/docs/models/qwen3.5

simonw | today at 12:58 PM

Pelican is OK, not a good bicycle: https://gist.github.com/simonw/67c754bbc0bc609a6caedee16fef8...

tarruda | today at 1:18 PM

Would love to see a Qwen 3.5 release in the 80-110B range, which would be perfect for 128GB devices. While Qwen3-Next is 80B, it unfortunately doesn't have a vision encoder.

gunalx | today at 12:59 PM

Sad to not see smaller distills of this model being released alongside the flagship. That has historically been why I liked Qwen releases. (Lots of different sizes to pick from on day one.)

bertili | today at 12:36 PM

Last Chinese New Year we would not have predicted a Sonnet 4.5 level model that runs local and fast on a 2026 M5 Max MacBook Pro, but it's now a real possibility.

vessenes | today at 2:41 PM

Great benchmarks. Qwen is a highly capable open model, especially their visual series, so this is great.

Interesting rabbit hole for me - its AI report mentions Fennec (Sonnet 5) releasing Feb 4. I was like "No, I don't think so", then I did a lot of googling and learned that this is a common misperception amongst AI-driven news tools. Looks like there was a leak, rumors, a planned(?) launch date, and... it all adds up to a confident launch summary.

What's interesting about this is I'd missed all the rumors, so we had a sort of useful hallucination. Notable.

azinman2 | today at 4:50 PM

Does anyone else have trouble loading the Qwen blogs? I always get their loading placeholders and nothing ever comes in. I don't know if this is ad blocker related or what... (I've even disabled it, but it still won't load.)

mynti | today at 11:27 AM

Does anyone know what kind of RL environments they are talking about? They mention they used 15k environments. I can think of a couple hundred maybe that make sense to me, but what is filling that large number?

fdefitte | today at 8:15 PM

The "native multimodal agents" framing is interesting. Everyone's focused on benchmark numbers but the real question is whether these models can actually hold context across multi-step tool use without losing the plot. That's where most open models still fall apart imo.

ranguna | today at 3:23 PM

Already on OpenRouter; prices seem quite nice.

https://openrouter.ai/qwen/qwen3.5-plus-02-15

ggcr | today at 9:52 AM

From the HuggingFace model card [1] they state:

> "In particular, Qwen3.5-Plus is the hosted version corresponding to Qwen3.5-397B-A17B with more production features, e.g., 1M context length by default, official built-in tools, and adaptive tool use."

Does anyone know more about this? The OSS version seems to have a 262144 context length; I guess for the 1M they'll ask you to use YaRN?

[1] https://huggingface.co/Qwen/Qwen3.5-397B-A17B
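If YaRN is indeed the mechanism, the arithmetic is straightforward. A minimal sketch, assuming the `rope_scaling` convention documented for earlier Qwen releases (the exact keys for Qwen3.5 are an assumption, not confirmed by the model card):

```python
# Hypothetical sketch: the YaRN scaling factor needed to stretch the
# open-weights model's 262144-token context to the ~1M advertised for
# the hosted Plus version. The rope_scaling dict mirrors the config.json
# entry used by earlier Qwen releases; treat the keys as an assumption.

native_ctx = 262144        # context length in the open-weights config
target_ctx = 1_048_576     # 1M context of the hosted Plus version

rope_scaling = {
    "rope_type": "yarn",
    "factor": target_ctx / native_ctx,  # 4.0: RoPE frequencies scaled 4x
    "original_max_position_embeddings": native_ctx,
}

print(rope_scaling["factor"])  # 4.0
```

So a factor-4 YaRN entry would take the shipped 262k window to exactly 1M positions, which matches the hosted offering's advertised default.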

Alifatisk | today at 12:42 PM

Wow, the Qwen team is pushing out content (models + research + blog posts) at an incredible rate! Looks like omni-modal models are their focus? The benchmarks look intriguing, but I can't stop thinking of the HN comments about Qwen being known for benchmaxing.

sasidhar92 | today at 4:34 PM

Going by the pace, I am more bullish that the capabilities of Opus 4.6 or the latest GPT will be available on a 24GB Mac.

codingbear | today at 6:45 PM

Do they mention the hardware used for training? Last I heard there was a push to use Chinese silicon; no idea how ready it is for that.

Matl | today at 1:58 PM

Is it just me or are the 'open source' models increasingly impractical to run on anything other than massive cloud infra at which point you may as well go with the frontier models from Google, Anthropic, OpenAI etc.?

benbojangles | today at 6:43 PM

I was using Ollama, but Qwen3.5 was unavailable earlier today.

XCSme | today at 5:08 PM

I just started creating my own benchmarks (questions that are very simple for humans but tricky for AI, like the "how many r's in strawberry" kind; still a WIP).

Qwen3.5 is doing ok on my limited tests: https://aibenchy.com
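Questions of this kind are attractive benchmark items precisely because the ground truth is one line of code, even though tokenized LLMs often get them wrong. A minimal sketch of how such answers can be verified automatically (the helper name is illustrative, not from any benchmark):

```python
# Ground truth for letter-counting questions like "how many r's in strawberry".
# LLMs see subword tokens rather than characters and often miscount;
# plain string operations never do.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

Scoring a model is then just comparing its answer against this exact count.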

trebligdivad | today at 12:50 PM

Anyone else getting an automatically downloaded PDF "AI report" when clicking on this link? It's damn annoying!

collinwilkins | today at 4:54 PM

At this point it seems every new model scores within a few points of the others on SWE-bench. The actual differentiators are how well it handles multi-step tool use without losing the plot halfway through, and how well it works with an existing stack.

XCSme | today at 3:28 PM

Let's see what Grok 4.20 looks like. Not open-weight, but so far one of the high-end models at really good rates.

isusmelj | today at 12:32 PM

Is it just me, or is the page barely readable? Lots of text is light grey on a white background. I might have dark mode on, on Chrome + macOS.

ddtaylor | today at 12:54 PM

Does anyone know the SWE bench scores?

Western0 | today at 5:30 PM

Can anyone tell me how to generate sound from text locally?

lollobomb | today at 1:23 PM

Yes, but does it answer questions about Tiananmen Square?
