Hacker News

tjoff · yesterday at 10:36 AM

Do we even have decent OCR nowadays? Any free solutions?


Replies

Farmadupe · yesterday at 11:04 AM

The latest rounds of open-weights vision language models are incredibly good. Like, massively good. Open-weights vision capabilities trade blows with frontier models. Over the last few months I'd roughly rank capabilities as Gemini -> {ChatGPT and SoTA open-weights models} -> Claude.

qwen3.5-2b and qwen3.5-4b are great at document parsing. They can run on CPU.

qwen3.6-27b and gemma4-31b are borderline better than the human eye in some cases. Their OCR isn't perfect, but they're seriously good. They can still run on the CPU but you'll be waiting minutes per document.

You can demand JSON, YAML, MD, or freeform text just by varying the prompt. Even if you have a custom template, you can just put that in the prompt and they'll do an OK-ish job.
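As a hypothetical sketch of what that prompting looks like: the snippet below builds a request for a local server exposing the OpenAI-compatible chat API (llama.cpp's llama-server, Ollama, and vLLM all offer one). The model name, port, and prompt wording are placeholders; adjust them for your setup.

```python
import base64
import json
import urllib.request


def build_ocr_payload(image_bytes: bytes, out_format: str = "JSON") -> dict:
    """Chat-completion body asking a vision model to transcribe an image."""
    data_uri = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "qwen3.5-4b",  # placeholder: whatever model the server has loaded
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Extract all text from this document as {out_format}."},
                {"type": "image_url", "image_url": {"url": data_uri}},
            ],
        }],
        "temperature": 0,  # deterministic transcription
    }


def ocr(image_path: str, server: str = "http://localhost:8080") -> str:
    """Send one image to the local server and return the model's transcript."""
    with open(image_path, "rb") as f:
        payload = build_ocr_payload(f.read())
    req = urllib.request.Request(
        f"{server}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Swapping the output format really is just a prompt change: `build_ocr_payload(img, "YAML")` or a prompt that pastes in your custom template.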

There are also models that aren't in the r/locallama zeitgeist. IBM released a new 4b-parameter model for structured text extraction last week, and there's a sea of recent Chinese OCR models too.

IMO the open-weights models are so good that in a lot of cases it's not worth paying frontier labs for OCR purposes. The only barrier to entry is the effort to set up a pipeline, and having the spare CPU/GPU capacity.

adrian_b · yesterday at 11:15 AM

Many of the open-weights LLMs accept either text or images as input.

Besides those, there are a few smaller open-weights models dedicated to OCR tasks, for instance DeepSeek-OCR-2 and IBM granite-vision-4.1-4b. (They can be found on huggingface.co.)

The dedicated vision models can run on much cheaper hardware, including smartphones, than the big models that process images alongside text.

Similarly, besides the bigger multimodal models that accept audio, images, or text as input, there are smaller open-weights models dedicated to speech recognition, e.g. Xiaomi MiMo-V2.5-ASR and IBM granite-speech-4.1-2b.

PeterStuer · today at 5:38 AM

Depends on your use case. My production runs satisfactorily on a local docling-serve ( https://github.com/docling-project/docling-serve ), but that is mostly easy, relatively clean scans of decently typeset documents with some typical scanning artefacts.
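For the curious, a minimal sketch of talking to a local docling-serve instance from Python. The endpoint path, port, and JSON shape here are assumptions based on docling-serve's convert-from-source API; check the server's own API docs for the exact schema of your version.

```python
import base64
import json
import urllib.request


def build_convert_request(pdf_bytes: bytes, filename: str) -> dict:
    """JSON body asking docling-serve to convert one uploaded file to Markdown."""
    return {
        "sources": [{
            "kind": "file",  # assumed field names; verify against your server's schema
            "base64_string": base64.b64encode(pdf_bytes).decode("ascii"),
            "filename": filename,
        }],
        "options": {"to_formats": ["md"]},
    }


def convert(pdf_path: str, server: str = "http://localhost:5001") -> str:
    """POST a scanned PDF to docling-serve and return the Markdown result."""
    with open(pdf_path, "rb") as f:
        body = build_convert_request(f.read(), pdf_path)
    req = urllib.request.Request(
        f"{server}/v1alpha/convert/source",  # assumed path; see the server docs
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["document"]["md_content"]
```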

lrvick · yesterday at 10:50 AM

The qwen models not only do good OCR; they will also describe pictures to you.
