Hacker News

akiselev | today at 3:46 PM

Shameless plug: https://github.com/akiselev/ghidra-cli

I’ve been using Ghidra to reverse engineer Altium’s file format (at least the Delphi parts), and it’s insane how effective it is. Models are not quite good enough to write an entire parser from scratch, but before LLMs I would never even have attempted the reverse engineering.

I definitely would not depend on it for security audits but the latest models are more than good enough to reverse engineer file formats.
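
For a flavor of the workflow: the raw material I hand to a model is basically a dump of Ghidra's decompiler output. A minimal sketch of that kind of dump script (standard Ghidra scripting API; the script name and output path are just placeholders):

    # DumpDecompiled.py - run as a Ghidra script (Script Manager or analyzeHeadless -postScript).
    # Decompiles every function and appends the pseudocode to one text file an LLM can read.
    from ghidra.app.decompiler import DecompInterface
    from ghidra.util.task import ConsoleTaskMonitor

    decomp = DecompInterface()
    decomp.openProgram(currentProgram)   # currentProgram is provided by the script environment
    monitor = ConsoleTaskMonitor()

    with open("/tmp/decompiled.txt", "w") as out:   # placeholder output path
        for func in currentProgram.getFunctionManager().getFunctions(True):
            result = decomp.decompileFunction(func, 60, monitor)   # 60s timeout per function
            if result.decompileCompleted():
                out.write("// %s @ %s\n" % (func.getName(), func.getEntryPoint()))
                out.write(result.getDecompiledFunction().getC())
                out.write("\n\n")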


Replies

bitexploder | today at 4:41 PM

I can tell you how I am seeing agents be used with reasonable results. I will keep this high level. I don't rely on the agents solely. You build agents that augment your capabilities.

They can make diagrams for you, give you an attack surface mapping, and dig for you while you do more manual work. As you work on an audit you will often find things of interest in a binary or code base that you want to investigate further. LLMs can often blast through a code base or binary finding similar things.

I like to think of it as a Swiss Army knife of agentic tools to deploy as you work through a problem. They won't balk at some insanely boring task, and that can give you a real speed up. The trick is not to fall into the trap of trying to get too much out of an LLM: you end up pouring time into your LLM setup and not getting good results. I think that is the LLM productivity trap. But if you have a reasonable set of "skills" / "agents" you can deploy for various auditing tasks, it can absolutely speed you up some.

Also, when you have scale problems, just throw an LLM at it. Even low-quality results are a good sniff test. Sometimes I just throw an LLM at a code review for a codebase I came across and let it work. I also love asking it to make me architecture diagrams.
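
To make the "sniff test at scale" point concrete, the loop can be as dumb as this. A rough sketch only: it assumes you have already dumped per-function pseudocode to files, and the OpenAI client, model name, and prompt are just placeholders for whatever you actually use:

    # Batch triage: ask a model for a one-line note per decompiled function.
    # Sketch only; model name, prompt, and paths are placeholders.
    import glob
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    for path in glob.glob("decompiled/*.c"):
        with open(path) as f:
            code = f.read()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system", "content": "You triage decompiled C for memory-safety issues."},
                # crude truncation to keep the prompt within context
                {"role": "user", "content": "One line: anything worth a manual look?\n\n" + code[:20000]},
            ],
        )
        print(path, "->", resp.choices[0].message.content.strip())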

jakozaur | today at 4:11 PM

Oh, nice find... We ended up using PyGhidra, but the models waste some cycles because of its bad ergonomics. Perhaps your CLI would be easier.

Still, Ghidra's most painful limitation was its extremely slow analysis of Go binaries. We had to exclude that example from the benchmark.
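
For reference, the PyGhidra side is only a few lines. A minimal sketch (pip install pyghidra, with GHIDRA_INSTALL_DIR pointing at a Ghidra install; the binary path is a placeholder):

    # Open a binary headlessly, run auto-analysis, and list its functions.
    import pyghidra

    pyghidra.start()  # launches the JVM and a headless Ghidra

    with pyghidra.open_program("/path/to/binary", analyze=True) as flat_api:
        program = flat_api.getCurrentProgram()
        for func in program.getFunctionManager().getFunctions(True):
            print(func.getEntryPoint(), func.getName())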

selridge | today at 5:53 PM

This is really cool! Thanks for sharing. It's a lot more sophisticated than what I did w/ Ghidra + LLMs.

stared | today at 6:57 PM

Thanks for sharing! It seems to be an active space; see, e.g., a recent MCP server (https://news.ycombinator.com/item?id=46882389). If you haven't already, I strongly recommend posting it as a Show HN.

I tried a few approaches: https://github.com/jtang613/GhidrAssistMCP (the hardest to set up), Ghidra analyzeHeadless (GPT-5.2-Codex worked with it well!), and PyGhidra (my go-to). Did you compare which works best?

I mean, very likely your approach (especially with an explicit README for AI, https://github.com/akiselev/ghidra-cli/blob/master/.claude/s...) is more convenient to use with AI agents.
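
For anyone curious about the analyzeHeadless route above, a rough sketch of driving it from Python; the install path, project directory, project name, and post-script are all placeholders:

    # Run Ghidra's headless analyzer plus a post-script over a binary.
    import subprocess

    subprocess.run(
        [
            "/opt/ghidra/support/analyzeHeadless",  # path to your Ghidra install
            "/tmp/ghidra_projects",                 # project directory
            "scratch",                              # project name
            "-import", "/path/to/binary",
            "-postScript", "DumpDecompiled.py",     # placeholder dump script
            "-scriptPath", "/path/to/scripts",
            "-deleteProject",                       # throw away the project afterwards
        ],
        check=True,
    )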

lima | today at 4:12 PM

How does this approach compare to the various Ghidra MCP servers?
