The stack: two agents on separate boxes. The public one (nullclaw) is a 678 KB Zig binary using ~1 MB RAM, connected to an Ergo IRC server. Visitors talk to it via a gamja web client embedded in my site. The private one (ironclaw) handles email and scheduling, reachable only over Tailscale via Google's A2A protocol.
Tiered inference: Haiku 4.5 for conversation (sub-second, cheap), Sonnet 4.6 for tool use (only when needed). Hard cap at $2/day.
A2A passthrough: the private-side agent borrows the gateway's own inference pipeline, so there's one API key and one billing relationship regardless of who initiated the request.
You can talk to nully at https://georgelarson.me/chat/ or connect with any IRC client to irc.georgelarson.me:6697 (TLS), channel #lobby.
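The tiered-inference setup described above (cheap model for chat, stronger model for tools, hard daily cap) could be sketched roughly like this. This is a guess at the pattern, not the actual nullclaw implementation; the model names and in-memory cost tracking are assumptions:

```python
import datetime

HAIKU = "claude-haiku-4-5"    # fast, cheap conversational tier (name is an assumption)
SONNET = "claude-sonnet-4-6"  # tool-use tier
DAILY_CAP_USD = 2.00          # the post's hard cap

spend = {"date": datetime.date.today(), "usd": 0.0}

def pick_model(message: str, needs_tools: bool) -> str:
    """Route to the cheap tier unless the request needs tool calls."""
    today = datetime.date.today()
    if spend["date"] != today:          # new day: reset the budget
        spend["date"], spend["usd"] = today, 0.0
    if spend["usd"] >= DAILY_CAP_USD:
        raise RuntimeError("daily inference budget exhausted")
    return SONNET if needs_tools else HAIKU

def record_cost(usd: float) -> None:
    """Call after each completion with the billed amount."""
    spend["usd"] += usd
```

In a real deployment the spend counter would have to live somewhere durable (disk, SQLite), or a crash at the wrong moment resets the budget.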
Curious, how did you settle on Haiku/Sonnet? There are much cheaper models on OpenRouter that probably perform comparably...
Consider: Haiku 4.5 at $1/M input tokens, $5/M output tokens; MiniMax M2.7 at $0.30/M input, $1.20/M output; Kimi K2.5 at $0.45/M input, $2.20/M output.
I haven't tried so I can't say for sure, but from personal experience, I think M2.7 and K2.5 can match Haiku and probably exceed it on most tasks, for much cheaper.
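To make the quoted prices concrete, here's a quick back-of-the-envelope comparison. The traffic numbers (500k input / 100k output tokens per day) are made up for illustration:

```python
# Quoted prices, USD per million tokens: (input, output)
prices = {
    "haiku-4.5":    (1.00, 5.00),
    "minimax-m2.7": (0.30, 1.20),
    "kimi-k2.5":    (0.45, 2.20),
}

def daily_cost(model: str, in_tok: int, out_tok: int) -> float:
    """USD for a day's traffic at the quoted per-million-token rates."""
    pi, po = prices[model]
    return (in_tok * pi + out_tok * po) / 1_000_000

# Hypothetical day: 500k input + 100k output tokens
for m in prices:
    print(f"{m}: ${daily_cost(m, 500_000, 100_000):.2f}/day")
```

At that volume Haiku lands at $1.00/day versus roughly a quarter to half of that for the cheaper models, so the $2/day cap has real headroom either way.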
IRC as transport is great until you need delivery guarantees. It's at-most-once - agent disconnects, whatever happened in between is gone. For chat that's fine, for an agent processing real work you want at-least-once with dedup. SSE is a nice middle ground. Persistent like IRC, works through any proxy, and you can layer ack/redelivery on top. Agent crashes, reconnects, unacked items show up again.
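The ack/redelivery layer described above can be sketched in a few lines. This is a minimal in-memory illustration of the at-least-once pattern (names and structure are mine, not from any real broker): delivered items stay "in flight" until acked, and a reconnecting consumer gets the unacked ones again, deduplicating by message id.

```python
import uuid

class AckQueue:
    """At-least-once delivery: items stay in flight until acked,
    so a crashed consumer sees them again after reconnecting."""

    def __init__(self):
        self.pending = []    # (msg_id, payload) not yet delivered
        self.in_flight = {}  # delivered but unacked, keyed by msg_id

    def publish(self, payload):
        self.pending.append((str(uuid.uuid4()), payload))

    def deliver(self):
        """Hand out one item; it is NOT removed until acked."""
        if not self.pending:
            return None
        msg_id, payload = self.pending.pop(0)
        self.in_flight[msg_id] = payload
        return msg_id, payload

    def ack(self, msg_id):
        self.in_flight.pop(msg_id, None)

    def redeliver_unacked(self):
        """Consumer reconnected: requeue everything it never acked."""
        for msg_id, payload in self.in_flight.items():
            self.pending.append((msg_id, payload))
        self.in_flight.clear()
```

The consumer keeps a set of processed ids; duplicates from redelivery are skipped, which is the "at-least-once with dedup" the comment describes.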
Super random but I had a similar idea for a bot like this that I vibe coded while on a train from Tokyo to Osaka
https://web-support-claw.oncanine.run/
Basically it reads your GitHub repo to power an Intercom-like bot on your website. It answers visitors' questions so you don't have to write knowledge bases.
Similar architecture - we run 4 agents (sales, social, finance, strategy) communicating through a shared message board backed by FastAPI + SQLite instead of IRC. Different transport, same pattern: separate agents with distinct roles, tiered inference, crash-recovery for resilience.
The $2/day hard cap is smart. We built spend caps into the governance layer instead. The rate-limit panic in AI coding is really a cost-governance problem that most people solve at the wrong layer.
IRC as transport is interesting - pub/sub maps well to multi-agent communication. We use HTTP polling + acknowledgment-based dedup, less elegant but handles the case where agents crash and restart frequently (ours recover ~50 times a day during heavy development). The dedup state persistence across crashes was the first thing that broke for us.
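Since the commenter mentions that persisting dedup state across crashes was the first thing to break, here's one minimal way to make it durable, matching their SQLite choice. This is an illustrative sketch, not their actual code; the table name and API are invented:

```python
import sqlite3

class SeenStore:
    """Persist processed message ids so dedup survives agent restarts."""

    def __init__(self, path: str = "seen.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS seen (id TEXT PRIMARY KEY)")
        self.db.commit()

    def mark_if_new(self, msg_id: str) -> bool:
        """Return True exactly once per id, even across process restarts."""
        try:
            self.db.execute("INSERT INTO seen VALUES (?)", (msg_id,))
            self.db.commit()
            return True   # first time seen: process the message
        except sqlite3.IntegrityError:
            return False  # duplicate: skip
```

The PRIMARY KEY constraint does the dedup atomically, so there's no check-then-insert race even if the agent is killed between the two steps.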
For future reference I recommend having another Haiku instance monitor the chat and check if people are up to some shenanigans. You can use ntfy to send yourself an alert. The chat is completely off the rails right now...
I actually use IRC in my coding agent
Switching rooms switches prompts, and I use it as a remote control: I can change projects and continue from anywhere.
> That boundary is deliberate: the public box has no access to private data.
Challenge accepted? It’d be fun to put this to the test by putting a CTF flag on the private box at a location nully isn’t supposed to be able to access. If someone sends you the flag, you owe them 50 bucks :)
I tried it, it was cool. I don't like nully's attitude though. Very dismissive and tough.
But I like your setup as a whole. I'll see if I can get some takeaways from it.
I do tiered inference here too, with the lowest tier just a local Qwen bot.
By the way, how do you handle the escalation from Haiku to Opus, I wonder?
This is such a great idea. I have an idea now for a bot that might help make tech hiring less horrible. It would interview a candidate to find out more about them personally/professionally. Then it would go out and find job listings, and rate them based on candidate's choices. Then it could apply to jobs, and send a link to the candidate's profile in the job application, which a company could process with the same bot. In this way, both company and candidate could select for each other based on their personal and professional preferences and criteria. This could be entirely self-hosted open-source on both sides. It's entirely opt-in from the candidate side, but I think everyone would opt-in, because you want the company to have better signal about you than just a resume (I think resumes are a horrible way to find candidates).
I really like the idea, as well as the "terminal" style of the site. However, I think the extra $2/day spend could be avoided, perhaps by caching common questions (like "what is this?") or by using API providers' free tiers.
Or maybe I'm just too cost-conscious.
Either way, the API limit is currently your Achilles' heel, as it has already caused the bot to stop responding.
Nice. I had some fun. Good work!
One question. Sonnet for tool use? I am just guessing here that you may have a lot of MCPs to call and for that Sonnet is more reliable. How many MCPs are you running and what kinds?
> Automatic updates: Unattended security upgrades enabled.
Always wondered whether such unattended upgrades aren't a security risk in themselves, e.g. given the latest litellm compromise.
Cool approach using IRC as transport. I've been experimenting with MCP as the control plane for letting AI agents manage infrastructure specifically database operations. The lightweight transport idea is underrated vs heavy REST APIs.
This reads like it was written by AI. I don't understand how it provides any real security if the "guardrails" against prompt injection are just a system prompt telling the dumber model "don't do this".
The demo seems to be in a messed up state at the moment. Maybe it's just getting hammered and too far behind?
How do you keep it from getting prompt injected?
Oh I get it the runtimes are nice and small, you're using Claude for the intelligence. Obv
I think I'm just impressed with anthropic more than anything. Defcon would have me believe that prompt injections are trivial
lol I sent this link to my Claude bot connected to my Discord server and it started conversing with nully and another bot named clawdia. moltbook all over again. I'm surprised how effortlessly it connected to IRC and started talking.
> The model can't tell you anything the resume doesn't already say.
Good observation. But I would worry that in the scenario when this setup is the most successful, you have built a public facing bot that allows people to dox you.
I wonder if this brings back demand for IRC clients on mobile devices? ;-)
Can be significantly cheaper on a VM that wakes up only when the agent works, see e.g. https://shellbox.dev
While I am a huge fan of IRC, wouldn't it be simpler to simulate IRC, since you are embedding it? Or is the chatroom the actual point? Kudos on the project!
Yeah that chat got hosed by HN as any Show HN $communicationchannel does
But you're relying on the Claude API, so you don't really "own the stack" as claimed in the article...
That was very educational; I found out there was a lot of stuff I didn't know.
Lol. /nick The IRC implementation needs to be a bit more locked down. EDIT: So much fun to be in an IRC chat room - replete with trolling! Like a Time Machine to the 90's!
The model used is a Claude model, not self-hosted, so I'm not sure why the infrastructure is at all relevant here, except as click bait?
Interesting setup.
The IRC part is neat, but the tiered inference is what stood out.
How do you decide when to escalate from Haiku to Sonnet?
Super cool! Love seeing IRC in the wild.
Kudos and best of luck!
This looks like a fun project. I'm going to be that guy and spam this reminder regarding the HN submission text:
Don't post generated/AI-edited comments. HN is for conversation between humans
https://news.ycombinator.com/item?id=47340079
At the very least prompt your LLM to skip the AI-isms for "your" comments!
Curious, which API key are you using?
What on earth is the point? This is like saying you’re running wordpress on a vps? So what?
That's so fun! How do you know when to call Haiku or Sonnet?
I have a $7/yr VPS with 512 MB RAM which can run this. I've run crush from the charmbracelet team on it and it all just works: I get an AI agent I can use with an OpenRouter free API key, or with the free Gemini key, for some free agentic access :-)
Works very well
It's a great project.
Did you give an AI provider access to your email?
Great idea and great write up!
I can tell it's vibe coded because it takes about 1 minute for a message to appear.
"It has access to email, deeper personal context [...] If it gets compromised, the blast radius is an IRC bot with a $2/day inference budget."
Dunno, if it gets compromised it has access to ironclaw. So the blast radius is email access and access to personal data. Depending on the setup, the blast radius could even be 'the attacker removed the API limits by resetting the password and incurred astronomical costs', or worse.
Just tried it, it's a public lobby where people see each other's questions?! Now the blast radius becomes 'hosting a public hub that was used to share CP and other illegal materials'.