
gburgett · today at 12:41 AM

Loved this writeup. I have built an agent for a specific niche use case for my clients (not a coding agent), but the principles are similar. I've only implemented 1-4 so far. I'm going to work on long-term memory next, but I worry about prompt-injection issues when allowing the LLM to write its own notes.

Since my agent works over email, the core agent loop only processes one message, then hits the send_reply tool to craft a response. The next incoming email starts the loop again from scratch, injecting only the actual replies sent between user and agent. This naturally prunes the context and avoids the long-context-window problem.
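That per-email loop can be sketched roughly like this (all names here, including `Thread`, `handle_email`, and the stub LLM, are hypothetical illustrations, not the actual implementation):

```python
# Sketch of a per-email agent loop: each incoming email rebuilds context
# from scratch, carrying over only the actual user<->agent replies.
from dataclasses import dataclass, field

@dataclass
class Thread:
    # Only sent/received emails survive between loops; tool calls and
    # intermediate steps are discarded each time.
    replies: list = field(default_factory=list)

def fake_llm(messages):
    # Stand-in for a real LLM call; here it always decides to reply.
    return {"tool": "send_reply", "body": f"ack: {messages[-1]['content']}"}

def handle_email(thread, incoming, max_steps=5):
    # Rebuild context: prior replies plus the new email only.
    messages = [{"role": r, "content": c} for r, c in thread.replies]
    messages.append({"role": "user", "content": incoming})
    for _ in range(max_steps):
        action = fake_llm(messages)
        if action["tool"] == "send_reply":
            # Persist only the incoming email and the sent reply.
            thread.replies.append(("user", incoming))
            thread.replies.append(("assistant", action["body"]))
            return action["body"]
        # (other tool results would be appended to `messages` here)
    raise RuntimeError("agent did not reply within max_steps")

t = Thread()
handle_email(t, "hello")
handle_email(t, "second email")
# After two emails, the carried-over context holds just 4 messages.
```

The key design point is that the loop's working context is ephemeral: only the reply pairs are persisted, so context size grows with the conversation, not with the agent's internal tool activity.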

I also found it challenging to decide what context to inject into the initial prompt vs. what to put behind tools. It's a tradeoff between context bloat and the cost of tool lookups, which can get expensive when paying per token. There's also caching to consider here.

Full writeup is here if anyone is interested: https://www.healthsharetech.com/blog/building-alice-an-empow...