Hacker News

david3289 · today at 1:08 PM · 4 replies

This is a really interesting direction. RTS games are a much better testbed for agent capability than most static benchmarks because they combine partial observability, long-term planning, resource management, and real-time adaptation.

It reminds me a bit of OpenAI Five — not just because it played a complex game, but because the real value wasn’t “AI plays Dota,” it was observing how coordination, strategy formation, and adaptation emerged under competitive pressure. A controlled RTS environment like this feels like a lightweight, reproducible version of that idea.

What I especially like here is that it lowers the barrier for experimentation. If researchers and hobbyists can plug different models into the same competitive sandbox, we might start seeing meaningful AI-vs-AI evaluations beyond static leaderboards. Competitive dynamics often expose weaknesses much faster than isolated benchmarks do.
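As a rough illustration of what such AI-vs-AI evaluation could look like in practice (this is a hypothetical sketch, not anything from the project; Elo is just one standard rating choice, and `play_match` stands in for running one game in the sandbox):

```python
import itertools

def elo_update(ra, rb, score_a, k=32):
    """Standard Elo update. score_a is 1.0 (A wins), 0.0 (B wins), or 0.5 (draw)."""
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
    return ra + k * (score_a - expected_a), rb + k * ((1 - score_a) - (1 - expected_a))

def round_robin(agents, play_match, rounds=10, initial=1200):
    """Rate a pool of agents by repeated head-to-head matches.

    `agents` is a list of names; `play_match(a, b)` is a stand-in for
    actually running a game between two plugged-in models and returning
    A's score. Swapping in different models just means swapping backends.
    """
    ratings = {name: initial for name in agents}
    for _ in range(rounds):
        for a, b in itertools.combinations(agents, 2):
            score_a = play_match(a, b)
            ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], score_a)
    return ratings
```

The point of a loop like this is exactly the one above: ratings emerge from head-to-head pressure rather than from a fixed test set, so a brittle agent gets found out quickly.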

Curious whether you’re planning to support self-play training loops or if the focus is primarily on inference-time agents?


Replies

degenerate · today at 2:29 PM

You would likely be interested in the StarCraft BWAPI: https://www.starcraftai.com

You can watch match videos from training runs: https://www.youtube.com/@Sscaitournament/videos

I don't think BWAPI has ever integrated modern AI models, but I haven't followed its progress in several years.

drakinosh · today at 9:32 PM

What a boringly bog-standard AI comment. Why bother writing?

dmos62 · today at 1:23 PM

> partial observability, long-term planning, resource management, and real-time adaptation

Note: this project doesn't have that, as best I can tell. It's two static AI scripts having a go. LLMs generate the scripts, and they're aware of past "results", but I'm not sure what that means.
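For what it's worth, the loop being described would presumably look something like the sketch below. Everything here is a guess at the shape of the idea, not the project's actual API; `llm_call` and `run_match` are hypothetical stand-ins:

```python
# Hypothetical sketch of an "LLM writes the bot, then sees the results" loop.
# The scripts themselves are static during play; learning only happens
# between matches, when the LLM regenerates the script from past outcomes.

def evolve_script(llm_call, run_match, generations=5):
    history = []  # past (script, result) pairs: the "results" fed back in
    script = llm_call("Write an RTS bot script.", history)
    for _ in range(generations):
        result = run_match(script)        # static script plays one full game
        history.append((script, result))  # record the outcome
        script = llm_call("Improve the bot given these results.", history)
    return script
```

Under this reading, there is indeed no partial observability or real-time adaptation inside a match; the adaptation, such as it is, happens only across generations of generated code.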

__cayenne__ · today at 3:43 PM

Very interested in self-play training loops, but I do like codegen as an abstraction layer. I'm planning to make it available as an RL environment at some point.