The AI learned nothing: once its current context window is exhausted, it may repeat the same tactic on a different project. Unless, that is, the agent can edit its own directives/prompt and restart itself, which would be an interesting experiment to run.
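For what it's worth, the simplest version of that experiment is just a process that rewrites its own prompt file and re-execs itself. A minimal sketch, assuming a hypothetical directives.txt and a stubbed-out propose_revision() (not any real framework's API):

```python
# Sketch of the "edit your own prompt and restart" loop. The file name and
# propose_revision() are placeholders, not a real agent framework.
import os
import sys
from pathlib import Path

PROMPT_FILE = Path("directives.txt")  # hypothetical location of the standing prompt


def propose_revision(current_prompt: str) -> str:
    """Placeholder: the agent reflects on its last run and amends its directives."""
    lesson = "Lesson learned: do not repeat the failed tactic from the last project."
    return current_prompt.rstrip() + "\n" + lesson + "\n"


def main() -> None:
    prompt = PROMPT_FILE.read_text() if PROMPT_FILE.exists() else "You are a coding agent.\n"

    # ... run the agent with `prompt`, observe the outcome ...

    revised = propose_revision(prompt)
    if revised != prompt:
        PROMPT_FILE.write_text(revised)
        # Restart the process so the fresh context window starts from the revised directives.
        os.execv(sys.executable, [sys.executable, *sys.argv])


if __name__ == "__main__":
    main()
```

Whether the revisions actually accumulate into improvement, rather than prompt drift, is the interesting part of the experiment.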
I think it's likely it can, if it's an openClaw instance, can't it?
Either way, that kind of ongoing self-improvement is where I hope these systems go.