I don't think there's intention. And yes, its output is defined by its limits. But it's not just the context, is it? Its coding style is, above all else, the result of an algorithm and its inputs: the training data, the reinforcement, the model design, the tuning, the prompt, the context. Change any one of those things and the code changes. It's a system, like an ecosystem. Let water flow and it finds its own path; try to dam it and you create unintended consequences. I think what we're going to find is that some of our rules apply more to a human world than to an LLM world.