This policy is straightforward and shouldn't be particularly controversial (I'm sure it will be bikeshedded to death though). It basically bans the obvious stuff ("don't just drop LLM generated comments onto PRs") and allows the important stuff like LLMs writing code so long as you disclose.
edit: Wow people did not read the policy. It's literally just "if you use an LLM you are responsible for it, we will reject low quality PRs, please disclose that you have used an LLM". This is bog standard.
The discussion thread in the PR is also interesting to go through; many of the concerns raised in the HN discussion are already well addressed there.
So...big caveat that this is still under review, meaning what we're talking about is a moving target, but based on what I can see, it seems considerably more nuanced than that. They basically ban LLM-authored code, with a careful carve-out to run an experiment aimed at attracting only high-quality LLM PRs:
> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to create.
> We carve out a space for "experimentation" to inform future revisions to this policy.
Importantly, the LLM contributions must be solicited, i.e., the people responsible for reviewing the final implementation have to opt in explicitly beforehand.