If you're trusting core contributors without AI, I don't see why you wouldn't trust them with it.
Hiring a few core devs to work on it should be a rounding error for Anthropic, and a huge flex if they can actually deliver.
It's extremely tempting to write stuff and not bother to understand it, much the way most of us don't disassemble our binaries and read the assembly when we write C/C++.
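To make the analogy concrete, here's a minimal sketch of what "reading the assembly" would actually mean for a C programmer. The file and function names are made up for illustration; `gcc -S` and `objdump -d` are the standard ways to see what the compiler produced, and almost nobody runs them routinely.

    /* add.c -- a trivial function whose generated code almost nobody inspects.
     *
     * To see the assembly the compiler emits:
     *   gcc -O2 -S add.c        # writes human-readable assembly to add.s
     * Or disassemble the compiled object:
     *   gcc -O2 -c add.c && objdump -d add.o
     */
    int add(int a, int b) {
        return a + b;
    }

The point of the analogy: the tooling to verify the output exists, but in practice we trust the compiler and skip the check.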
So, should I trust an LLM as much as a C compiler?
What if relying on it impairs your judgement?
I trust people to understand the code they write. I don't trust them to understand code they didn't write.