So you want LLMs to write a bunch of black-box code that humans won’t be able to read and reason about easily? That will definitely end well.