I speak Russian and some English, but the question was about universal quantification: the author claims that LLMs generate better code than "any code" he has seen in his career.
LLMs got their training data from somewhere. But maybe they're good at percolating the good code to the top and filtering out the bad.