Isn't code that you fail to understand literally a sign that it's worse?
I should also add that I am not claiming to be a particularly great programmer. I have never worked at FAANG, and I haven't had much exposure to the kind of massive codebases those engineers deal with every day.
Most of the code I've worked with comes from Korean and Chinese startups, industrial contractors, or older corporate research-lab environments. So I know my frame of reference is limited.
When I write code, I usually rely on fairly conservative patterns: Result-style error handling instead of throwing exceptions through business logic, aggressive use of guard clauses, small policy/strategy objects, and adapters at I/O boundaries. I also prefer placing a normalization layer before analysis and building pure transformation pipelines wherever possible.
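To make that concrete, here is a minimal sketch of what I mean by Result-style error handling plus guard clauses (the names here are hypothetical, just for illustration, not from any real codebase of mine):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    reason: str

# A Result is either a successful value or a labeled failure;
# business logic inspects it instead of catching exceptions.
Result = Union[Ok[T], Err]

def parse_reading(raw: dict) -> "Result[float]":
    # Guard clauses: reject bad input early and return, never raise
    if "value" not in raw:
        return Err("missing 'value' field")
    try:
        value = float(raw["value"])
    except (TypeError, ValueError):
        return Err(f"non-numeric value: {raw['value']!r}")
    if value < 0:
        return Err("negative reading")
    return Ok(value)
```

The point isn't the specific types; it's that failure becomes an ordinary value the caller must handle, rather than a control-flow jump through the business logic.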
So when Codex produced a design that decoupled the messy input adapter from the stable normalized data, and then separated that from the analyzer, it wasn't just 'fancier code.' It aligned perfectly with the architectural direction I already value, but it pushed the boundaries of that design further than I would have initially done myself.
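Roughly, the shape of that design looks like this (a simplified sketch of the idea, assuming hypothetical names; the actual code Codex produced was more elaborate):

```python
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass(frozen=True)
class Record:
    """Stable, normalized shape that the analyzer depends on."""
    timestamp: int
    value: float

class SourceAdapter(Protocol):
    """I/O boundary: each messy input format gets its own adapter."""
    def read(self) -> Iterable[dict]: ...

class ListAdapter:
    """Trivial adapter over in-memory rows, standing in for a real source."""
    def __init__(self, rows: list[dict]):
        self.rows = rows
    def read(self) -> Iterable[dict]:
        return iter(self.rows)

def normalize(raw_rows: Iterable[dict]) -> list[Record]:
    # Normalization layer sits between the adapter and the analyzer,
    # so the analyzer never sees source-specific quirks.
    return [
        Record(timestamp=int(row.get("ts", 0)), value=float(row.get("val", 0.0)))
        for row in raw_rows
    ]

def analyze(records: list[Record]) -> float:
    # Pure transformation: depends only on the normalized shape.
    return sum(r.value for r in records) / len(records) if records else 0.0
```

The payoff is exactly the decoupling described above: a new input format means a new adapter, not a change to the normalizer or the analyzer.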
This is exactly why I hesitate to dismiss code as 'bad' just because I don't immediately understand it. Sometimes, it really is just bad code. But sometimes, the abstraction is simply a bit ahead of my current local mental model, and I only grasp its true value after a second or third requirement is introduced.
To be completely honest, using AI has caused a significant drop in my programming confidence. Since AI is ultimately trained on codebases written by top-tier programmers, its output essentially represents the average of those top developers, or perhaps something slightly below their absolute peak.
I often find myself realizing that the code I write by hand simply cannot beat it.
The AI was often much faster, and when I revisited its code later, there were cases where I realized it had moved the implementation toward a better abstraction.