I wouldn't agree that LLMs are a higher level of abstraction, but I've found they do help me think at a higher level of abstraction, by temporarily outsourcing cognitive load.
With changes like substantial refactors or ambitious feature additions, it's easy to exceed the infamous "seven things I can remember at once":
* the idea for the big change itself
* my reason for making the change
* the relevant components and how they currently work
* the new way they'll fit together after the change
* the messy intermediate state when I'm half finished but still need a working system to get feedback
* edge cases I'm ignoring for now but will have to tackle eventually
* the actual code changes
* how I'm going to test this
Good lab notes, specs, etc. can help, but it's a lot to keep in mind. In practice these often turn into multi-person projects, and since communication is hard, that often means delay or drift. Having an agent temporarily worry about

* wiring a new parameter through several layers
* writing a test harness for an untested component
* experimentally adding multibyte character support on a branch
frees up my mental bandwidth for the harder parts of the problem.

The main benefit is deferring the concern until I have a mostly working system. Then I come back and review the agent's output, since I'm still responsible for what it delivers, and I want better than "mostly working".
This approach has been very successful for me. My flavour of ADHD has historically made it hard to start new projects: I get stuck on all the little details from the start while also trying to think about the high-level aspects.
Being able to spend my energy on the architectural decisions and validate my understanding before spending time optimising the internals has actually allowed me to follow through on some of my designs.
Experimentation is also faster. If the data model isn't good enough, I can experiment with it immediately, before we accidentally ship something to production and have to deal with a very annoying data migration. The exact code doesn't matter at first when we just want to make sure the data is efficient to decode and cache friendly.
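As a minimal sketch of the kind of data-model experiment I mean (the record fields here are hypothetical, just for illustration): fixed-width records packed into one flat buffer decode with a single `struct.unpack` each and sit contiguously in memory, which is what makes the layout cheap to decode and cache friendly.

```python
import struct

# Hypothetical record: id (u32), value (f64), flags (u16),
# little-endian with no padding, so each record is exactly 14 bytes.
RECORD = struct.Struct("<IdH")

def encode(records):
    # One contiguous buffer: records are adjacent in memory.
    return b"".join(RECORD.pack(*r) for r in records)

def decode(buf):
    # Fixed stride means decoding is a single unpack per record,
    # no per-field parsing or pointer chasing.
    return [RECORD.unpack_from(buf, off)
            for off in range(0, len(buf), RECORD.size)]
```

Swapping in a different layout is then a matter of changing the format string and re-running whatever benchmark or round-trip check you care about, rather than migrating shipped data.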
I recently built a project I'd had in mind for three years but could never work on because all the individual components were overwhelming. It involved e2e encryption, consensus, p2p networking, CRDTs, and API design. It was very nice to see it come together. The project ended up failing because an underlying invariant didn't hold, so it was nice to validate that and finally get it out of my head.