On troubleshooting, either LLMs used to be better, or I'm on a huge bad-luck streak. Every one of the last few times I asked one, I got a perfectly believable and completely wrong answer that wasn't even on the right subject.
On code review, the number of false positives is absolutely overwhelming. And I see no reason for that to improve.
But yes, LLMs can probably help on those lines.
This can absolutely happen, and I've experienced it myself recently. That said, I'd say it's still better than some of the alternatives, and I've had probably 60-80% luck with it when properly prompted.
what models have you been using that are the least helpful?
I've found them super hit or miss for debugging. I've gone down several rabbit holes where the LLM wasted hours of my time on what turned out to be a simple fix. On the other hand, they're awesome for ripping through thousands of log lines and correlating them with something dumb happening in your codebase. My modus operandi with them for debugging is basically "distrust but consider": I'll let one rip in the background while I go debug myself. If it finds the solution, great; if not, well, I haven't spent much effort or time trying to convince it to find the problem.