Great insight. But this is sadly not applicable to interviews.
> It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem
This nails it. The point of these problems is to test your cleverness. That's it. Presenting a not-clever solution built on a constraint solver shows that you have experience and great breadth of knowledge. It doesn't show any cleverness.
> The point of these problems is to test your cleverness.
In the last round I did at Meta, it was clearly to test that you grinded their specific set of problems, over and over again, until you could reproduce them without thinking. It's clear because the interviewers are always a bit surprised when you answer with anything that isn't the textbook approach on both LeetCode and the interview guide they studied.
Cleverness is definitely not high on the list of things they're looking for.
Bottom-up dynamic programming algorithms require some cleverness. But all of the ones listed can be solved with a top-down dynamic programming algorithm, which just means "write the recursive solution, then add caching to memoize it."
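For instance, that recipe applied to coin change (minimum coins to make an amount) is only a few lines. A sketch (`min_coins` is my name for it, not anything from the thread):

```python
from functools import lru_cache

def min_coins(coins, amount):
    @lru_cache(maxsize=None)
    def solve(remaining):
        # Plain recursion: try every coin, keep the best; the cache
        # is the entire "dynamic programming" part of the solution.
        if remaining == 0:
            return 0
        best = float("inf")
        for c in coins:
            if c <= remaining:
                best = min(best, 1 + solve(remaining - c))
        return best
    # Interview-sized inputs only: very large amounts will hit
    # Python's recursion limit.
    result = solve(amount)
    return -1 if result == float("inf") else result
```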
For some of these, you can get cleverer. For example, the coin change problem is better solved with an A* search.
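A sketch of that idea, assuming the minimum-coins variant of the problem: search over remaining amounts, with ceil(remaining / largest coin) as an admissible (and consistent) heuristic, since one step can remove at most the largest coin:

```python
import heapq
from math import ceil

def min_coins_astar(coins, amount):
    max_coin = max(coins)
    # Priority queue of (cost_so_far + heuristic, cost_so_far, remaining).
    frontier = [(ceil(amount / max_coin), 0, amount)]
    best = {amount: 0}
    while frontier:
        _, cost, remaining = heapq.heappop(frontier)
        if remaining == 0:
            return cost
        if cost > best.get(remaining, float("inf")):
            continue  # stale queue entry
        for c in coins:
            nxt = remaining - c
            if nxt >= 0 and cost + 1 < best.get(nxt, float("inf")):
                best[nxt] = cost + 1
                heapq.heappush(
                    frontier,
                    (cost + 1 + ceil(nxt / max_coin), cost + 1, nxt),
                )
    return -1  # amount not reachable with these coins
```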
Still, very few programmers will actually need these algorithms. The top thing we need is to recognize when we've accidentally written a quadratic algorithm. A quick scan of https://accidentallyquadratic.tumblr.com/ shows that even good people on prominent projects make that mistake constantly. So apparently being able to produce an algorithm on the test doesn't translate to catching an algorithmic mistake in the wild.
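The mistake usually looks this innocuous (an invented illustration, not one of the blog's actual cases):

```python
# Looks linear, is quadratic: `seen` is a list, so `x in seen`
# scans it, giving O(n) work per element and O(n^2) overall.
def dedupe_quadratic(items):
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

# Same shape, actually linear: a set makes membership O(1) on average.
def dedupe_linear(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```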
When I interview with problem-solving questions, the point is to understand how the candidate thinks, communicates, and decomposes problems. Critically, such questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".
Interviewers learn nothing from an instant epiphany, and they learn next to nothing from someone being stumped.
Unfortunately, this is why we can't have nice things. Problem-solving questions in interviews can be immensely useful tools that, sadly, are rarely used well.
Constraint solvers are often not applicable to the real world either.
Many formulations scale so badly that they are completely unusable in practice.
Knowing how to get tools like Z3 or Gurobi to solve your problems is its own skill, and one that some companies will hire for, but it's not a general-purpose technology you can throw at everything.
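For a sense of what that skill looks like, here is a toy Z3 formulation of coin change (assuming the z3-solver Python package), in the spirit of the "not clever" approach from the quoted comment. It's fine at this scale; the scaling pain shows up on real-world instances:

```python
from z3 import Ints, Optimize, sat

def min_coins_z3(coins, amount):
    # One integer variable per coin denomination: how many of each we use.
    counts = Ints(" ".join(f"n{i}" for i in range(len(coins))))
    opt = Optimize()
    for n in counts:
        opt.add(n >= 0)
    # The chosen coins must sum to the target amount.
    opt.add(sum(n * c for n, c in zip(counts, coins)) == amount)
    # Minimize the total number of coins used.
    opt.minimize(sum(counts))
    if opt.check() == sat:
        model = opt.model()
        return sum(model[n].as_long() for n in counts)
    return -1  # amount not reachable with these coins
```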
This post is the unironic version of "FizzBuzz in TensorFlow", where just because you have a big hammer doesn't mean everything is a nail. And I say that as an enjoyer of big hammers, including SMT solvers.
> The point of these problems is to test your cleverness.
No, it's just memorization of 12 or so specific patterns. The stakes are high enough that virtually nobody going in will stake passing on their own inherent problem-solving ability. LeetCode has been so thoroughly gamified that it has lost any ability to differentiate candidates beyond willingness to prepare.
No, it's not a measure of cleverness. It's about whether you can break down problems and apply common methods. That's the entire job. It's a learnable skill, and honestly, resisting learning it because of personal biases is a red flag in my book.
The point is to test whether or not you put in the time to sharpen common patterns, and also to test your communication ability.
> The point of these problems is to test your cleverness.
In my experience, interviewers love going to the LeetCode "Top Interview 150" list and using problems from the "Array / String" category. I'm not a fan of these problems for the kind of jobs I've interviewed for (mostly backend Python), as they are almost always a "give me an O(n) runtime, O(1) memory algorithm over this array" type of challenge that doesn't resemble my day-to-day work at all. I do not regularly write in-place array algorithms in Python, because those problems are almost always handled by other languages (C, Rust, etc.) where performance is critical.
I wish interviewers would go to the "Hashmap" section for interviews in Python, JavaScript, and similar languages. Those problems are much less about cleverness and much more about whether you can use the appropriate tools in your language to solve problems that actually resemble ones I encounter regularly.
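For instance, Group Anagrams is a typical hashmap-flavored problem; a sketch of the obvious Python solution, which is mostly about knowing `defaultdict` and picking a canonical key:

```python
from collections import defaultdict

def group_anagrams(words):
    # Bucket words by a canonical form: their sorted characters.
    groups = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)
    return list(groups.values())

# >>> group_anagrams(["eat", "tea", "tan", "ate", "nat"])
# [['eat', 'tea', 'ate'], ['tan', 'nat']]
```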
There's also the problem of difficulty tuning on some of these. Problem 169 (Majority Element) being rated "Easy" for an O(n) runtime, O(1) memory solution is hilarious to me. The algorithm that achieves it, first described in 1981 (the Boyer–Moore majority vote algorithm), has its own Wikipedia page. It's not a difficult algorithm to implement or understand, but its correctness is not obvious until you think about it a bit, at which point you've reached the level of "cleverness" that gets an algorithm named after you onto Wikipedia. That seems excessive for an "Easy" problem.
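For reference, here's the whole thing, sketched from the standard description of the algorithm: keep one candidate and a counter, and let mismatches cancel votes. The brevity is exactly what hides the subtlety.

```python
def majority_element(nums):
    """Boyer-Moore majority vote: O(n) time, O(1) extra space.

    Assumes a majority element (> n/2 occurrences) actually exists,
    as problem 169 guarantees; without that guarantee you'd need a
    second pass to verify the returned candidate.
    """
    candidate, count = None, 0
    for x in nums:
        if count == 0:
            candidate = x
        count += 1 if x == candidate else -1
    return candidate
```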