The trouble with formal specification, from someone who used to do it, is that only for some problems is the specification simpler than the code.
Some problems are straightforward to specify. A file system is a good example. The details of blocks, allocation, and I/O optimization are hidden behind the API. The formal spec for a file system can be written in terms of huge arrays of bytes; the file system is then an implementation that stores those arrays on external devices. We can say concisely what "correct operation" means for a file system.
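As a rough sketch (names and the size bound here are mine, purely illustrative, not taken from any particular verified FS), that spec model can be little more than:

```c
/* Toy spec model: a file is just an array of bytes. An implementation
 * over block devices is "correct" if every operation behaves as if it
 * acted on this model. Names and the size bound are illustrative. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_FILE_SIZE 4096

typedef struct {
    unsigned char data[MAX_FILE_SIZE]; /* the abstract byte array */
    size_t size;                       /* current length of the file */
} abstract_file;

/* Spec-level write: overwrite bytes at [off, off+len), extending the file. */
static void spec_write(abstract_file *f, size_t off,
                       const unsigned char *buf, size_t len) {
    assert(off <= MAX_FILE_SIZE && len <= MAX_FILE_SIZE - off);
    memcpy(f->data + off, buf, len);
    if (off + len > f->size)
        f->size = off + len;
}

/* Spec-level read: read-after-write returns exactly the bytes last written. */
static size_t spec_read(const abstract_file *f, size_t off,
                        unsigned char *buf, size_t len) {
    if (off >= f->size)
        return 0;
    size_t n = (f->size - off < len) ? f->size - off : len;
    memcpy(buf, f->data + off, n);
    return n;
}
```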
This gets harder as the external interface exposes more functionality. Now you have to somehow write down what all that does. If the interface is too big, a formal spec will not help.
Now, sometimes you just want a negative specification - X must never happen. That's somewhat easier. You start with subscript checking and arithmetic overflow, and go up from there.
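For example, with a SAT-based model checker like CBMC, a negative spec of this kind is just a harness plus the right checks. A minimal sketch, using the standard CBMC convention that undefined `nondet_*` functions return arbitrary values:

```c
/* Negative spec: "no out-of-bounds access, no signed overflow", checked
 * over all inputs. Run with: cbmc --bounds-check --signed-overflow-check
 * harness.c. A minimal sketch, not production code. */
int nondet_int(void); /* CBMC: undefined functions return arbitrary values */

int table[16];

int lookup(int i) {
    return table[i]; /* --bounds-check fails here unless i is constrained */
}

int add(int a, int b) {
    return a + b; /* --signed-overflow-check fails here without constraints */
}

int main(void) {
    int i = nondet_int();
    __CPROVER_assume(i >= 0 && i < 16); /* contract: caller passes a valid index */
    lookup(i);                          /* now verified safe for every such i */

    int a = nondet_int(), b = nondet_int();
    __CPROVER_assume(a >= 0 && a < 1000 && b >= 0 && b < 1000);
    add(a, b);                          /* overflow impossible within the contract */
    return 0;
}
```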
That said, most of the approaches people are taking seem too hard for the wrong reasons. The proofs are separate from the code. The notations are often different. There's not enough automation. And, worst of all, the people who do this stuff are way into formalism.
If you do this right, you can discharge over 90% of the proofs with a SAT solver, and the theorems you have to write for the hard cases are often reusable.
> Some problems are straightforward to specify. A file system is a good example.
I’ve got to disagree with this - if only specifying a file system were easy!
From the horse’s mouth, the authors of the first “properly” verified FS (that I’m aware of), FSCQ, note that:
> we wrote specifications for a subset of the POSIX system calls using CHL, implemented those calls inside of Coq, and proved that the implementation of each call meets its specification. We devoted substantial effort to building reusable proof automation for CHL. However, writing specifications and proofs still took a significant amount of time, compared to the time spent writing the implementation
(Reference: https://dspace.mit.edu/bitstream/handle/1721.1/122622/cacm%2...)
And that’s for a file system that only implements a subset of POSIX system calls!
> Now you have to somehow write down what all that does.
I think the core difficulty is that there's no way to know whether your spec is complete. The only automatic feedback you can hope to get is that, if you add too many constraints, the prover can find a contradiction between them. But that's all (that I'm aware of, at least).
Let's take an extremely simple example: Proving that a sort algorithm works correctly. You think, "Aha! The spec should require that every element of the resulting list is >= the previous element!", and you're right -- but you are not yet done, because a "sorting algorithm" that merely returns an empty list also satisfies this spec.
Suppose you realise this, and think: "Aha! The output list must also be the same size as the input list!" And again, you're right, but you're still not done, because a "sorting algorithm" that simply returns inputSize copies of the number 42 also satisfies this new spec.
Suppose you notice this too, and think: "Aha! Every element in the input should also appear the same number of times in the output!" You're right -- and now, finally, your spec is actually complete. But you have no way to know that, so you will likely continue to wonder if there is some additional constraint out there that you haven't thought of yet... And this is all for one of the tidiest, most well-specified problems you could hope to encounter.
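For concreteness, here is roughly how those three constraints could be written as a bounded model-checking harness in C. This is illustrative only; the names are mine, and a bounded check (N = 4) is of course weaker than a full proof:

```c
/* Bounded harness for the three sorting constraints above. Illustrative
 * only: checking up to a fixed bound is weaker than a full proof. */
#include <assert.h>
#include <stddef.h>

#define N 4

int nondet_int(void); /* model checker supplies arbitrary values */

/* The implementation under test: a plain insertion sort. */
static void sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
        a[j] = key;
    }
}

static size_t count(const int *a, size_t n, int v) {
    size_t c = 0;
    for (size_t i = 0; i < n; i++)
        c += (a[i] == v);
    return c;
}

int main(void) {
    int in[N], out[N];
    for (size_t i = 0; i < N; i++)
        in[i] = out[i] = nondet_int();
    sort(out, N);

    /* 1. Every element is >= its predecessor. */
    for (size_t i = 1; i < N; i++)
        assert(out[i - 1] <= out[i]);

    /* 2. Same size as the input: true by construction here (both length N). */

    /* 3. Every input value occurs the same number of times in the output;
     *    with equal lengths this pins down the multiset exactly. */
    for (size_t i = 0; i < N; i++)
        assert(count(in, N, in[i]) == count(out, N, in[i]));
    return 0;
}
```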
> is that only for some problems is the specification simpler than the code.
Regardless of the proof size, isn't the win that the implementation is proven sound, at least at the protocol level, if not at the implementation level, depending on the automated theorem prover?
I will just float this idea for consideration, as I cannot judge how plausible it is: is it possible that LLMs or their successors will soon be able to make use of formal methods more effectively than humans? I don't think I am the only person surprised by how well they do at informal programming. (On the other hand, there is a dearth of training material. Maybe a GAN approach would help here?)
I agree, writing and maintaining specifications can be cumbersome. But I've felt that learning how to write formal specifications to keep the code in check has made me a better programmer and system architect in general, even when I do not use the formal spec tooling.
> The trouble with formal specification, from someone who used to do it, is that only for some problems is the specification simpler than the code.
I think most problems that one would encounter professionally would be difficult to formally specify. Also, how do you formally specify a GUI?
> The proofs are separate from the code. The notations are often different. There's not enough automation. And, worst of all, the people who do this stuff are way into formalism.
I think we have to ask what exactly are we trying to formally verify. There are many kinds of errors that can be caught by a formal verification system (including some that are in the formal spec only, which have no impact on the results). It may actually be a benefit to have proofs separate from code, if they can be reconciled mechanically and definitively. Then you have essentially two specs, and can cross-reference them until you get them both to agree.
I have been formally verifying software written in C for a while now.
> is that only for some problems is the specification simpler than the code.
Indeed. I had to fall back to using a proof assistant to verify the code used to build container algorithms (e.g. balanced binary trees), because the problem space gets really difficult in SAT when you need to verify, for instance, memory safety for an arbitrary container operation. Specifying the problem and proving the supporting lemmas takes far more time than proving the code correct with respect to that specification.
> If you do this right, you can get over 90% of proofs with a SAT solver
So far, in my experience, 99% of code that I've written can be verified via the CBMC / CProver model checker, which uses a SAT solver under the covers. So, I agree.
I only need to reach for CiC when dealing with things that the model checker can't reasonably verify, even with some squinting. For instance, proving the containers themselves correct with respect to the same kinds of function contracts I use in model checking gets dicey, since these involve arbitrary and complex recursion. But verifying code that uses these containers is actually quite easy to do via shadow methods. With containers, we only really care whether we can verify the contracts for how they are used, and whether client code properly manages ownership semantics: placing an item into the container or taking an item out, referencing items in the container, not holding onto dangling references once a lock on a container is released, and so on. In these cases, simpler models of these containers that can be trivially model checked can be substituted in, as sketched below.
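To make the shadow-method idea concrete, here is a rough sketch of what such a substitute model might look like (all names are hypothetical, and the real containers are far more involved):

```c
/* Sketch of a "shadow" container model (names hypothetical): when
 * verifying client code, the real balanced tree is swapped for this
 * trivially model-checkable stand-in that only tracks what the client's
 * contracts care about: membership and ownership of values. */
#include <stddef.h>

#define SHADOW_CAP 4 /* a small bound keeps the SAT problem trivial */

typedef struct {
    long keys[SHADOW_CAP];
    void *values[SHADOW_CAP];
    size_t count;
} shadow_tree;

/* Insert transfers ownership of value to the tree; returns 0 when full. */
static int shadow_insert(shadow_tree *t, long key, void *value) {
    if (t->count >= SHADOW_CAP)
        return 0;
    t->keys[t->count] = key;
    t->values[t->count] = value;
    t->count++;
    return 1;
}

/* Delete transfers ownership back to the caller (NULL if absent). */
static void *shadow_delete(shadow_tree *t, long key) {
    for (size_t i = 0; i < t->count; i++) {
        if (t->keys[i] == key) {
            void *v = t->values[i];
            t->keys[i] = t->keys[t->count - 1];     /* swap-remove */
            t->values[i] = t->values[t->count - 1];
            t->count--;
            return v;
        }
    }
    return NULL;
}
```

Client code gets model checked against the shadow; showing that the real balanced tree refines the shadow is the part that needs the proof assistant.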
> Now, sometimes you just want a negative specification - X must never happen. That's somewhat easier.
Agreed. The abstract machine model I built up for C is what I call a "glass machine": anything that might be UB, or that could involve an unsafe memory access, causes a crash. Hence, quantified over any acceptable initial state and input parameters that match the function contract, these negative specifications only require stepping over all instructions without hitting a crash condition. If a developer can single-step, and learns how to perform basic case analysis or basic induction, the developer can easily walk proofs of these negative specifications.
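Not my actual tooling, but the general shape of such a negative specification, written as CBMC-flavored C, looks roughly like this: would-be UB becomes an explicit crash, and the spec is simply that no crash is reachable from any state satisfying the contract.

```c
/* Illustration of the "glass machine" idea (not the actual tooling):
 * unsafe access is turned into an explicit crash, and the negative spec
 * is "no crash for any state satisfying the function contract". */
#include <assert.h>
#include <stddef.h>

int nondet_int(void);
size_t nondet_size_t(void);

/* Contract: buf != NULL and i < len. */
static int get(const int *buf, size_t len, size_t i) {
    assert(buf != NULL); /* glass machine: would-be UB becomes a crash */
    assert(i < len);
    return buf[i];
}

int main(void) {
    int buf[8];
    for (size_t i = 0; i < 8; i++)
        buf[i] = nondet_int();
    size_t i = nondet_size_t();
    __CPROVER_assume(i < 8); /* quantify over states that meet the contract */
    get(buf, 8, i);          /* proof: stepping through never hits a crash */
    return 0;
}
```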