You can formally verify software/algorithms for correctness, but formal verification is difficult and takes a long time.
So why can't we do it piecemeal? Imagine something like StackOverflow, but instead of posting a question, you post a formal specification. The answers to your post would then be formally verified pieces of software.
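As a sketch of what a "post" could look like (Dafny-style notation, purely illustrative; the names Sorted and Sort are made up for the example):

    // The posted "question": a contract with no body.
    // An "answer" is any body the verifier accepts against it.
    predicate Sorted(s: seq<int>) {
      forall i, j :: 0 <= i < j < |s| ==> s[i] <= s[j]
    }

    method Sort(a: array<int>)  // bodyless on purpose: the body is the answer
      modifies a
      ensures Sorted(a[..])                           // output is ordered
      ensures multiset(a[..]) == multiset(old(a[..])) // and is a permutation of the input

Accepted answers would differ only in the body they supply; the contract stays fixed.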
The advantage of this approach would be that the results would be useful forever. Right now we're always building on foundations of sand, because we can never really be sure that our algorithms are correct/optimal.
You could then go even further and connect the verified algorithms/components from this hypothetical StackOverflow. So instead of writing a new formally verified algorithm from scratch, you piece it together from parts that have each been formally verified on their own. That way you only have to verify the new additions.
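For example (again a Dafny-flavoured sketch; Median and the {:axiom} stand-in are invented for illustration), the verifier checks new code against the contracts of the reused parts alone, so only the new glue needs a fresh proof:

    predicate Sorted(s: seq<int>) {
      forall i, j :: 0 <= i < j < |s| ==> s[i] <= s[j]
    }

    // Stand-in for a component already verified elsewhere in the
    // repository; callers only ever see its contract, not its body.
    method {:axiom} Sort(a: array<int>)
      modifies a
      ensures Sorted(a[..])
      ensures multiset(a[..]) == multiset(old(a[..]))

    // The new addition: verified against Sort's contract alone.
    method Median(a: array<int>) returns (m: int)
      requires a.Length > 0
      modifies a
      ensures m in a[..]
    {
      Sort(a);
      m := a[a.Length / 2];
    }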
This way we could build a database/repository of "eternal" software. Eternal in the sense that once a problem is solved, it's like a mathematical proof, it's solved forever (as long as the verification was performed correctly).
You could even add more things to the verification step, like performance analysis - that way we could immediately tell, for each of these algorithms/components, how fast it performs.
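One known way to fold performance into the same machinery is to thread a ghost cost counter through the code and prove bounds on it (a toy sketch; LinearSearch is an invented example, and loop iterations stand in for the cost measure):

    method LinearSearch(a: array<int>, key: int) returns (idx: int, ghost cost: nat)
      ensures idx >= 0 ==> idx < a.Length && a[idx] == key
      ensures cost <= a.Length   // machine-checked: at most a.Length iterations
    {
      idx := 0;
      cost := 0;
      while idx < a.Length && a[idx] != key
        invariant 0 <= idx <= a.Length
        invariant cost == idx
      {
        idx := idx + 1;
        cost := cost + 1;
      }
      if idx == a.Length {
        idx := -1;  // not found
      }
    }

Real asymptotic analysis is harder than this, but the ghost counter is erased at compile time and the bound gets checked like any other postcondition.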
This of course wouldn't solve software development - instead it would shift the challenge to correctly formulating the problem in a formal specification.
Is this possible? If so, why isn't this being done?
2) it's a pretty rare skillset with specialist tooling, and it doesn't really scratch the itch that draws people to most open source work
3) "correctly formulating the problem in a formal specification" is really hard, given the inability of most stack overflow posters to formulate questions rigorously
4) for almost all projects, we value mutability over eternity
Excerpt: "The incredible progress here is evident in the IronFleet project: by using advanced SMT solvers and a cutting-edge verification language, Microsoft was able to write 5,000 lines of verified Dafny code in only 3.7 person-years! That’s a blazing-fast rate of four whole lines a day."
[0] https://hillelwayne.com/post/why-dont-people-use-formal-meth...
That said, my favorite verified software packages are seL4 (https://sel4.systems) and the CompCert C Compiler (https://compcert.org/compcert-C.html).
5) Right now it's probably harder (in most cases) to write a correct formal spec than to actually implement that spec - otherwise formal specs would be much more popular in general. (The sketch after 6) shows how a spec that verifies can still be wrong.)
6) The proportion of software which is actually algorithmically interesting these days is tiny. I expect something like 99%-99.999% of lines of code are basically stuff that's been done a thousand times before.
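To make 5) concrete, here's a toy Dafny sketch (names invented): a sort spec that forgets the permutation clause is satisfied by code that simply destroys the data, and the verifier will happily accept it.

    predicate Sorted(s: seq<int>) {
      forall i, j :: 0 <= i < j < |s| ==> s[i] <= s[j]
    }

    // Passes the verifier, because the spec only asks for ordered output;
    // nothing says the output must be a permutation of the input.
    method NotReallySort(a: array<int>)
      modifies a
      ensures Sorted(a[..])
    {
      forall i | 0 <= i < a.Length {
        a[i] := 0;
      }
    }

The verifier is only as good as the spec, and nothing flags a spec as incomplete.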
- Correctness proofs of even simple algorithms can be research-grade problems that take several person-years of effort from (hard-to-find) experts, so you're not likely to get casual Stack Overflow-style contributions, given the order-of-magnitude difference in time and expertise required.
- The time required to verify correct software will come at a major cost to other factors like features and speed. I don't think 100% correctness is a driving factor for many domains like web servers, browsers and IDEs, so your database of correct software isn't going to be widely used until it catches up on other factors.
In all seriousness, the idea is very interesting.
I haven’t given this much thought, and maybe I don’t fully understand the proposed idea, but how would you deal with improvements/advancements to already-solved problems? Like when the Fenwick tree was invented.
Or how do you deal with competing solutions to the same problem, like quicksort vs. mergesort?
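To make that second question concrete (Dafny-flavoured sketch; the {:axiom} bodies stand in for implementations verified separately): both would presumably live under the same posted contract, so the repository would need some way to rank entries beyond mere correctness.

    predicate Sorted(s: seq<int>) {
      forall i, j :: 0 <= i < j < |s| ==> s[i] <= s[j]
    }

    // Two competing "answers" to one contract, indistinguishable from a
    // caller's point of view; only cost or ergonomics could separate them.
    method {:axiom} MergeSort(a: array<int>)
      modifies a
      ensures Sorted(a[..])
      ensures multiset(a[..]) == multiset(old(a[..]))

    method {:axiom} QuickSort(a: array<int>)
      modifies a
      ensures Sorted(a[..])
      ensures multiset(a[..]) == multiset(old(a[..]))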