Many important distributed algorithms have been formally verified using proof assistants. The entire field of "formal verification" is nothing but writing machine-checked proofs of the correctness of software & hardware. There are tons of lectures on formal verification.
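As a minimal sketch of the idea (a toy property, not any real verification project), here is a Lean 4 proof that reversing a list twice returns the original list:

    -- A program property stated as a theorem: reversing a list twice
    -- returns the original list. Lean rejects the file if the proof
    -- is wrong. List.reverse_reverse is the standard-library lemma.
    example {α : Type} (l : List α) : l.reverse.reverse = l :=
      List.reverse_reverse l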
1. There are non-constructive proofs, i.e. proofs that show that something exists but do not provide an example (deriving a contradiction from the assumption that it doesn't exist, for instance). Those are not really akin to running code. The Banach-Tarski paradox is a particularly striking example: you can decompose a ball into 5 pieces and reassemble them into 2 balls, each the same size as the original. (A Lean sketch of the constructive/non-constructive contrast follows the links below.)
https://en.wikipedia.org/wiki/Constructive_proof
https://en.wikipedia.org/wiki/Banach–Tarski_paradox
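To make the contrast concrete, a minimal Lean 4 sketch (purely illustrative): a constructive existence proof carries its witness, while a classical principle like excluded middle asserts a fact without computing anything.

    -- Constructive: the proof term literally contains the witness n = 2,
    -- so there is something to "run" and extract.
    example : ∃ n : Nat, n + n = 4 := ⟨2, rfl⟩

    -- Non-constructive flavor: excluded middle yields a disjunction
    -- without saying which side holds, so no witness can be extracted.
    example (p : Prop) : p ∨ ¬p := Classical.em p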
2. Don Knuth famously sent a colleague some code with the warning "Beware of bugs in the above code; I have only proved it correct, not tried it."
https://inspirehep.net/files/20b84db59eace6a7f90fc38516f530e...
The paper linked above describes a calculation using a wavepacket basis parameterized in phase space. I wrote extensive unit tests to confirm everything; I even know I got the signs right.
Errors are really common in "mathematical physics" papers; some of the papers that inspired that work had errors in the middle of long calculations that got past the peer reviewers, and probably past everybody who didn't try to reproduce the results.
Although the calculation was correct, the result wasn't as useful as we'd hoped, because we had no idea how to calculate the Maslov index: a periodic orbit in classical mechanics is no longer a periodic orbit once you consider the change in the shape of the wavepacket, and with no periodic orbit there is no topological invariant. My thesis advisor and another student were able to reuse the math for something that didn't need the Maslov index, so it wasn't a complete loss.
When you tell a mathematician "this function takes an integer" or "that function always returns a string" they will immediately jump to looking for counterexamples to see if they can break it. They do this because they know that one counterexample breaks a proof. Programmers who think that way tend to produce fewer bugs imo.
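Property-based testing tools mechanize exactly that counterexample hunt. A minimal sketch using Python's hypothesis library (the function and the claimed property here are just illustrative):

    from hypothesis import given
    from hypothesis import strategies as st

    def truncate(s: str, n: int) -> str:
        # Claimed property: the result never exceeds n characters.
        return s[:n]

    # hypothesis generates many random inputs and hunts for a
    # counterexample that falsifies the claim, shrinking any
    # failure it finds down to a minimal breaking case.
    @given(st.text(), st.integers(min_value=0, max_value=100))
    def test_truncate_never_exceeds_limit(s: str, n: int) -> None:
        assert len(truncate(s, n)) <= n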
Your ability to catch errors tends to be closely linked to the amount of time you've spent working in a particular area - because you build up a mental suite of test cases to check every claim against. One mistake I see occasionally is jumping into a brand new area of math and moving just as quickly as you would in the old area where you had more expertise, making huge blunders every step of the way. When you enter a new area, you have to go really excruciatingly slowly as you build up the intuition and suite of examples.
Almost every math paper contains an error or two. They are usually easy for an expert to fix (the equivalent of leaving out a semicolon at the end of a line in C++: it takes only a moment to fix if you're familiar with the language, but it's incredibly frustrating to a user who doesn't know how to make the easy fix).
Tests in programming are more akin to double-entry bookkeeping in accounting: you specify the same program twice (not literally; usually you have manually precalculated a few cases, but that's roughly the point).
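In that spirit, a typical unit test is the second "ledger": the same values, precalculated by hand, that the implementation's entries must match. A minimal sketch (names are illustrative):

    # The implementation: sum of the first n positive integers,
    # via the closed form n(n+1)/2.
    def triangle(n: int) -> int:
        return n * (n + 1) // 2

    # The second entry in the ledger: the same values worked out
    # by hand in advance. The test checks that the two books agree.
    def test_triangle() -> None:
        precalculated = {0: 0, 1: 1, 2: 3, 3: 6, 10: 55, 100: 5050}
        for n, expected in precalculated.items():
            assert triangle(n) == expected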
As such, mathematical claims (theorems, lemmas, or anything, really) that require proofs are "integration tested" by being applied elsewhere: if they keep giving reasonable results, that implicitly tests their proofs too.
For sure, there are probably a lot of fringe theorems based on invalid proofs; they just haven't been applied often enough for anyone to notice.
That may be why more people find programming enjoyable but struggle with mathematical proof writing.