The article dharmab linked is pretty well known, and I think it does a fair job of describing one of the key differences. Redundancy, reliability, durability and safety are not all the same thing in vital systems. In web systems and many other software-driven systems, being redundant is good enough; that is not true in vital systems.
It is critical that each device being designed define what it means for that device to be reliable, redundant (if necessary), durable and safe. In some medical systems, failing to a known safe condition is all that is required; in others, the device needs multiple redundant systems to prevent a failure under a defined set of normal circumstances.
In health care, failure of systems does happen, and when it happens there are generally processes and procedures defined to manage it. In almost every case, when a machine fails there is a manual method to take over. I've not worked on the radiological side (outside of some CV work), but I would assume they have separate systems monitoring exposures and causing fail-safe shutdowns, etc. That is generally the requirement (and a common pattern) for any medical device that isn't supporting life: if it fails, it needs to fail to a known safe state.
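To make that "separate system causing fail-safe shutdowns" pattern concrete, here's a minimal sketch (names, thresholds, and the shutdown callback are all illustrative, not from any real device): an independent watchdog that trips the device into a known safe state either when a dose limit is exceeded or when the main controller goes silent.

```python
import time

class ExposureWatchdog:
    """Independent monitor: fails safe on over-dose or lost heartbeat."""

    def __init__(self, dose_limit, heartbeat_timeout, shutdown):
        self.dose_limit = dose_limit                # max allowed cumulative dose
        self.heartbeat_timeout = heartbeat_timeout  # seconds of silence tolerated
        self.shutdown = shutdown                    # callback forcing the safe state
        self.cumulative_dose = 0.0
        self.last_heartbeat = time.monotonic()
        self.tripped = False

    def heartbeat(self, dose_increment):
        """Called by the main controller with each measured exposure step."""
        self.last_heartbeat = time.monotonic()
        self.cumulative_dose += dose_increment
        if self.cumulative_dose > self.dose_limit:
            self._trip("dose limit exceeded")

    def check(self):
        """Polled on an independent timer: silence also means fail safe."""
        if time.monotonic() - self.last_heartbeat > self.heartbeat_timeout:
            self._trip("controller heartbeat lost")

    def _trip(self, reason):
        if not self.tripped:
            self.tripped = True
            self.shutdown(reason)  # e.g. de-energize the source, close the shutter
```

The important property is that *absence* of information also fails safe: a hung controller that stops sending heartbeats trips the shutdown just as an explicit limit violation does.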
Honestly this is a huge topic I could ramble about for a long time, but in the end it comes down to defining requirements, defining outcomes, defining testing, testing, and good processes to handle failure conditions.
If the power goes out and the backup generator fails during a procedure, people take out their cell phones and use the flashlight. If the patient is intubated, you can manually ventilate the patient.
If the equipment you need for an emergency procedure is not working, there is often an older, lower tech method to do the same procedure, or at least stabilize the patient.
If the electronic medical record system goes down, you can use pen and paper. Those records are then scanned in or entered into the EMR when it's back up.
Unexpected things happen a lot in medicine, so it's good to always have contingency plans.
Source: I'm an interventional radiologist.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4971270/
The paper seems to suggest this has happened quite a few times.
There's no "100% fail-proof" solution; it's about determining the modes of failure and addressing them individually and in combination, minimizing the risk and defining an acceptable level of it. If you accept that failures are inevitable, which they are (some likely, some very rare), you can prepare for them via redundancy, fault-tolerant design, etc. It's also about doing proper system design and applying methodologies such as "Failure modes, effects, and diagnostic analysis" (FMEDA)[1] and "Fault Tree Analysis" (FTA)[2], and accounting for their results.
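As a toy illustration of the FTA side (all probabilities invented, and basic events assumed independent, which real analyses have to justify): a fault tree combines basic-event failure probabilities through AND/OR gates to estimate the probability of the top-level hazard.

```python
from math import prod

def and_gate(probs):
    """All inputs must fail (independent events): multiply probabilities."""
    return prod(probs)

def or_gate(probs):
    """Any input failing is enough: 1 - P(none fail)."""
    return 1 - prod(1 - p for p in probs)

# Toy tree: unintended exposure requires an initiating fault (software OR
# sensor) AND both layers of protection (interlock AND watchdog) failing.
p_software_fault = 1e-3
p_sensor_fault = 5e-4
p_initiating = or_gate([p_software_fault, p_sensor_fault])

p_interlock_fails = 1e-4
p_watchdog_fails = 1e-4
p_protection_fails = and_gate([p_interlock_fails, p_watchdog_fails])

p_top = and_gate([p_initiating, p_protection_fails])
print(f"P(top event) ~= {p_top:.2e}")
```

Even this toy version shows why independent layers matter: the two 1e-4 protection failures multiply, driving the top event far below either initiating fault alone.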
There are standards like IEC 61508[3], or its automotive adaptation ISO 26262, against which certain engineering disciplines and fields must be audited in order to pass certification and be able to market the product. In the case of ISO 26262 it's not mandatory (it will be soon), but good luck explaining to any judge or jury why you are the only company in existence not applying it in your vehicle design.
[1] https://en.wikipedia.org/wiki/Failure_modes,_effects,_and_di...
[2] https://en.wikipedia.org/wiki/Fault_tree_analysis
[3] https://en.wikipedia.org/wiki/IEC_61508
Further reading: ISO 13485 (quality system), ISO 14971 (risk management), IEC 60812 (FMEA).
Basically, you work with low-level languages, be intentional about everything, and prove that the code you wrote works exactly as intended.
Note that I'm probably confusing some of these.
But didn't one hospital in India just run out of oxygen in their wall supply? Everyone died.