To help make the recovery somewhat less manual, I've written a Python script that combines the information produced by these tools:
* istat is invoked to identify where on the disk the information about files and directories (and the MFT) resides. The map produced by ddrescue is used to avoid touching objects for which some or all data has not yet been recovered from the dodgy disk (see the mapfile-parsing sketch after this list).
* fls is invoked to get directory listings
* icat is invoked to recover files where ddrescue's map shows that all required data has been read
* the outputs of the above are sanity-checked before use, to avoid "recovering" corrupted data
* the script is idempotent and may be run repeatedly: since ddrescue is designed to incrementally update the target image and map, each new run of the script automatically picks up newly read locations and recovers any newly recoverable files. fls and istat are super slow, so their outputs are cached to aid with this (see the caching sketch below).
* the script produces a list of disk-surface regions of configurable size (e.g. 10G, 1G, 100M, whatever), sorted by the amount of not-yet-read data that files/directories still need, and marked to indicate the presence of known bad blocks, to help prioritise scanning effort (see the prioritisation sketch below)
* a wishlist of files/directories can be provided; if so, the above map is restricted to the wishlist, and the set of ddrescue commands required to read just the data the wishlist needs is also produced (see the last sketch below)
* files that include known bad blocks do not contribute to the above maps, to avoid unnecessary wear and tear on the drive for objects already determined to be unrecoverable
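
For concreteness, here's a minimal sketch of how a ddrescue mapfile can be consumed (illustrative, not the script's actual code). After a comment header and one status line, a mapfile is a series of `pos size status` triples in hex, where `+` means finished, `-` means bad sector, and `?`/`*`/`/` are various flavours of not-yet-read:

```python
def parse_mapfile(path):
    """Parse a GNU ddrescue mapfile into (pos, size, status) triples.

    Status chars: '+' finished, '-' bad sector, '?' non-tried,
    '*' non-trimmed, '/' non-scraped.
    """
    blocks = []
    seen_status_line = False
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            if not seen_status_line:
                seen_status_line = True  # "current_pos current_status" line
                continue
            pos, size, status = line.split()[:3]
            blocks.append((int(pos, 0), int(size, 0), status))
    return blocks

def fully_read(blocks, offset, length):
    """True if every byte of [offset, offset + length) has status '+'."""
    end = offset + length
    return all(status == '+'
               for pos, size, status in blocks
               if pos < end and pos + size > offset)  # overlapping blocks only
```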
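
The sleuthkit calls themselves are plain subprocess invocations, roughly like the sketch below, assuming the image holds a single NTFS filesystem (a partitioned image would additionally need fls/icat's -o offset option). The cache directory and function names are made up for the example:

```python
import subprocess
from pathlib import Path

CACHE_DIR = Path("fls-cache")  # hypothetical cache location

def fls_listing(image, dir_inode, fstype="ntfs"):
    """List a directory with fls, caching the output keyed by inode,
    since fls is painfully slow on a big image."""
    CACHE_DIR.mkdir(exist_ok=True)
    cache = CACHE_DIR / f"{dir_inode}.txt"
    if cache.exists():
        return cache.read_text()
    out = subprocess.run(["fls", "-f", fstype, str(image), str(dir_inode)],
                         check=True, capture_output=True, text=True).stdout
    cache.write_text(out)
    return out

def recover_file(image, inode, dest, fstype="ntfs"):
    """Dump one file's content with icat; the caller is expected to have
    checked first (e.g. via fully_read) that all its data runs are present."""
    with open(dest, "wb") as f:
        subprocess.run(["icat", "-f", fstype, str(image), str(inode)],
                       check=True, stdout=f)
```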
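
The prioritisation map boils down to bucketing the still-unread bytes that wanted files occupy into fixed-size regions. A rough sketch (`needed_extents` would come from istat's data-run output; the exclusion of whole files containing bad blocks, per the last bullet, is assumed to happen before this):

```python
from collections import defaultdict

def build_priority_map(needed_extents, blocks, region_size=1 << 30):
    """Return [(region_start, unread_bytes, has_bad_blocks)] sorted by
    unread_bytes descending, for regions of region_size bytes (1G here).

    needed_extents: (offset, length) byte ranges that wanted files occupy.
    blocks: parsed mapfile, as (pos, size, status) triples.
    """
    unread = defaultdict(int)
    for off, length in needed_extents:
        end = off + length
        for pos, size, status in blocks:
            if status in '+-':
                continue  # already read, or known bad
            lo, hi = max(pos, off), min(pos + size, end)
            if lo < hi:
                # good enough for a sketch; a real version would split
                # overlaps that straddle a region boundary
                unread[lo // region_size] += hi - lo
    bad = {r for pos, size, status in blocks if status == '-'
           for r in range(pos // region_size,
                          (pos + size - 1) // region_size + 1)}
    return sorted(((r * region_size, n, r in bad) for r, n in unread.items()),
                  key=lambda t: t[1], reverse=True)
```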
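
Generating the ddrescue command set for a wishlist is then just a matter of emitting one invocation per still-needed extent, using ddrescue's -i (input position) and -s (size) options to restrict it to that range; the device/image/mapfile names here are placeholders:

```python
def ddrescue_commands(extents, device="/dev/sdX",
                      image="disk.img", mapfile="disk.map"):
    """Print one ddrescue command per (offset, length) extent still needed.

    A real version would merge adjacent extents and align them to sector
    boundaries first.
    """
    for off, length in extents:
        print(f"ddrescue -i {off} -s {length} {device} {image} {mapfile}")
```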
It started out as a hacky single-purpose tool, but I'm now considering cleaning the whole thing up and putting it on GitHub once I'm done.