I'm familiar with source control like git, and if I could check all my files into some sort of super git repo, then a git status would show me if any of my files have become corrupt. Git isn't great with large files, though, and git LFS seems complicated and possibly overkill for this situation. I only need the "has a file changed" part, so I'm wondering if anyone has a good way to handle this. I don't need versioning, as I already have that with my backups.
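For context, what I'm after is basically a checksum manifest: hash everything once, then re-hash later and diff. A minimal sketch of the idea in Python (the manifest filename and paths are just placeholders I made up, not any particular tool's convention):

```python
import hashlib
import json
import sys
from pathlib import Path

MANIFEST = "checksums.json"  # placeholder name for the stored baseline


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(root: Path) -> dict:
    """Map relative path -> sha256 for every regular file under root."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}


def main(root_dir: str) -> None:
    root = Path(root_dir)
    manifest_path = root / MANIFEST
    current = build_manifest(root)
    current.pop(MANIFEST, None)  # don't hash the manifest itself

    if manifest_path.exists():
        old = json.loads(manifest_path.read_text())
        for name, digest in old.items():
            if name not in current:
                print(f"MISSING  {name}")
            elif current[name] != digest:
                print(f"CHANGED  {name}")
        for name in current.keys() - old.keys():
            print(f"NEW      {name}")
    else:
        # First run: record the baseline. Delete the manifest to re-baseline.
        manifest_path.write_text(json.dumps(current, indent=2))


if __name__ == "__main__":
    main(sys.argv[1])
```

The obvious catch is that a bare checksum can't tell a legitimate edit from bit rot, which is fine for archives that are supposed to be static but not for working directories.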
Another method would be to run rsync with "--dry-run" and see what has changed between your backups that way. If you have sets of files you know should never change, you could filter on them as a canary.
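Something like the following, as a rough sketch (the source and backup paths are placeholders); the important part is adding --checksum so rsync actually compares content rather than its default size/mtime quick check:

```python
import subprocess
import sys


def rsync_diff(source: str, backup: str) -> list[str]:
    """Dry-run rsync with --checksum so silent bit flips are reported,
    not just size/mtime differences. Returns the itemized change lines."""
    result = subprocess.run(
        ["rsync", "--dry-run", "--archive", "--checksum", "--itemize-changes",
         source.rstrip("/") + "/", backup],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line.strip()]


if __name__ == "__main__":
    # e.g. rsync_diff("/data/photos", "/mnt/backup/photos")
    for line in rsync_diff(sys.argv[1], sys.argv[2]):
        print(line)
```

Note that a mismatch only tells you the two copies differ, not which side rotted, so you still want a third copy or a known-good set to arbitrate.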
I wrote up a lengthy description of the problem and several different solutions here:
https://photostructure.com/faq/how-do-i-safely-store-files
I personally use a Synology and an Unraid box, both using btrfs, with scheduled data scrubs and periodic SMART health checks.
I also rsnapshot to an external drive, just formatted as ext4, as Yet Another Copy of my stuff.
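The scheduled checks in that kind of setup boil down to something like this sketch (mount points and drive names are placeholders; on a Synology you'd use its built-in scheduler and scrub settings rather than running this yourself):

```python
import subprocess

# Placeholders -- adjust to your own mount points and drives.
BTRFS_MOUNTS = ["/mnt/pool"]
DRIVES = ["/dev/sda", "/dev/sdb"]


def scrub(mount: str) -> None:
    # Start a scrub and wait for it (-B): btrfs re-reads data, verifies it
    # against its checksums, and repairs errors where redundancy allows.
    subprocess.run(["btrfs", "scrub", "start", "-B", mount], check=True)
    subprocess.run(["btrfs", "scrub", "status", mount], check=True)


def smart_health(drive: str) -> None:
    # Quick SMART health summary; a failing assessment is worth reading in full.
    subprocess.run(["smartctl", "-H", drive], check=False)


if __name__ == "__main__":
    for m in BTRFS_MOUNTS:
        scrub(m)
    for d in DRIVES:
        smart_health(d)
```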
Look into par2 files (https://github.com/Parchive/par2cmdline).
Arrange your backup data in some logical way, then generate par2 files for each logical group (e.g., per directory containing a set of files) with an amount of redundancy you feel comfortable with.
You can then periodically run par2 scans of the static backups to both detect and repair changes (provided the damage is not larger than the amount of redundancy you originally requested).
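A rough sketch of that workflow, driving par2cmdline once per directory (the recovery-set name, the backup root, and the 10% redundancy figure are just examples, not anything the tool requires):

```python
import subprocess
from pathlib import Path

REDUNDANCY = "-r10"  # example: 10% recovery data per directory


def par2_name(directory: Path) -> Path:
    # One recovery set per directory; the base name is arbitrary.
    return directory / "recovery.par2"


def create(directory: Path) -> None:
    # Protect the regular files directly in this directory (skip par2's own files).
    files = [str(p) for p in directory.iterdir()
             if p.is_file() and p.suffix != ".par2"]
    if files:
        subprocess.run(["par2", "create", REDUNDANCY, str(par2_name(directory))] + files,
                       check=True)


def verify(directory: Path) -> bool:
    # par2 exits non-zero if any protected file is damaged or missing.
    return subprocess.run(["par2", "verify", str(par2_name(directory))]).returncode == 0


def repair(directory: Path) -> None:
    # Only succeeds if the damage fits within the recovery data you created.
    subprocess.run(["par2", "repair", str(par2_name(directory))], check=True)


if __name__ == "__main__":
    root = Path("/mnt/backup")  # placeholder backup root
    for d in (p for p in root.rglob("*") if p.is_dir()):
        if not par2_name(d).exists():
            create(d)
        elif not verify(d):
            print(f"Damage detected in {d}; attempting repair")
            repair(d)
```

The nice property over plain checksums is that the same recovery files that detect the damage can also undo it, as long as you sized the redundancy generously enough.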