I think this is a horrible strategy. Data needs to be verified periodically and migrated to new formats.
I have files that started on floppy disk in 1991. Over the years they migrated to hard drives, QIC tape, PD phase-change optical discs, CD-RW, DVD+RW, and back to hard drives. The file systems have changed along the way too: FAT16, ext2, ISO9660, UDF, ext3, and now ext4.
I strongly suggest creating a list of checksums. A simple for loop that runs md5sum, sha256sum, or similar on every file and stores the output along with the filenames is enough. You can then run it again later, compare, and see whether all of the data is still intact.
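Something like this is enough (the /data path and manifest location are just examples; swap in whichever hash tool you prefer):

    # Build a manifest of checksums for every file under /data.
    # Keep the manifest outside that tree so it doesn't end up hashing itself.
    find /data -type f -print0 | xargs -0 sha256sum > ~/checksums.sha256

    # Later: re-hash everything and print only the files that no longer match
    sha256sum --check --quiet ~/checksums.sha256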
Some filesystems like btrfs and zfs will do the checksum calculation every time you read the file.
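If you go that route, a periodic scrub forces the filesystem to read and verify everything, not just the files you happen to open. The pool and mount point names here are placeholders:

    # ZFS: verify every block in the pool, then check the result
    zpool scrub tank
    zpool status tank

    # btrfs: same idea
    btrfs scrub start /mnt/data
    btrfs scrub status /mnt/data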
Personally I use ext4 and run cshatag on all files every 6 months. This stores a timestamp and sha256 checksum as ext4 extended attribute metadata. If the file contents have changed but the timestamp has not, it reports corruption.
https://github.com/rfjakob/cshatag
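A run over a whole tree looks something like this (the path is a placeholder, and I believe -q just hides the per-file "ok" lines; check the README for the exact flags):

    # Compute the sha256 of each file and compare it against the stored
    # xattr; anything reported as corrupt gets restored from backup.
    find /data -xdev -type f -exec cshatag -q {} \;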
You can also create parity info with Parchive. It writes "sidecar" files with the same name as the main file plus a suffix. If the file develops errors, you can use the parity data to reconstruct it. You can also adjust how much parity data to create (more parity takes more space).
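For example, with par2cmdline (file names are placeholders; -r sets the redundancy percentage):

    # Create ~10% parity data alongside the file
    par2 create -r10 bigfile.tar

    # Later: check the file, and repair it from the parity blocks if needed
    par2 verify bigfile.tar.par2
    par2 repair bigfile.tar.par2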
How many years do you mean by "long-term"? 5 years? 10 years? 50 years?