I would focus on mount options that limit writes (e.g., relatime/noatime) or on putting ~/.cache on tmpfs.
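A minimal fstab sketch, assuming a hypothetical user "alice" and an arbitrary 2G cache budget (anything on tmpfs is lost at reboot, which is usually fine for a cache):

    # /etc/fstab
    # skip access-time writes on reads
    /dev/sda2  /                   ext4   defaults,noatime               0  1
    # keep the per-user cache in RAM
    tmpfs      /home/alice/.cache  tmpfs  noatime,nodev,nosuid,size=2G   0  0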
In my experience, ~/.cache sees the most frequent writes during normal desktop usage. A lot of applications ignore the XDG standards and create their own snowflake folder directly in $HOME. You might want to watch for the ones making a lot of writes and replace their folders with symlinks to where they belong. (This quickly became a frustrating battle that I lost.)
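The relocation itself is trivial; the losing part is that some applications resolve the symlink or recreate the directory on their next run. A sketch, with "someapp" standing in for a hypothetical offender:

    # move the noisy directory under ~/.cache and leave a symlink behind
    mv ~/.someapp ~/.cache/someapp
    ln -s ~/.cache/someapp ~/.someapp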
Auditd can, for example, track every write. Track writes over a good sample period of typical use, then make whatever changes are needed: database tuning, moving specific files to tmpfs, changing the way you do backups, reducing writes to syslog, changing filesystem mount options, etc.
Auditd is a little complex, but it's fairly easy to find write-ups on how to monitor writes and generate usage reports.
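A sketch of that workflow, assuming /home/alice is the area of interest and "homewrites" is an arbitrary key name:

    # watch for writes and attribute changes under the home directory
    auditctl -w /home/alice -p wa -k homewrites
    # ...use the machine normally for a few days, then summarize
    ausearch -k homewrites --interpret | less
    aureport --file --summary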
See https://wiki.archlinux.org/index.php/Improving_performance#R...
But I think you have to assess the crash resistance and repairability of filesystems, not just worry about write amplification. I think too much is made of SSD wear. The exceptions are the consumer class of SD cards and USB flash drives: those are junk to depend on for persistent storage, best suited for occasional use, and all eventually fail. If you're using such flash, e.g. in an embedded device, you probably want to go with industrial-quality flash to substantially improve reliability.
Consider putting swap on zram [4] or using zswap [5]. I've used both, typically with a small pool of less than half of RAM. I have no metric for clearly declaring a winner; either is an improvement over conventional swap. Hypothetically, zswap should be better because it's explicitly designed for this use case, whereas zram is a compressed RAM disk on which you could put anything, including swap. But in practice, I can't tell a difference performance-wise.
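With the zram-generator linked in [4], a half-of-RAM swap device is just a config file; zstd here is my habit, not a requirement:

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd

    # for zswap instead, enable it via kernel boot parameters, e.g.:
    #   zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=20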
[1] https://arxiv.org/abs/1707.08514
[2] https://lwn.net/Articles/520829/
[3] https://www.usenix.org/conference/fast13/technical-sessions/...
[4] https://www.kernel.org/doc/Documentation/blockdev/zram.txt and https://github.com/systemd/zram-generator
[5] https://docs.kernel.org/admin-guide/mm/zswap.html
While there are big gaps in write amplification for metadata writes, on macro benchmarks all filesystems produce similar results.
btrfs has the biggest write amplification factor (WAF), but you can enable transparent compression globally, and I suspect that difference alone would put it ahead of the others.
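For example, an fstab entry along these lines (zstd level 3 is btrfs's default, spelled out here only to make it visible):

    # transparent compression for the whole filesystem
    UUID=...  /  btrfs  defaults,noatime,compress=zstd:3  0  0

Note that existing files only get compressed when rewritten; btrfs filesystem defragment -r -czstd can recompress them in place.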
[0] Analyzing IO Amplification in Linux File Systems: https://arxiv.org/abs/1707.08514