So the question is: why does every thread say ZFS has a steep learning curve? Or does that only apply to large-scale/enterprise use? I honestly worry that I'll do something wrong or overlook something that could lead to data loss.
Whether you're a home user or in the enterprise space, all of those layers were traditionally separate. You had a RAID abstraction layer, then a volume manager on top of the RAID, then (maybe) partitioning on the volumes, then file systems on top of those, and so forth. Each step meant very different tooling depending on what you wanted to do. LVM has moved somewhat in ZFS's direction by handling more of this automatically behind its own tooling, although in LVM's case it's still very visible what it's doing and that it's piecing these separate layers together for you.
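To make that concrete, here's a rough sketch of the two approaches. The device names, pool/VG names, and sizes are made up for illustration; the point is just how many separate tools the traditional stack involves versus ZFS doing it all in one place:

```
# Traditional layered stack: a different tool at every layer
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]  # software RAID
pvcreate /dev/md0                   # register it with the volume manager
vgcreate data /dev/md0              # volume group
lvcreate -L 500G -n media data      # carve out a logical volume
mkfs.ext4 /dev/data/media           # put a filesystem on it
mount /dev/data/media /srv/media

# ZFS: pool, redundancy, filesystem, and mount in two commands
zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi
zfs create tank/media               # mounted at /tank/media automatically
```

(In practice you'd want /dev/disk/by-id paths rather than sdX names for the ZFS pool, but the shape of the workflow is the same.)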
So if you learned on systems where you look at hardware block devices, build a RAID array out of them, and then create partitions and file systems on top, ZFS is a very big departure from that. I use ZFS for some things, and I use LVM as well.
Having used both, I generally prefer LVM, although once you get into using it for managing RAID and error correction and all of those things, it also has a fairly steep learning curve. The reason I like LVM is that expanding the system is a lot easier. With LVM I can slap in 20 disks of different sizes and all of that space is available for me to use. I can decide that some volumes need four-way mirrored redundancy and LVM will place the copies where they need to be. I can decide other volumes are fine with an eight-disk RAID 6. Still other volumes may be totally fine as a stripe set across multiple disks. LVM handles all of this magic for me, and I still get one of the chief benefits I like about ZFS, which is data integrity.

The biggest thing about LVM over ZFS under Linux is that it's just included in the kernel. Maybe ZFS will become license-unencumbered someday and we'll get more traction.
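As a rough illustration of that flexibility (the volume group name, sizes, and device names are made up), the per-volume choices described above look something like this in LVM:

```
# One volume group built from disks of whatever sizes you have lying around
vgcreate data /dev/sd[b-k]

# Then pick redundancy per logical volume, all from the same pool of space:
lvcreate --type raid1 -m 3 -L 100G -n critical data   # 4-way mirror (original + 3 copies)
lvcreate --type raid6 -i 6 -L 2T -n archive data      # RAID 6 across 8 devices (6 data + 2 parity)
lvcreate --type striped -i 4 -L 500G -n scratch data  # plain stripe, no redundancy

# dm-integrity checksumming on a raid LV, for ZFS-style bit-rot detection
lvcreate --type raid1 -m 1 --raidintegrity y -L 200G -n photos data
```

LVM decides which physical volumes each LV lands on, and with --raidintegrity it can detect corrupted blocks and let the RAID layer repair them from a good copy, which is the data-integrity piece mentioned above.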