I'm not sure if I understood your question correctly, but it might be more convenient to use ZFS's native replication (zfs send / zfs receive) over the network. It's "just snapshots", but the initial send transfers the whole dataset as well.
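For reference, a minimal sketch of what that looks like with plain zfs send / zfs receive piped over SSH. The dataset name (tank/data), the remote host (backupbox) and the target pool (backuppool) are just placeholders:

    # Initial replication: the first send carries the whole dataset
    zfs snapshot tank/data@base
    zfs send tank/data@base | ssh backupbox zfs receive backuppool/data

    # From then on, only the delta between two snapshots crosses the network
    zfs snapshot tank/data@day2
    zfs send -i tank/data@base tank/data@day2 | ssh backupbox zfs receive backuppool/data

The incremental receive only works as long as the target still has the @base snapshot and hasn't been modified in the meantime (or you pass -F to zfs receive).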
A very simple ZFS-to-(Raspberry Pi + ZFS) setup that relies on cron is shown here: https://blog.beardhatcode.be/2021/05/raspberry-pi-zfs-replication.html
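Roughly what a cron-driven version could look like (this is only a sketch, not the blog's exact script; the dataset names, SSH target and snapshot naming are placeholders):

    #!/bin/sh
    # zfs-replicate.sh -- take a dated snapshot and ship the increment to the Pi
    SRC=tank/data            # local dataset (placeholder)
    DST=backup/data          # dataset on the Pi (placeholder)
    HOST=pi@raspberrypi      # SSH target (placeholder)
    NEW="$SRC@auto-$(date +%Y%m%d)"
    # newest existing auto-* snapshot on the source, if any
    PREV=$(zfs list -H -t snapshot -o name -s creation "$SRC" | grep '@auto-' | tail -n 1)
    zfs snapshot "$NEW"
    if [ -n "$PREV" ]; then
        zfs send -i "$PREV" "$NEW" | ssh "$HOST" zfs receive "$DST"
    else
        zfs send "$NEW" | ssh "$HOST" zfs receive "$DST"
    fi

and a crontab entry to run it nightly at 03:00:

    0 3 * * * /usr/local/sbin/zfs-replicate.sh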
If you have, e.g., two TrueNAS servers running ZFS, you can skip sanoid/syncoid and just use zfs send from one server to the other directly, via its network address.
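If the goal is to move a whole pool hierarchy in one go, a recursive replication stream does that. A sketch, with "tank", "backuppool" and the 192.168.1.20 address as placeholders:

    # Full recursive copy of the pool's datasets and their snapshots
    zfs snapshot -r tank@migrate1
    zfs send -R tank@migrate1 | ssh root@192.168.1.20 zfs receive -F backuppool/tank

    # Subsequent runs only ship the changes since the last common snapshot
    zfs snapshot -r tank@migrate2
    zfs send -R -i tank@migrate1 tank@migrate2 | ssh root@192.168.1.20 zfs receive -F backuppool/tank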
Yeah, I guess it may be risky to remove drives from the pool, so maybe it would be better to just move the whole secondary pool, as the other commenter pointed out (at least for the first transfer; smaller increments should be easier to handle after that). But do you think my strategy of using snapshots as backups is good overall, or should I use something else?