>He doesn't use ZFS as his filesystem
>he doesn't just use ReiserFS
>Not using APFS
Loonix poorfag detected
Two 375GB drives and one 3TB drive. What does that make me?
>not using a bunch of scattered disks but maintaining a list of torrent files for each HDD, so you can re-download everything if one ever fails
>Take HFS
>Add checksums and dedup
>No temporal snapshots, subvolumes, permission delegation, vdevs, round robin, L2ARC, hot spares, SLOG, transparent compression & encryption
Am I supposed to be impressed?
Gay.
>not ReFS
KEK
What would I gain from using ZFS instead of ext4?
ZFS is dead; Oracle killed Solaris. Meanwhile S2D/CSVFS_ReFS is still developed commercially and isn't limited to a single server.
Try converting to BTRFS for a taste.
>just a bunch of drives
You're better off managing all of the drives yourself. They're a little better than RAID0 at surviving drive failures, but if you just used the drives individually you could make actual backups of important files and not have to worry about drive failures at all.
BTRFS IS THE SUPERIOR FILESYSTEM FOR SUPERIOR INDIVIDUALS
UNF SNAPSHOT ME BABY AND DISREGARD RAID 5 & 7
>He doesn't use a Ceph cluster
you don't either
>Muh ZFS motherfucker
So how do you add more storage to an existing array without losing anything? Oh wait, you don't; you simply buy another stack of disks and create a whole new array.
Meanwhile
btrfs device add /dev/sdc /mnt
btrfs filesystem balance /mnt
>you simply buy another stack of disks and create a whole new array.
Yes, and? If you're adding a single disk at a time to an array, it means you're poor and ZFS isn't for you. Go use your welfare RAID somewhere else.
So you can't replace a single disk on ZFS?
You better run that shit on top of dmraid then.
You can replace a failed disk with one of the same (or larger) capacity, but once you make an array in ZFS you are committed to that number of disks in it, no more or less.
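Swapping a dead disk looks like this (pool name and device paths are made up):
zpool replace tank /dev/sdb /dev/sdc   # swap the failed disk for the new one
zpool status tank   # watch the resilver progress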
BTRFS RAID is also happy with different-sized disks. If you have a 3TB, a 2TB and a 1TB disk, then BTRFS RAID1 can give you 3TB of usable space, no worries.
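Something like this, assuming the 3TB/2TB/1TB disks show up as /dev/sdb, /dev/sdc and /dev/sdd (names made up):
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt
btrfs filesystem usage /mnt   # should report roughly 3TB usable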
Wow, I don't think even the built-in Windows RAID has that limitation. ZFS is so old.
First, Windows RAID is older than ZFS, isn't even comparable to ZFS, and is trash as a software RAID tool anyway. Use Storage Spaces if you want to pool disks in Windows.
Second, a zpool can have vdevs of varying sizes; it doesn't care. You can expand your pool with a 5-disk vdev and then a 10-disk vdev later on. It's not ideal from a performance perspective, but you can do it.
What you can't do is add more disks to an existing vdev: a vdev of 4 disks will always have 4 disks. You also cannot remove a vdev once it has been added to a pool.
ZFS is enterprise software designed for enterprise use cases. It assumes there will be planning and design before the storage pool is even created, so administrators don't paint themselves into a corner when expanding the filesystem later, and businesses buy disks in bulk and expand in bulk, not one at a time like consumers usually do. If you really care about your data it's the gold standard, but you gotta plan it out.
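To put that concretely, a planned-out pool and a bulk expansion look something like this (pool name, layout and device paths are all made up):
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf   # initial 6-disk raidz2 vdev
zpool add tank raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl   # later: bolt a whole second raidz2 vdev onto the pool
zpool list -v tank   # both vdevs now back the same pool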
>Meanwhile
>btrfs device add /dev/sdc /mnt
>btrfs filesystem balance /mnt
btrfs still eats data an order of magnitude more often than even HFS+, and that's with multiple copies of data, data-integrity checksums, etc.; it still has a tendency to just shit the bed. Its developers still recommend users run the latest kernels so as to receive bugfixes for said instabilities. SUSE is the only distribution that uses btrfs as a default, and even then it only utilizes a small subset of its capabilities (snapshotting, subvolumes) so as to avoid any possible instability.
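That subset looks roughly like this (paths made up):
btrfs subvolume create /mnt/@data
btrfs subvolume snapshot -r /mnt/@data /mnt/@data-snap   # read-only snapshot you can roll back to or send elsewhere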
It really is a damn shame that it's so bad, too, because being able to change RAID levels on the fly is amazing.
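For reference, converting in place is a one-liner, assuming the filesystem is mounted at /mnt and you want RAID1:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt   # rewrites data and metadata into the new profile
btrfs balance status /mnt   # check progress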
>no seeders