ZFS

What the fuck is this thing. I keep hearing people talk about it like it's the best filesystem in the universe. explain to a brainlet what the hype is about?

bump

Can you Google?

yes but I wanted to make a thread
Hi!

There is no hype. It was created 15 years ago. You just started paying attention to it now.

It's a good file system for servers that is hard to integrate with Linux due to licensing problems. Works fine on FreeBSD.

>like it's the best filesystem in the universe
It is

why?

Use ReiserFS instead, it's killer.

Goddamn it carlos

>Eats your ram

Btrfs is shit, do not use it if you value performance and data integrity

ZFS is a Solaris and now BSD pet project. It's a filesystem with many features that sound cool.

But the thing is, almost nobody seriously (as in, big data style, data center, ...) uses it because virtually everything it does is useless when you're using more modern distributed / managed storage.

And even at home you should think about whether it's a good solution. It scales poorly, and management is annoying (you can't add drives to an existing array with that "cool" RAIDZ3). And on top of all of this, the features need extremely significant processing power and RAM - a lot more than you'd think compared to competing things like "conventional" software RAID solutions such as mdadm on Linux. Which, well, also scale and manage better in general.

What Is ZFS?
ZFS is a copy-on-write (COW) file system that merges a file system, logical volume manager, and software RAID. In a COW file system, each change to a block of data is written to a completely new location on the disk. A write either occurs entirely or is not recorded as done at all. This helps keep your file system consistent and undamaged in the case of a power failure.
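That "merges three layers" claim is easiest to see on the command line. A minimal sketch (pool name, dataset name, and device paths are all made up for illustration):

```shell
# One tool replaces mdadm + LVM + mkfs: the pool is the RAID layer,
# datasets are the "logical volumes", and they come pre-formatted.
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd   # pool with single-parity RAID
zfs create tank/media                                # dataset, mounted at /tank/media, no mkfs
zfs set compression=lz4 tank/media                   # per-dataset property, applied live
```

Compare that with the conventional stack, where each of those steps is a different tool with its own config.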


ZFS Advantages

Simplified Administration
Proven Stability
Data Integrity
Scalability


ZFS Limitations

The 80% or More Principle
As with most file systems, ZFS suffers a terrible performance penalty when filled to 80% or more of its capacity.

Limited Redundancy Type Changes
Except for turning a single disk pool into a mirrored pool, you cannot change redundancy type. Once you decide on a redundancy type, your only way of changing it is to destroy the pool and create a new one, recovering data from backups or another location.
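The one exception mentioned above looks like this in practice. A sketch, assuming a hypothetical pool named tank on /dev/sda:

```shell
# The single allowed redundancy change: attach a second disk so the
# single-disk vdev becomes a two-way mirror. Device names are made up.
zpool attach tank /dev/sda /dev/sdb   # existing device first, then the new one
zpool status tank                     # vdev now shows as "mirror-0"; watch the resilver
# Anything else (mirror -> RAIDZ, changing RAIDZ width) means backing up,
# destroying the pool, recreating it, and restoring the data.
```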

> Simplified Administration
That is very relative. Not being able to add a drive to a RAIDZ without copying everything to a backup, destroying the pool, and recreating it creates big administration issues.

The competition can often do this. Some of the competition can even do erasure-coded arrays over differently sized drives.

And it's not like every CLI other than ZFS' is difficult or bad either, most are ~the same.

> Scalability
Definitely not. It scales like absolute ass. Yes, you could address a Zettabyte or whatever because it uses big numbers, sure.

But in reality, your RAIDZ2 array with 20 drives on amazing controllers on an amazing machine and an SSD cache to improve performance and whatnot runs at the speed of maybe 5 drives. Terrible scaling.

To begin with, it's the only filesystem (other than btrfs quasivaporware) that cares to checksum your data. Also tons of other features, but that's my number one.
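For what it's worth, the checksumming isn't just passive: every read is verified, and you can walk the whole pool on demand. A sketch, with a hypothetical pool name:

```shell
# A scrub reads and verifies every block against its checksum; with
# redundancy, damaged copies are repaired from a good one automatically.
zpool scrub tank
zpool status -v tank   # per-device CKSUM error counters, plus any files ZFS couldn't repair
```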

Branching off OP's question, is FreeNAS the best option for a home server?

I want to move all the hard drives from my desktop to a different room so it's quieter, but still have access to them across the network (with Windows' "map network drive" feature).

It's an option, but it's ultimately the underdog to Linux even on NAS and home servers.

As you described your use (running Samba), it doesn't really matter.
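For that use case, any distro's Samba setup is roughly the same few steps. A minimal sketch (Debian-style paths; the share name, directory, and username are made up):

```shell
# Install Samba and add one writable share for a local user.
apt install samba
cat >> /etc/samba/smb.conf <<'EOF'
[storage]
   path = /srv/storage
   read only = no
   valid users = youruser
EOF
smbpasswd -a youruser      # Samba keeps its own password database, separate from /etc/passwd
systemctl restart smbd
# On Windows, use "Map network drive" with \\server\storage, or from cmd:
#   net use Z: \\server\storage
```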

Any recommendation on distros?

I don't need a ton (really, any) whiz-bang features, I just want something easy to set up and allow me to upgrade to a RAID 1 or equivalent setup once I get a second drive.

>and allow me to upgrade to a RAID 1 or equivalent setup once I get a second drive.
Not sure if this is more of a motherboard/hardware question, though. Never dealt with software nor hardware RAID before.

Any, actually. Well, maybe you want the binary BSD or Linux distros, and for the latter possibly not the MOST obscure ones.

Everything you're likely to even just see has networking and samba.

> Never dealt with software nor hardware RAID before.
On Linux, I strongly recommend just the typical software mdadm RAID. It's used "everywhere", stable, and generally nice to the point that hardware RAID is actually not used that often anymore.
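And mdadm handles the "add a second drive later" plan directly. A sketch with hypothetical device names:

```shell
# Create a RAID 1 array now with only one disk, using "missing" as a
# placeholder for the drive you don't have yet.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb missing
mkfs.ext4 /dev/md0                               # md exposes a plain block device
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array across reboots

# Later, when the second drive arrives, just add it and let it sync:
mdadm --add /dev/md0 /dev/sdc
cat /proc/mdstat                                 # watch the rebuild progress
```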

On BSD, I'd guess you might do a ZFS mirrored arrangement.
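Which, for reference, is a one-liner. A sketch with hypothetical BSD device names:

```shell
# Two-way ZFS mirror: either drive can die and the pool stays up.
zpool create tank mirror /dev/da0 /dev/da1
```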

It is actually pretty fucking dank.

Hi user OwO

NAS4Free or OMV (with the ZFS module) are superior.

OMG u found me!
>logical volume manager
so I can use this in place of LVM? What would be the benefit of this over that?

In what ways is it better than just using Linux?

It's all shit, just use ext4 until a new filesystem is the default for 90%+ of distributions

FreeNAS is a joke. While it may look cool on paper, in actuality it sucks. First, the docs straight up tell you: don't try this shit unless you've got at least 16 GB of RAM. Yeah, you can get by with less, but performance suffers. Second, the web GUI is lame. Windows just works. Third, the "my data is bulletproof" meme. Just wrong, nothing is perfect. You can get the same protection, making sure your RAID data is written safely, with a UPS attached to your server. Plus it protects against spikes/surges/painful RAID rebuilds during unplanned power loss. Large businesses/data centers have been using UPSes for years. I'm no expert, but I'd think there must be some rational use for them, otherwise those large businesses would not buy them. NTFS works. Use a UPS, combine it with RAID, and your data will be fine. Keep backups of your shit. Use good quality parts, not used shit you get from eBay. Keep your server cool. Drives generate heat; the more drives you add, the more heat you generate. Heat kills drives. 60°C is the max most drives are made to operate at.

So;
1. Get a ups
2. Raid is fine
3. Keep backups
4. Keep temps at or under 60C.
5. NTFS/Windows Server - Just works.

still waiting for this. So does this mean I could use ZFS instead of LVM + whatever filesystem? What would be the benefit?

I myself am not particularly impressed with monolithic designs like ZFS or Btrfs. I get that it's nice to be able to do things like setting RAID-level per file or per directory, but it just seems horribly misguided to put RAID+LVM+FS in one monolithic codebase. I'd rather use a stack where each part does one thing and does it well. Hence why I use mdraid+lvm2+xfs.

>mdraid+lvm2+xfs.
sounds comfy ^.^

so there's really no special benefit to using ZFS/Btrfs over LVM + ext4/xfs? What is it that people love about these so much?

>no special benefit
I did mention at least one benefit: You can set RAID level per file/directory. It would be pretty nice to be able to exclude muh 6 TB of korean slideshows from being mirrored, for example.

Other than that, though, I'm not really sure what the advantages would be. COW snapshots are often mentioned, but LVM can do that too.
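For comparison, the LVM version of a snapshot looks like this. A sketch with made-up VG/LV names:

```shell
# Reserve 5G of copy-on-write space and freeze the current state of "home".
lvcreate --size 5G --snapshot --name home_snap /dev/vg0/home
mount -o ro /dev/vg0/home_snap /mnt/snap   # browse or back up the frozen state
umount /mnt/snap
lvremove /dev/vg0/home_snap                # drop it when done
```

One caveat: LVM snapshots need that pre-reserved space and slow down writes to the origin while they exist, whereas ZFS/Btrfs snapshots are essentially free to create and keep.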

RaidZ is cool

CoW Snapshots
Checksumming of everything
Multiple levels of caching that you can control
Compression on your datasets, if you want, with various levels
zfs send/receive to back up everything to another server
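The send/receive one deserves a sketch, since it's the feature the LVM stack has no clean equivalent for. Pool, dataset, snapshot, and host names below are all hypothetical:

```shell
# Full backup: snapshot a dataset and stream it to another machine.
zfs snapshot tank/media@2017-01-01
zfs send tank/media@2017-01-01 | ssh backuphost zfs receive backup/media

# Later, send only the blocks that changed between two snapshots:
zfs snapshot tank/media@2017-02-01
zfs send -i @2017-01-01 tank/media@2017-02-01 | ssh backuphost zfs receive backup/media
```

Because the incremental stream is computed from snapshot metadata, there's no rsync-style walk of the whole tree; the cost scales with the changes, not the dataset size.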