File Systems

>massive amount of overhead
>Need a huge amount of RAM
>a single bit-flip can destroy your entire RAID array
>ECC RAM or you get fucked

Why is ZFS considered the greatest ever?

I don't know. I've never had problems with ext*, though I've only ever used ext3 and ext4, mostly ext4. Then again, I'm a pleb with minimal data, so rsync suffices for redundant backups.

Why would anyone use anything other than ECC RAM for a network storage device?
Am I supposed to enjoy having to restart my server every month because the sector of memory that hosts critical OS functions got its bits flipped by background radiation?

For pooling drives, it's pretty great. The bad RAM/ECC thing is overblown, and incredibly unlikely to be an issue (much less a likely issue than two slowly failing drives in a hardware RAID array).

For your normal everyday use file system, there's not a point to it, but for large pools or network storage, it's good. Honestly, what else do you think would be better?

Because it does what Btrfs wants to do, but without the gigantic fucking bugs. Yes, it's extremely resource-heavy, but it does what it's meant to do very well, which is why it's used so much in enterprise storage.

This isn't a thing that actually happens, and you should have backups anyway. ECC is just there because if you're running enterprise-tier hardware there's no reason not to apply the better-safe-than-sorry principle. Enterprise is a weird mix of fish oil and actual legitimate improvements.

ext4 is fantastic
ext3 is kinda shit
ext2 is great if you want fast writes and don't care about anything else
zfs is gay

>2017
>claiming btrfs is unstable

You're a few years behind, m8. btrfs has been stable for some time now.

Wrong.

btrfs.wiki.kernel.org/index.php/Status

So it works, except for some shit like raid56 that you should not use anyway.

There appear to be quite a few orange boxes with "mostly OK" there.

nothanksjeff.gif

I'm fine with using raid6 for storage with xfs and raid1 for my OS and /home with ext4 for now. I'm not going to switch to some experimental "mostly" ok filesystem.

As for ZFS, I have never tried it, but I do have the impression that it's good for enterprise scenarios where you want an array with 1000 HDDs and things like that. For casual users like me, I don't think there is a huge advantage over putting a dozen disks in raid6 and using xfs.

ext3 truncated some of my shit back during writeback-gate

That's 5/6, retard.

It doesn't need a huge amount of RAM, but it will use all the RAM that you let it have for caching purposes. ZFS is fast because, on top of RAID, it will use RAM and SSDs as read and write caches. That's optional, though.
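A rough way to picture the RAM/SSD caching idea, if anyone cares: a hot tier in memory that spills evicted blocks into a bigger, slower SSD tier. This is a toy LRU sketch with made-up names, not ZFS's actual ARC logic (the real ARC balances recency and frequency and adapts its split).

[code]
from collections import OrderedDict

# Toy two-tier read cache: a small "RAM" tier (ARC-ish) that spills evicted
# blocks to a larger "SSD" tier (L2ARC-ish). Plain LRU, purely illustrative.
class TieredCache:
    def __init__(self, ram_slots, ssd_slots, read_from_disk):
        self.ram = OrderedDict()      # hot tier, least-recently-used evicted first
        self.ssd = OrderedDict()      # warm tier, catches RAM evictions
        self.ram_slots, self.ssd_slots = ram_slots, ssd_slots
        self.read_from_disk = read_from_disk

    def read(self, block_id):
        if block_id in self.ram:                  # RAM hit
            self.ram.move_to_end(block_id)
            return self.ram[block_id]
        if block_id in self.ssd:                  # SSD hit: promote back to RAM
            data = self.ssd.pop(block_id)
        else:                                     # miss: hit the actual pool
            data = self.read_from_disk(block_id)
        self._put_ram(block_id, data)
        return data

    def _put_ram(self, block_id, data):
        self.ram[block_id] = data
        self.ram.move_to_end(block_id)
        if len(self.ram) > self.ram_slots:        # spill coldest block to SSD
            old_id, old_data = self.ram.popitem(last=False)
            self.ssd[old_id] = old_data
            if len(self.ssd) > self.ssd_slots:
                self.ssd.popitem(last=False)

cache = TieredCache(ram_slots=4, ssd_slots=16,
                    read_from_disk=lambda b: f"data-{b}")
cache.read(1)   # disk read, then cached in RAM
cache.read(1)   # served straight from RAM
[/code]

The point being: give it less RAM and it just caches less; nothing breaks.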

If you bothered to open the page, you'd note the feature is written as "raid56".

>fish oil
What did he mean by this?

If you were familiar with Btrfs you'd know that it refers to Btrfs' own implementations of RAID5 and RAID6.

btrfs.wiki.kernel.org/index.php/RAID56
marc.merlins.org/perso/btrfs/post_2014-03-23_Btrfs-Raid5-Status.html

He meant snake oil.

Not him, but most of the scientific studies on fish oil and health benefits conclude there is no significant difference.
So it's a placebo, i.e. snake oil.

>>a single bit-flip can destroy your entire RAID array
>>ECC RAM or you get fucked

kek. so much for a TOP NOTCH ENTERPRISE CLASS FILE SYSTEM hurr

Why do BSDfags jerk off to this?

I've been running an 8 TB array with 1 drive of parity on ZFS for 2 years with normal non-ECC RAM. Zero issues or problems here.

>waah you need ECC RAM
Well if we're talking enterprise grade uses you already have it.

If you're using RAM as a read/write cache, then you need ECC, because RAM is prone to errors. ZFS isn't meant for home use, so it's not a problem. The alternative is to not have an ARC or L2ARC, which means you're missing out on a ton of performance.

What other filesystem offers an ARC and L2ARC?

You think enterprise runs your shitty corsair UDIMMs? Lmao

>Enterprise
>not using ECC

lolwut

Though as pointed out elsewhere in the thread that whole point is bullshit to begin with.

also completely ignoring permanently degraded performance if usage goes above ~70%

no shit

anyway, who else is monitoring progress with bcachefs?
it seems to be moving along pretty quickly, and it already has a couple of things btrfs hasn't gotten around to, namely tiering and encryption.

>>ECC RAM or you get fucked
This is true with every filesystem, though, and FreeBSD takes steps to mitigate the issue even without ECC RAM.

CEPH
ceph.com/

Get yourself a rack of servers, add drives, OS, use any FS you want (ext4 is good), and watch as all that junk is neatly abstracted away.
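The "abstracted away" bit boils down to deterministic placement: any client can hash an object name and compute which nodes hold it, with no central lookup table. Here's a crude consistent-hashing sketch of that idea; it is not Ceph's actual CRUSH algorithm, and the node names are invented.

[code]
import hashlib
from bisect import bisect

# Minimal consistent-hash ring: objects map to storage nodes deterministically,
# so placement can be computed anywhere without asking a metadata server.
# Toy stand-in for the idea behind CRUSH, not Ceph's real algorithm.
def _h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=100):
        # each node gets many virtual points on the ring for an even spread
        self.points = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.keys = [p for p, _ in self.points]

    def locate(self, obj_name, replicas=2):
        """Return the nodes that hold `replicas` copies of the object."""
        idx = bisect(self.keys, _h(obj_name))
        out = []
        while len(out) < replicas:
            node = self.points[idx % len(self.points)][1]
            if node not in out:
                out.append(node)
            idx += 1
        return out

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
print(ring.locate("movies/big_buck_bunny.mkv"))   # same answer on every client
[/code]

Adding a node only moves the objects whose ring segment changed, which is why you can keep bolting drives onto the rack.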

bcachefs will probably be great. But Red Hat and Oracle won't support it, so it's dead in the water despite technically being better.

Btrfs works for me.

Snapshots and CoW are pretty great.

I don't trust the RAID support yet though, so I'm still using mdadm to create software RAIDs.

I'm on btrfs on both my laptop and desktop, but for a data/file server I'd use ZFS, of course with an offsite backup.

>no significant difference
or
>no significant difference *as long as you have a healthy diet*

people often overlook that

ext3 doesn't have checksums, ext4 has metadata checksums on recent kernels, and ZFS revolves around checksums completely.

mdadm is prone to silent data corruption because it doesn't have checksums; you should use ZFS or Btrfs native RAID and run monthly scrubs to catch and correct errors.
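For anyone wondering what a scrub actually does: walk every block, recompute its checksum, and if one copy doesn't match, rewrite it from a copy that does. Minimal sketch over a two-way mirror with an invented layout, nothing like the real on-disk format.

[code]
import hashlib

# Toy scrub over a 2-way mirror: expected checksums are stored separately from
# the data, so a rotted copy can be detected AND you know which copy to trust.
def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def scrub(mirror_a: dict, mirror_b: dict, checksums: dict) -> list:
    repaired = []
    for block_id, expected in checksums.items():
        a_ok = sha256(mirror_a[block_id]) == expected
        b_ok = sha256(mirror_b[block_id]) == expected
        if a_ok and not b_ok:
            mirror_b[block_id] = mirror_a[block_id]   # heal B from A
            repaired.append((block_id, "healed b"))
        elif b_ok and not a_ok:
            mirror_a[block_id] = mirror_b[block_id]   # heal A from B
            repaired.append((block_id, "healed a"))
        elif not a_ok and not b_ok:
            repaired.append((block_id, "unrecoverable"))
    return repaired

good = b"important data"
a = {0: good}
b = {0: b"important dat\x00"}           # one copy silently rotted
print(scrub(a, b, {0: sha256(good)}))   # [(0, 'healed b')]
[/code]

A plain mdadm mirror can see that the two copies differ, but without checksums it can't tell which one is right.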

>massive amount of overhead
>Need a huge amount of RAM
>a single bit-flip can destroy your entire RAID array
>ECC RAM or you get fucked

All of those are false except the overhead one. But when your Btrfs array detects data corruption on a disk and "fixes" it with the data from another disk, and then you realize it was the data on the "good" disk that was wrong and you just overwrote the correct copy, you will wish you were running ZFS, because it keeps multiple copies of the checksums across the system.
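The reason ZFS can tell which copy is the bad one is that a block's checksum isn't stored next to the block itself; it's stored in the parent's block pointer, all the way up the tree, so a corrupted block can never vouch for itself. Rough Merkle-style sketch of that idea, not the actual ZFS block-pointer/überblock layout.

[code]
import hashlib

# Merkle-style verification: every parent records its children's checksums at
# write time, so a silently corrupted child can't validate itself.
def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Node:
    def __init__(self, data: bytes, children=()):
        self.data = data
        self.children = list(children)
        self.child_sums = [sha256(c.data) for c in self.children]

def verify(node, path="root", bad=None):
    """Report every child whose data no longer matches the checksum
    its parent recorded."""
    bad = [] if bad is None else bad
    for i, child in enumerate(node.children):
        if sha256(child.data) != node.child_sums[i]:
            bad.append(f"{path}/{i}")
        verify(child, f"{path}/{i}", bad)
    return bad

leaf = Node(b"file contents")
root = Node(b"directory metadata", [leaf])
leaf.data = b"bit-rotted contents"   # corruption after the fact
print(verify(root))                  # ['root/0'] -- the parent catches it
[/code]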

RAID0+1. Come at me.

>massive amount of overhead
false
>Need a huge amount of RAM
sorta yes, it will use as much RAM as you give it, but starving it of RAM won't hurt either
>a single bit-flip can destroy your entire RAID array
false, that's the entire point of ZFS: checksums and data integrity checks to handle bit rot
>ECC RAM or you get fucked
False, using non-ECC RAM is the same as with any other file system. The chance of a flipped bit getting written to disk is so small that all of Sup Forums has a better chance of getting laid.


tldr zfs is awesome.

They're all bloat.
Just use a cached WORM file server.

What's best for raid0 though?

Don't worry, I have a backup solution and have no problems with a little downtime.

>"Need a huge amount of RAM"
>enabling deduplication

>a single bit-flip can destroy your entire RAID array
Isn't the whole point of ZFS that it keeps checksums to prevent exactly this issue, which traditional RAID cannot detect?

This video came out 30 minutes ago.

youtube.com/watch?v=SJB1cJfcjYI

Yes. OP is an idiot. Technically, there's an infinitesimally small chance that the same bit of RAM could flip at the precise moment a file is being copied and again when its duplicate is being copied, but the chances of that are effectively zero. Maybe once in a billion years of constant copying.
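Back-of-envelope, because the numbers are fun. The error rate below is an assumed figure (published studies disagree by orders of magnitude), so this only shows the shape of the argument, not a real risk estimate.

[code]
# Assumed FIT rate: 25 failures per 10^9 device-hours per Mbit of DRAM.
# Real-world numbers vary wildly between studies and hardware.
fit_per_mbit = 25
ram_gb = 16
hours_per_year = 24 * 365

mbits = ram_gb * 8 * 1024                       # 16 GB expressed in Mbit
flips_per_year = fit_per_mbit * mbits / 1e9 * hours_per_year
print(f"~{flips_per_year:.0f} expected bit flips per year across all of RAM")

# A flip only threatens a write if it lands in data that is actually
# in flight to disk, which is a tiny slice of total RAM at any moment.
dirty_mb = 64                                   # assumed in-flight write buffer
fraction = dirty_mb / (ram_gb * 1024)
print(f"fraction of RAM holding in-flight writes: {fraction:.3%}")
[/code]

So yes, flips happen, but the chance of one landing in exactly the data being written, on both the original copy and the duplicate, is the "once in a billion years" territory.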

Most companies can't afford to take that risk.

>massive amount of overhead
>Need a huge amount of RAM
>a single bit-flip can destroy your entire RAID array
>ECC RAM or you get fucked

Well, enterprise customers have money to burn, so they see no problem.

>Not using ReiserFS for killer seek times

inb4 hahah le ebin reddit pun Sup Forumsro +1 for the dr who reference xDDDDDd upboatsss

fish oil is hella good for you though.

Level1techs interviewing Allan Jude and talking about ZFS.

youtu.be/SJB1cJfcjYI

>Jude

The average Sup Forums user isn't making decisions for a company, but is, at most, making decisions for his home server.

But anyway, the point is that ZFS isn't suddenly going to self-destruct if you don't use ECC RAM.

HAMMER for the win

Most companies will use ECC ram, this thread is about home usage.

>massive amount of overhead
Redundant to point #2
>Need a huge amount of RAM
Only if you want to do deduplication. Basically no other file system even lets you do this; ZFS allows it, but it needs lots of RAM (rough numbers in the sketch below).
>a single bit-flip can destroy your entire RAID array
Same with any filesystem.
>ECC RAM or you get fucked
Same hazard with any filesystem. Basically, if your data gets changed in memory, it gets written to your hard drives corrupted.
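On the dedup point above: the RAM goes into the dedup table, which needs an entry per unique block. The ~320 bytes per entry and 128 KiB recordsize below are the commonly quoted rules of thumb, not exact figures.

[code]
# Rough dedup-table (DDT) RAM estimate. 320 bytes/entry and 128 KiB records
# are the usual ballpark figures people quote, not exact numbers.
pool_tb = 10                   # unique (post-dedup) data in the pool
recordsize_kib = 128           # default-ish ZFS recordsize
bytes_per_entry = 320          # ballpark per unique block

unique_blocks = pool_tb * 1024**4 // (recordsize_kib * 1024)
ddt_gib = unique_blocks * bytes_per_entry / 1024**3
print(f"{unique_blocks:,} unique blocks -> ~{ddt_gib:.0f} GiB of dedup table")
# ~84 million blocks -> ~25 GiB, before the cache gets any RAM at all.
# Smaller recordsizes blow this up fast.
[/code]

Which is why the usual advice is to just leave dedup off unless you know your data dedups well.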

>no other file system even lets you do this

Well, WAFL does, and apparently doesn't need that much RAM on the filer node to do it; but you'll be raped by NetApp for the license allowing you to turn it on.

After all these years it's still an unstable piece of shit that will destroy your data.
>gets stuck on devices with bad sectors
>get stuck in irreversible read-only mode if only one device is present
How the fuck is this "Mostly OK"!?

I've been running a FreeNAS box with non-ECC RAM for about a year. It stores over 10 TB of movies. I've restarted it twice due to upgrades.

Scrubs always show no errors.

jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

[spoiler]the answer is no[/spoiler]