2018

>2018
>can't expand an array
>can't defrag
>suffers from severe fragmentation if you delete/move files that have been written

why is Sup Forums shilling this filesystem so much

Attached: 1513383609660.jpg (459x546, 76K)

Other urls found in this thread:

dragonflybsd.org/hammer/
patreon.com/bcachefs
en.wikipedia.org/wiki/Tux3

Legacy users who don't want to learn BTRFS management.
ZFS died when Oracle purchased Sun, before that though it was a very promising filesystem - even Apple intended to use it.

Is some corporation backing ZFS? There's your answer for why it gets shilled, or heavily propagandized, just like XFS is backed by Red Hat, which pushes its use in the wrong situations where end users won't benefit at all.

butter fs is a meme anyway, it hasn't made any improvements in the last 4 years and raid5/6 is still dead

Sup Forums doesn't shill zfs
we shill the glorious bcachefs

Attached: fakenews.gif (320x180, 500K)

>RAID5/6/Erasure coding:
>This part hasn't been implemented yet - but here's the rough design for what will hopefully be implemented in the near future:

so it's useless

>why is Sup Forums shilling this filesystem so much
because it's the only halfway vetted FS with RAID6+ redundancy, COW snapshots, and serializable snapshots/deltas.

btrfs wants to be all these things too, but the incompetent devs are taking forever. at this rate, ZFS will get GPL'd before btrfs gets its shit together.

zfs is still pretty clunky even with a lot of RAM/SSD cache thrown at it, but there is not a better monolithic FS out there for those who are paranoid about data loss/corruption.

>>can't defrag
This needs to be defragged?

HAMMER2 will save us (I hope)

literally who?

>RAID6+ redundancy, COW snapshots
You can get this on literally any filesystem with mdraid and LVM, and better yet, they're implemented as their own, simpler projects, making the whole stack less complex.
>serializable snapshots/deltas
Cute, but it's not *that* useful a feature. And if you need it, XFS has it and combines willingly with the above.

And not least, ext4/xfs/any traditional filesystem doesn't have the fragmentation-on-write problem.
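For anyone who hasn't actually done it, the whole stack is only a handful of commands, roughly like this (device/VG names made up, adjust to taste):

mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 3T -n data vg0
mkfs.xfs /dev/vg0/data
# point-in-time snapshot whenever you want to take a backup
lvcreate -s -L 50G -n data_snap /dev/vg0/data

Each layer does one job, which is exactly the point.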

DragonflyBSD filesystem
dragonflybsd.org/hammer/

I know.
my point was that it's such obscure shit that it's not going to "save us"

ext4 and xfs suffer from fragmentation, you think cramming files one after another and adding pointers is better?

>fragmentation is bad
t. hard drive babby

>ext4 and xfs suffers from fragmentation
Not very much at all in practice, due to delayed allocation. I didn't say that they don't have fragmentation, just that files don't fragment deterministically and incrementally merely because you use them for random-access writes.

>bcachefs
Attractive project, which distros ship with this?

none, and it's not complete. far from being complete

it's being developed by only one person: patreon.com/bcachefs

i'd dump $100 each month if the author promised expandability but i don't see it anywhere in the list of upcoming features

Just use NTFS and/or exFAT you fucking neckbeards, literally who cares?

>NTFS

lol

even ext1 is better than NTFS

It's apparently good enough for Facebook though
winky face

>it's not made any improvements
nice try ZFS shill, but btrfs got a big update at the end of last year and almost all issues got resolved, except RAID5/6 but that's pretty much broken by design
it's now production ready if you use a recent kernel

wrong

facebook uses it for SOME, not ALL of their servers, and they've confirmed that it has never been used for their data storage servers

blown the fuck out

>NTFS
>exFAT
What the fuck? Are you on windows or just stupid?

>NTFS
Stop being so anally pained that Unix won, Dave. Turns out “get a byte, get a byte, get a byte” is a good idea.

You can't even use any of those ass-FSes without downloading and running shady .exes, fuck off autists.

>Is some corporation backing ZFS?
Yes and no, there are a lot of SDS companies that use it, like Nexenta and Tegile, but who knows if they contribute to OpenZFS or just keep their modifications private because muh trade secrets and competition.

>why is Sup Forums shilling this filesystem so much

mandatory strong block-level checksums plus all of this shit:
>RAID6+ redundancy, COW snapshots, and serializable snapshots/deltas.
keeps the bit-rot boogeyman at bay.
All the backups in the world don't mean shit to the paranoid if they can't programmatically verify data integrity.
RAID by itself gives better uptime/availability, but it does fuck-all for discerning which block in a stripe was the one that got corrupted if a parity scrub reports a discrepancy.
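In practice that's just (pool name made up):

zpool scrub tank
zpool status -v tank

The scrub re-reads every block and checks it against its checksum, and status -v lists exactly which files, if any, are toast instead of leaving you to guess.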

they said that after 3.19 and then a few months later oops lol RAID56 arrays get horribly corrupted if you have to replace disks. then they took years to fix that.

ZFS is currently making me mad with its goddamned immutable vdevs, and I very much want something like Btrfs's balance filters. But it also doesn't have persistent reports of horrible bugs that kill the whole array. Also it can do things I'd think would be basic filesystem features, like reporting the amount of free space in a pool. Apparently the Btrfs people thought yeah, who needs that, we'd rather let people mix RAID levels between subvolumes for some goddamned reason.

Hey!
You seem pretty ZFSy
What is the way to deal with fragmentation in ZFS?
I ask because i'm considering using it for a NAS project, but I had no idea that fragmentation was an issue with it. I assumed all the *nix filesystems had this shit figured out already, like ext4 and XFS have.

not him, but you need two pools

say you have one old pool with fragmentation, and you want to get rid of it, you simply create a new pool and copy everything over at once and then you delete the old pool

it only really becomes an issue if you fill up the pool above 80% or so. That's when ZFS starts to worry more about using space efficiently than about being fast. zfs send a dataset over to another pool and recv it back to defrag it.
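In practice it's something like this, assuming you have a scratch pool with enough space (names made up):

zfs snapshot tank/data@defrag
zfs send tank/data@defrag | zfs recv scratch/data
# verify the copy, destroy the original, then send it back the same way (or just keep using the new pool)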

>What is the way to deal with fragmentation in ZFS?

Keep plenty of free space, and don't worry about it. It won't be an issue.

I don't understand. BTRFS works TODAY.

Raid 0, 1, 10 work perfectly fine. Raid 5, 6 work as long as you have some power redundancy (the write hole).

Where is the issue?

WHY AREN'T YOU USING BTRFS. WHAT THE FUCK IS WRONG WITH YOU RETARDS?

Attached: 1517050936378-pol.jpg (400x400, 13K)

>>can't expand an array
I literally just expanded my main volume by adding a striped raidz vdev. Nice try.
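for reference it's literally one command, something like (disk names made up):

zpool add tank raidz /dev/sdf /dev/sdg /dev/sdh

and the new vdev gets striped with the existing ones automatically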

RAID56 is claimed to now work, but we don't really know if it actually does. Because, remember, it's been claimed to work before and then horrible corruption bugs got found in it. If there haven't been any more fun surprises like that for a year or so, then we can say with some confidence that Btrfs RAID56 works.

Also there's only a chance of it working if you're running a bleeding-edge kernel, which has its own problems. This shit only went in to 4.15 and 4.16, you're out of luck if you're on 4.14, 4.9, 4.4, etc.

So wait, doesn't that mean you'd have to have another set of disks on standby? That seems awfully inefficient
Oh so I can just keep a smaller set of disks reserved to send over one of my zvols and get it back? not ideal, but doesn't sound quite as bad
So to use ZFS properly, you shouldn't actually fill up your storage? kinda counter-intuitive, since the whole point is to store shit.

or am I misreading a thing?

>>can't expand an array

this, along with mixing/matching drive sizes in an array, is the epitome of poorfag-bait features.
whether you can afford to buy 5 or 6 drives at once for a new vdev/array shouldn't be the determining factor in how you build a file server.

> you'd have to have another set of disks on standby? That seems awfully inefficient
yes

>raidz
since you never told us if it's raidz1, raidz2 or raidz3, i'm going to assume you're talking about raidz3

so you had to buy 8 drives for a raidz3, which you've striped with your other vdev, nice "expandability", faggot

>going to assume you're talking about raidz3
raidz (with no number) can be shorthand for raidz1. afaik it accepts that as syntax when creating/adding to pools.

so you bought 3 drives to "expand"

that's not expansion, that's a restriction, and if you lose one vdev, say goodbye to the other vdevs in the pool

>say goodbye
yeah just until you replace the fucking disk
wow it's fucking nothing

you think a vdev can't spontaneously die? you think resilvering a vdev after one or more drive failures won't stress the other drives?

What the fuck filesystem doesn't die after it loses more disks than it has redundancy?

this

>be you
>have a pool of infinite drives
>decide to expand the size of the pool
>forced to buy 3 drives for raidz1
>create a raidz1 vdev for your current pool
>one drive in that new vdev dies
>put in a new drive in the new vdev
>another drive fails

and now you've lost your entire pool, get fucked kiddo

It's like your gas tank. You ostensibly have ~15 gallons but if you run it all the way to zero your engine gets a little iffy.

please leave Sup Forums

Okay, literally what system of aggregated disks would survive that? You just keep citing worst-case disk scenarios. First you assume they're using raidz3, now it's raidz1? And what's this pool of infinite drives bullshit supposed to prove?

If you have x drives with each bit of data stored on 2 drives, and you lose (x/2)+1, you're guaranteed to lose data.

No matter your filesystem, if you have data split between drives and you lose more drives than your data has redundancy you lose data.

NTFS/Windows Server just works. Combined with RAID, a UPS, and keeping up-to-date backups, you're good in case of failure. Large corps run Windows Server and have far more critical data than the home user keeps, and those corporations just keep right on rolling, so if "they" are happy with it then the average home user should be too. Where the average user fucks up is due to "shortcuts". They just install Server and let the rest slide. Backups aren't kept or aren't kept up to date, zero RAID redundancy, no UPS, or they use used drives with a big question mark over their lifespan. Then they complain when it fails and blame NTFS/Server. I can configure a working NTFS file server a hell of a lot faster than I can a FreeNAS ZFS one, mostly due to the clunky interface FreeNAS uses. If they made that shit as streamlined as it is under a Server OS then they might get more popular.

you can't expand a ZFS pool by adding one or more drives to an existing vdev, which is what everyone wants

the only way to "expand" is to add a whole new vdev: a mirror (2 drives), raidz1 (3 or more drives), raidz2 (4 or more drives), etc., and if you lose one vdev, all of your vdevs in that pool get fucked

why the fuck is it so difficult for you to understand? do you even know what raidz is? or what a vdev is? or what a mirrored pool is?

>I can configure a working NTFS File server a hell of a lot faster than I can a FreeNas ZFS one mostly due to the clunky interface Freenas uses

stop lying, you plug in the drives, create a pool and choose raidz configuration with or without encryption and that's it

why is the ability to expand an array not good? if i have a 2-disk raid1, and i want to add 2 drives and have it be raid10, why is that bad?

But there's more than just that involved. You have users to configure, network shares to create, etc., not just the storage part.

>create a user
>assign that user to the dataset or pool

how is this complicated?

Just use RAW you idiots.

Look into HAMMER2.

Attached: 1517856983489.png (1280x824, 1.19M)

>Cute, but it's not *that* useful a feature.
You'll change your mind if you ever actually get a job in tech.

you're comparing Windows servers to FreeNAS. completely irrelevant to the thread's discussion on filesystems

you can just use mdadm for RAID5/6 and do btrfs on top of it anyways
>NTFS is Windows Server
Please tell me you're not actually this dumb. You're doing an OS comparison and not a FS comparison.

Nullfs is love.
Nullfs is life.

Attached: images(4).jpg (236x214, 15K)

I do work in tech and administer servers, but still don't use ZFS. I've considered both it and btrfs, but the programs I use do tons of random-access writes to files and perform best with low I/O latency, so the fragmentation issue is a dealbreaker for me. Just making an LVM snapshot and backing up the relevant data is perfectly fine. Sure, I might waste a bit of bandwidth getting complete copies rather than deltas off-site, but who really cares? I'm using rsync anyway.
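The whole backup dance is just this, more or less (names made up; nouuid because it's XFS and the snapshot has the same UUID as the origin):

lvcreate -s -L 20G -n appdata_snap /dev/vg0/appdata
mount -o ro,nouuid /dev/vg0/appdata_snap /mnt/snap
rsync -a /mnt/snap/ backupbox:/backups/appdata/
umount /mnt/snap && lvremove -f /dev/vg0/appdata_snap

Consistent point-in-time copy, no exotic filesystem required.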

there's not much excuse for a FS to not allow streamlined adding of a "1" level (mostly just RAID0 to RAID10), but everything else is just a mess.

AFAIK though, you can make vdevs in ZFS out of partitions and not complete drives, so you could slice every drive in half, make a RAID10-style vdev on half of every drive including the new ones, move shit over, then make another fully-wide vdev on the other halves, then add the latter vdev to the other one's pool.

Does btrfs have this problem?

>>serializable snapshots/deltas
>Cute, but it's not *that* useful a feature

It's the only reasonable way to get live/near-live backup of your array on the (((cloud))) with any degree of privacy.
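Sketch of what that looks like, with made-up names:

zfs snapshot tank/stuff@tue
zfs send -i tank/stuff@mon tank/stuff@tue | gpg -e -r backupkey | ssh offsite 'cat > stuff_mon-tue.zfs.gpg'

The -i delta only contains the blocks that changed between the two snapshots, and the remote end never sees plaintext.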

no

btrfs allows you to add anything to a pool, be it one hdd, one ssd, one flash card, one usb stick, etc
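adding a random disk to a mounted filesystem is literally just (device/mountpoint made up):

btrfs device add /dev/sdx /mnt/data
btrfs balance start /mnt/data

the balance is optional but spreads existing data across the new disk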

I have no use for live backups. The programs I work with operate primarily on in-memory data, so what gets pushed to the filesystem isn't very relevant except for complete saves, which take some time to produce.

so ZFS is extremely inflexible and has some stupid shit, but everything kinda sorta works
And btrfs is way more flexible, but RAID5/6 may or may not corrupt your shit depending on your kernel?

And btrfs is not worth talking about. You might as well store your data in /dev/null

No one cares about your specific use case. It says a lot about your technical ability if you can't see any other use cases than your own.

>implying it wasn't dead at Sun
Sun's the one that decided to deliberately license it so it couldn't be in mainline kernels, not Oracle.

whoop-dee-doo. only an idiot thinks any one FS can fit all use cases perfectly, and ZFS clearly isn't the ideal backing store for anything resembling a writable DB. why are you even in this thread?

Tox3 and HAMMER2 will deliver us from this suffering.

I'm not using some meme BSD. *BSD as it is is already dying.

>You can get this on literally any filesystem with mdraid and LVM
You literally don't know what you're talking about. ZFS was created to address shortcomings in exactly what you're offering as its alternatives. There is no alternative to ZFS if you want to keep your data bit-correct across multiple drives.
Whether that's worth the overhead and complexity of using ZFS over other options is entirely up to you and a matter of opinion. What's not opinion are the technical differences between the two.

What a joke.

>Tox3
what's this

typo.
en.wikipedia.org/wiki/Tux3

>why are you even in this thread?
Because it's about ZFS, and writable DBs are a pretty common use case?
>random-access files
>"your specific use case"
Yeah, says a lot about your technical ability.

I never denied that ZFS is probably pretty nice on document-store servers. Though I'd still be wary of any design that tries to pull so many different pieces of functionality into one piece of code.

btrfs has online defragging
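e.g. on a mounted filesystem it's just (path made up):

btrfs filesystem defragment -r -v /mnt/data

caveat: it breaks reflinks, so anything shared with snapshots gets duplicated and eats space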

It is official; Netcraft now confirms: *BSD is dying

One more crippling bombshell hit the already beleaguered *BSD community when IDC confirmed that *BSD market share has dropped yet again, now down to less than a fraction of 1 percent of all servers. Coming close on the heels of a recent Netcraft survey which plainly states that *BSD has lost more market share, this news serves to reinforce what we've known all along. *BSD is collapsing in complete disarray, as fittingly exemplified by failing dead last in the recent Sys Admin comprehensive networking test.

FreeBSD is the most endangered of them all, having lost 93% of its core developers. The sudden and unpleasant departures of long time FreeBSD developers Jordan Hubbard and Mike Smith only serve to underscore the point more clearly. There can no longer be any doubt: FreeBSD is dying.

Let's keep to the facts and look at the numbers.

OpenBSD leader Theo states that there are 7000 users of OpenBSD. How many users of NetBSD are there? Let's see. The number of OpenBSD versus NetBSD posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 NetBSD users. BSD/OS posts on Usenet are about half of the volume of NetBSD posts. Therefore there are about 700 users of BSD/OS. A recent article put FreeBSD at about 80 percent of the *BSD market. Therefore there are (7000+1400+700)*4 = 36400 FreeBSD users. This is consistent with the number of FreeBSD Usenet posts.

All major surveys show that *BSD has steadily declined in market share. *BSD is very sick and its long term survival prospects are very dim. If *BSD is to survive at all it will be among OS dilettante dabblers. *BSD continues to decay. Nothing short of a cockeyed miracle could save *BSD from its fate at this point in time. For all practical purposes, *BSD is dead.

Fact: *BSD is dying

I was talking about RAID1/5/6 redundancy and snapshots, not checksumming, and you can indeed get those on any other filesystem using mdraid and LVM.

Checksumming is nice, but I've never ever had a bitflip happen on me (at least not one that has had an impact that was at all measurable).

>*BSD is dying copypastah
Pff,

Well sounds like that's the way to go, as long as you have an up to date kernel.
Thanks!

BcacheFS is our baby, but until she's more stable, ZFS is what's been battle-tested in the enterprise for a decade or more.

BTRFS is way too broken in design.

I've tried live defragging of btrfs on my laptop, and it used a ton of resources in terms of both CPU time and I/O bandwidth. Can't recommend running that all the time.

btrfs has had its hiccoughs, but its featureset and flexibility are killer, especially for small/home users
one thing that really puts me off zfs is the fact that there's no way to resize a vdev outside of replacing all its component disks with larger ones
while with btrfs, you can add/remove single disks of any size whenever you want, even online. want to convert a single disk to raid1? just add another disk. want to add 1 disk to your 3-disk raid5? just add another disk. hdd dies in your 4-disk raid5, but you can't afford to replace it for a while? you can convert it to a 3-disk raid5 in the meantime
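for the record, all of that is just btrfs device add/remove plus balance with convert filters, something like (devices/paths made up):

# single disk -> raid1, after adding a second disk
btrfs device add /dev/sdx /mnt/data
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data
# grow a raid5 by one disk
btrfs device add /dev/sdy /mnt/data && btrfs balance start /mnt/data
# dead disk in a raid5: mount degraded, kick it out, rebalance onto what's left
mount -o degraded /dev/sdb /mnt/data
btrfs device remove missing /mnt/data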

In that case, you wouldn't lose the whole pool; you'd be able to bring it up in degraded mode, and it would report which files were damaged.

That's all cool. Anything's better than the ZFS ways of handling it.

just do periodic defragging, like a systemd timer which runs every month or something
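e.g. a pair of units along these lines (names/paths made up, adjust the schedule and the btrfs binary path for your distro):

# /etc/systemd/system/btrfs-defrag.service
[Unit]
Description=Recursive btrfs defrag

[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs filesystem defragment -r /data

# /etc/systemd/system/btrfs-defrag.timer
[Unit]
Description=Run btrfs defrag monthly

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target

then systemctl enable --now btrfs-defrag.timer and forget about it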

>like a systemd timer
lol. Sasuga btrfs users.

And suffer huge I/O latency penalties for hours while it's defragging? Thanks, I'll just keep to XFS.

xfs is a nice fs, but what kills it for me is that it can't be shrunk

>shrunk

you mean compression?

Definitely a flaw, but I also can't say I've ever needed to shrink it.

No, he means resizing the filesystem to a lesser capacity.

>You're doing an OS comparison and not a FS comparison.
did you quote the wrong person?

OK well heres my current situation, and a situation I anticipate encountering in the future.

I currently have 2x 2TB in RAID 1. It is 95% full
I want to expand my file server, and I want RAID 10
The cheapest thing would be to re-use my current 2x 2TB drives and add 2 more, to convert my current RAID 1 into a RAID 10, without having to move the data around myself

Can I do this with mdadm? if not, then it (ext4 with mdadm) wouldn't seem to have much benefit in this regard over ZFS. if it can do this, is this a risky procedure?

and then in the future, I may want to add 2 more drives to convert it to a 6-drive RAID 10
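For reference, since I'm not sure an in-place level change from 1 to 10 is actually supported by mdadm, the fallback I'd expect to work is building the new RAID10 degraded with one real disk per mirror pair and folding the old mirror's disks in afterwards, roughly like this (partitions made up, back everything up first), though it does mean copying the data once, which is what I was hoping to avoid:

mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc1 missing /dev/sdd1 missing
mkfs.ext4 /dev/md1
# copy everything from the old raid1, then retire it and hand over its disks
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sda1 /dev/sdb1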