ZFS is THE future...?

So, let's sum it all up from the previous thread:

1. ZFS is the highest quality file system if you wish to fully retain your data with no errors whatsoever.

2. However, ZFS requires an immense monetary investment in hardware to be utilized 100% properly and fully, with no compromises.

3. In the upcoming years, 2022~2028, top tier full-time ZFS home setup looks like this:
Zen 2/Zen 2+ 16 core (or 32 core?) CPU.
At least 128GB of ECC RAM (DDR4).
All vdevs in the system are only SSD.
Vdevs are built in RAID-Z2 / RAID-Z 20 modes.
At least 8 SSDs, 8+ (16?) TB drives.
At least one SSD for SLOG.
Bonus (not necessary, but still): one top tier UPS able to hold fully loaded system for 10+ minutes.

Did I get it all right?
This is pretty much THE perfect case scenario for top tier home usage of ZFS, no?
Yes, extremely expensive to build, but I can afford to spend that much on a system if it guarantees 100% integrity of my data and near-absolute peace of mind while NOT compromising on performance and features in any way (if I just wanted data integrity alone, I'd go tape, naturally, but that shit's slow and inconvenient as hell).
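For reference, building a pool along those lines boils down to something like this; a rough sketch, with the pool name and device paths made up (two RAID-Z2 vdevs striped together, i.e. the "RAID-Z 20" layout, plus a dedicated SLOG device):

# two 4-disk RAID-Z2 vdevs striped into one pool, with a separate log (SLOG) device
zpool create tank \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh \
  log /dev/nvme0n1
# sanity-check the resulting layout
zpool status tank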

Other urls found in this thread:

techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead
pcper.com/reviews/Storage/Samsung-860-EVO-and-PRO-SATA-SSD-Review-512GB-1TB-and-4TB-Tested/
btrfs.wiki.kernel.org/index.php/RAID56
twitter.com/AnonBabble

>100% full integrity
Impossibru

With ZFS done right, it is.

Bad thread. I would have preferred a Spoony thread.

hp elite 8200 cmt
16gb ram (non-ecc)
m1015 flashed to IT mode
6 x 3tb drives in raidz2 (3 on bottom, 3 on top with cheap plastic 5.25 to 3.5 adapters)
8gb flash drive with freenas installed on it

mucho gusto.

>home usage of ZFS
What the fuck are you doing at home that requires top performance for huge amounts of data?
I don't think you need anything more than a few HDDs in RAID5/6 for secure storage and viewing of anime.

The bottleneck is 10 GBit/s Ethernet. Have fun finding an affordable switch.

...

I just turned 28. That's the same age Spoony was when he started to make videos.

What can I do to make sure I don't end up like him?

I preserve game libraries, in very large quantities: installation files of PC games (19,000+ titles since the very beginning of PC gaming), console and handheld/pocket images, entire arcade dumps and ISOs, etc. For example, the entire game library ever released for the PSOne, including all versions of all games in all regions, weighs in at roughly 5~6TB alone. That's the latest dump of the PSOne's entire library. PS2 is somewhere around 18TB as of right now. PS3 has roughly 3500~3600 titles overall, the majority of which weigh 15~35GB per game. Then there are the other platforms.

>Ethernet
I wasn't talking about online/cloud. Just purely a self-sustained offline build for storage purposes.

Dont be a lazy retard who knocks up a random chick who then gets an abortion and smears it all over your face.

1. Yes, almost
2. Not true
3. That's overkill if you plan on using it as an archive platform. Also, one SSD for SLOG in an all-flash array may well become a bottleneck when you write your data.

Not to be mean or anything but the fact that you didn't mention your network at all makes me think you probably don't know what you're doing here.
Even if you did have gigabit all throughout your house, all that hardware and electricity would still be a waste in the context of one family using it in a house.
What exactly is your application for all this? If you're doing this for educational reasons then absolutely go for it but you're not going to have any advantage over making multiple CD copies of your backed up data. Generally if your data isn't going to change then there isn't much point in storing it on hard drives.

>one ssd for SLOG in an all-flash array might as well become a bottleneck
Not if it's NVMe or 3DXP.

See . I'm not going to build an online ZFS storage station. It's going to be purely offline.

>you're not going to have any advantage over making multiple CD copies
>what is plastic degradation due to temps, humidity, and disc rot in general

>I wasn't talking about online/cloud. Just purely a self-sustained offline build for storage purposes.

No, me neither, because the question is: how do you access your home server? Probably over your home network. And any 1 GBit/s network is already saturated by HDDs in RAID, let alone SSDs in RAID like you proposed.

Probably even 10 GBit/s Ethernet would already be a bottleneck, as even a smaller RAID 6 with 3 data SSDs and 2 parity SSDs would saturate a 10 GBit/s link: 3 × 500 MB/s = 1.5 GB/s > 1.25 GB/s.

So you are actually waiting for affordable 100 GBit/s networking. Ethernet is damn slow.

>ZFS is the most high quality file system if you wish to fully retain your data with no errors whatsoever at all.
Meme. You won't face any "flipped bit" scenario ever, even without a checksumming fs (like ZFS or Btrfs), unless you're using some old rusty drive from 25+ years ago.
>ZFS requires immense monetary investing in hardware
I don't know where you got that bullshit, but it's pure bullshit.
>ZFS home setup
Such setups already exist and are probably overkill for 99% of datahoarders out there; snapraid fits them better.

>I preserve game libraries. In very large quantities
for such a case (offline backup), just use DVDs. Optionally, add some parity (par2 and/or dvdisaster) and you're golden. No need to create a ZFS array out of DVDs, like some autist once attempted to do.
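If you go the par2 route, it's a one-liner per image; a minimal sketch, assuming the stock par2cmdline tool and a hypothetical game.iso:

# create ~10% worth of recovery blocks alongside the image before burning
par2 create -r10 game.iso.par2 game.iso
# later: verify, and repair if the disc has picked up errors
par2 verify game.iso.par2
par2 repair game.iso.par2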

The only extraordinary thing I have seen in my experience is that you can do snapshots pretty easily with ZFS. You can even expose these snapshots over SMB in your corporate network to Windows clients with the "Previous Versions" feature in Windows Explorer.

See pic related.

Pretty nifty when a colleague showed it to me the first time, but most likely not what OP wants.
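On the ZFS side it's nothing exotic; a rough sketch with a made-up dataset name (the Windows-side "Previous Versions" mapping is then handled by the SMB server, e.g. Samba's shadow_copy2 module):

# take a point-in-time snapshot of the shared dataset
zfs snapshot tank/share@2018-02-01
# expose the .zfs/snapshot directory so clients can browse old versions
zfs set snapdir=visible tank/share
# list existing snapshots
zfs list -t snapshot -r tank/share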

Hey fucktard AMD shill, your worthless Ryzen CPUs can't correct more than single-bit errors with ECC.

Nobody with a brain will use Ryzen in servers

That's the reason you make multiple copies and store them properly. If you're trying to argue practicality here, one storage server and absolutely nothing else is not the actual answer. In enterprise situations they do redundant offsite backups in the form of tapes, because there's a chance the storage server itself will be destroyed or compromised in some way. Keep in mind that these people actually know what they're doing, yet they still don't trust a solution like OP's on its own.

A couple minor suggestions for modification:
- SLOG/ZIL drive(s) should be 3D XPoint/Optane, which has absurd throughput even at QD1, and where the high cost per GB matters very little due to 100GB being more than enough.
- HDDs are good enough for "offline storage", especially if you have a decent SSD L2ARC in front of them (rough commands below). Flash arrays will draw a ton of power (hampering UPS plans) and will absolutely be bottlenecked by anything less than a 40 or 100 Gb network.
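Both are a single command to bolt onto an existing pool; a sketch with made-up pool/device names:

# add an Optane/NVMe device as the SLOG (separate intent log)
zpool add tank log /dev/nvme0n1
# add an SSD as L2ARC read cache in front of the HDD vdevs
zpool add tank cache /dev/sdx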

you can get a 4x 10GBASE-T / 12x SFP+ switch from Ubiquiti for $500, or a variety of decent decommed enterprise stuff off eBay for not too much.

>How do you access
Directly, on the system itself.

>I don't know where you got--
Direct quote from the Wikipedia ZFS article:
>"you need to have 5GB of RAM per each TB of storage space to achieve full potential and not be bottlenecked"
>"you need to have an SSD for SLOG/L2ARC to not bottleneck system during resilvering"

Xeon "Advanced ECC" is ChipKill, not DECTED, you retard.
Literally no mainstream enterprise parts support more than SECDED ECC DDR, although tons of people (both Intel and AMD) have DECTED for L3 and maybe L2.

>just use DVDs
D I S C R O T
I
S
C
R
O
T

>Need to store roughly 400+ TB of game libraries
>Burn them to CDs, Sup Forumsoy!

>SLOG/ZIL drive(s) should be 3DXPoint/Optane
That's exactly what I'm planning to do, or NVMe (probably a 2TB Samsung 960 PRO) at the very least.

>HDDs are good enough for "offline storage"
Too hot, too loud, too power hungry, too unreliable these days. Also - fuck moving parts in this modern day and age.

>Flash arrays will draw a ton of power
Just stop. One modern enterprise 8TB HDD draws more power than four modern 8TB enterprise SSDs.

you're not getting it.
a big drive is a waste for SLOG. get a 128-256 GB one at first, and if you somehow burn it out in under 5 years, get another small one even cheaper, rather than paying $2k for something worthless.

furthermore, Optane drives can saturate their write bandwidth capacity at QD1 with relatively small block sizes (completely crushing all current competition), which is what, and only what, a SLOG needs to be good at.

The Optane 900p 280GB is by far the best current choice for SLOG work.

>One modern enterprise 8TB HDD draws more power than four modern 8TB enterprise SSDs.

What sort of shit drive are you talking about? A 7.2k SAS HDD idles at about 0.5 W/TB, and a decent read-optimized enterprise SSD like the Intel DC P4500 idles at 5W for 4TB. SSDs absolutely crush HDDs in power efficiency per active IOPS, but user is talking about some home rig, not serving thumbnails for Facebook.

> Too hot, too loud, too power hungry, too unreliable anymore. Also - fuck moving parts in this modern day and age.

HDDs are definitely noisy, but nobody uses flash for bulk long-term storage, since its retention reliability is largely unproven, especially (((TLC))).

No one on Sup Forums has a use for such technology. Just use a plain UFS + 2gb backup for neetbux license

All you fags saying there's no home use for ZFS need to grab your balls and pray to become real men. It's not scary, and doesn't need to be as autistic as OP is proposing. It's not as crazy RAM-intensive as folks are saying: 1GB of ECC RAM per 1TB is the golden rule and works great. A nice Z2 is super robust and comfy safe. Plus it's just a better filesystem with great features through and through (see: snapshotting VMs? omfg amazing). I've been running ZFS for 6 months on my NAS server (see attached). Never going back desu

>a big drive is a waste for SLOG
I just want to stay on the safe side and I'll be getting a big one simply for future-proofing, just in case. I know that 500GB will be more than enough, but I just want to eliminate as many potential bottlenecks as possible, when it comes down to SLOGing.

>a decent read-optimized enterprise SSD
>Intel
>PCI-e
>decent
0/
At least you tried.

>nobody uses flash for bulk long-term storage
It's about time they start doing this, since this happened recently - techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead .

>its retention reliability is largely unproven
Modern SSDs, even TLC garbage, easily hold data for at least 160 days in a completely unplugged device. For comparison, around 2010 a highly expensive enterprise-class SLC SSD could hold data in a completely unpowered device for only up to 60 days.

that's an endurance test. even a burned-out SSD will still generally work as a read-only device and (ideally) not lose what it already has very quickly.

the fact that NAND data has a shelf life at all is what scares people off. data on a 30+ y.o. HDD is generally recoverable from a physically undamaged drive, even if a controller board or motor/actuator swap out is needed in rare cases.

> talking about "enterprise" SATA SSDs the whole time

yeah, I finally figured out what kind of clown I was dealing with here too.

Wowowow OP

You want to build a server and attach it to only one (1) client?

As the client probably doesn't have anything faster than a PCIe SSD to store the grabbed data, why are you even using an 8x RAID of SSDs on the server side?! The client will be the bottleneck.

This is "pipedream, the thread" tier

>the fact that NAND data has a shelf life at all is what scares people off
You DO know that HDDs degrade over time too? Sure, their retention is much longer than that of current SSDs, but they DO degrade over time nonetheless.

pcper.com/reviews/Storage/Samsung-860-EVO-and-PRO-SATA-SSD-Review-512GB-1TB-and-4TB-Tested/

Educate yourself, lamer.

If you really insist, consider the 375GB DC P4800X Optane. 20.5 PB endurance = ~55k complete drive rewrites = ~5 years of 24/7 1Gb/s downloading.
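Those numbers roughly check out; quick back-of-the-envelope arithmetic (taking 1 Gb/s as 125 MB/s):

# complete drive rewrites: 20.5 PB endurance / 375 GB capacity
echo $(( 20500000 / 375 ))               # ~54666 rewrites
# MB written by 5 years of 24/7 downloading at 125 MB/s
echo $(( 125 * 3600 * 24 * 365 * 5 ))    # ~19.7 PB, just under the rated endurance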

I'm not building a server. Just RAID-Z 2/20'd long-term storage. ZFS's RAID-Z2/Z20 (the RAID 6/60 equivalent) is there purely for integrity and retention/parity. It will essentially be no more than two or three vdevs.

> thinks this is "enterprise" quality

you pay for IOPS with power, you imbecile.
workstations don't have enough threaded/async IO to realistically get beyond QD4, but this garbage's peak of 100k IOPS at QD16+ is still a joke.

Side question: What is your backup strategy in case ransomware encrypts the files on your storage?

As we all know, RAID is for availability, not for backup. So OP, what is your plan in this regard? Just whip it?

>not foss
Without me.
Still on btrfs but looks like it's going to be aborted

If you are using btrfs with RAID, better hope your data isn't getting aborted

btrfs.wiki.kernel.org/index.php/RAID56

>The parity RAID code has multiple serious data-loss bugs in it. It should not be used for anything other than testing purposes.

topkek

If you attach a NVMe to SATA bus then it's not gonna give you much gain over a standard SSD.
If you access it through RDMA or Infiniband then it will definitely be better.

Bit flips happen in RAM and ZFS can't do shit about it, ECC RAM is needed to prevent that

Best thing is to tier up your storage: HDD for cold data, SSD for cache and SLOG

>in case ransomware
I use CS 2018, don't worry.

No raid, i use it on a laptop. No data loss would fuck me

lmfao no thanks.

I'll stick to btrfs with snapshots and rsync.

>If you attach a NVMe to SATA
You clearly don't know what NVMe is.
You don't attach NVMe to SATA; those are two different interfaces. You attach NVMe over PCI-e, M.2, or U.2, not SATA. A device attached to M.2 or U.2 can run in SATA mode, stealing SATA bandwidth from the SATA ports, but you don't run NVMe over SATA. Basically, M.2 and U.2 are versatile ports which can carry either NVMe (PCI-e) or SATA. NVMe is a protocol, not a port.

OP confirmed for not having a backup strategy

Backuplet confirmed

>Bit flips happen in RAM and ZFS can't do shit about it, ECC RAM is needed to prevent that
ZoL has software ECC; you can enable it with a module flag. I forgot its name but it's there and useful.

>2022~2028
>DDR4

>Bit flips happen in RAM and ZFS can't do shit about it
It can, but it heavily depends on the number of bits flipped, as errors grow in geometric progression. Just 1 flipped bit has a chance of producing errors anywhere from 0% to 3.6%, 2 flipped bits double that 3.6%, and 3 flipped bits double the error chance again compared to 2 bits. ZFS can theoretically heal/repair this, but yes - including ECC RAM basically negates the very possibility of this happening (at least at the "1 flipped bit" level), essentially adding another error-protection layer. And you need roughly 5GB of RAM per every 1TB of storage to not be bottlenecked on performance, so a quality ZFS build requires A LOT of ECC RAM. 1TB = 5GB RAM, so 4TB = 20GB RAM, 8TB = 40GB RAM, 16TB = 80GB RAM. Sure, you can try to cheap out on the amount of memory, but just remember that a greedy cheap-skating scrooge always pays twice in the end.

ZFS has the best snapshot system out there, you tripfagging retard.
It also has the best syncing and other features. Btrfs is literally not better than ZFS at anything.

Using ZFS at all is a backup strategy in itself, you retard. Completely offline, so how in the flying fuck would ransomware get onto that system, huh? Buy yourself a brain.

Found it. Use it, you don't really need ECC if you use this flag, but the performance probably takes a hit.

ZFS_DEBUG_MODIFY (zfs_flags=0x10).
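On ZFS on Linux it's a module parameter; a sketch of turning it on (paths assume ZoL, run as root):

# enable ZFS_DEBUG_MODIFY at runtime: checksums in-flight buffers to catch in-RAM corruption
echo 0x10 > /sys/module/zfs/parameters/zfs_flags
# or make it persistent across reboots
echo "options zfs zfs_flags=0x10" >> /etc/modprobe.d/zfs.conf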

Exactly, but it doesn't work 100% perfectly, so you still need ECC after all, just to mitigate the write hole and be on the safer side. Adding ECC just applies another layer of protection against possible errors, is all. Absolute integrity.

I do the same, and PS2 is not that fucking big, I have a complete NA collection and it's like 2TB

>best
Let's see how fast your zfs snapshot system is faggot.

Quoting a tripfag, kys faggot.

It really is the question why people like OP build these retarded systems and do not even think about backups

>RAID is a backup in itself

OP, don't embarrass yourself

>Completely offline

You said here that one other system is directly attached. Is also that one offline? Do you not attach any external, potentially infected media like USB sticks or game disks?

This is not an excuse for no backups. Enjoy your data loss.

>Bit flips happen in RAM
Therefore, irrelevant for the purpose of "fully retaining your data".
Btrfs is broken for any non-hobbyist purpose and it's deprecated.
What you cite is irrelevant and isn't even in Wikipedia's ZFS article. OP isn't concerned about resilvering bottlenecks and isn't going to build a Ceph-alike cluster.

You probably aren't even using ZFS, can't even post how fast your superior snapshot system is.

Maybe it's just really really slow and you are too embarrassed to show it?

>PS 2 is not that big
>I have a complete N/A collection
Come back when you literally collect each and every regional release and each and every re-release ever put out there, kid. Including the latest homebrews and so on. An NA collection is fucking kindergartner tier, literal breeze-through baby steps.

You are a shitty RPer, a total complete global PS2 collection is gonna be under 8TB

How does EXT4 compare?

ZFS makes a snapshot in less than 1 second even on a slow-ass 5400 RPM HDD, you dumb fuck. And unlike pretty much any other file system out there, ZFS snapshots can actually be made writable (by cloning them). I DARE you to try and make a non-read-only snapshot on anything other than ZFS.
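If you want to check both claims yourself, it's a couple of commands; a sketch with a made-up dataset name (the writable part strictly goes through a clone of the snapshot):

# time how long the snapshot actually takes
time zfs snapshot tank/games@test
# snapshots themselves are read-only; clone one to get a writable filesystem
zfs clone tank/games@test tank/games-scratch
# clean up afterwards
zfs destroy tank/games-scratch
zfs destroy tank/games@test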

Also
>wee/a/boo
Off yourself.

>What is RAID 1, what is RAID 5, what is RAID 6, what are RAIDs 50 and 60

>You said that one other system is directly attached
No. I literally meant direct work on the system where ZFS is. It's literally a single build.

The snapshot feature is pretty neat. In theory you could turn any invasive system change into a transaction that can be rolled back in case of error.

You are lying and a hypocrite.

Also I already did but you wouldn't know that because you are a moron.

>isn't even in Wikipedia's ZFS article
Get fucked, shitposting lamer.

BTRFS has copy on write by default and it doesn't eat up ram like that lol.

If you are a desktop/laptop user you have zero need for ZFS; it will only eat a lot of RAM and CPU, and there are tons of tools which allow for easy snapshotting.

For enterprise use it's certainly appealing; the only real downside is that it can't be included in the Linux kernel due to the intentional license incompatibility Sun made sure of.

What happened to spoony? Fill me in

>a total complete global PS 2 collection is gonna be under 8TB
It's not. That's the size of PSOne's entire library.
I probably should've mentioned that I'm not counting the size of archived/ECM'd images, but of pure raw ISOs (.ccd+.img+.sub combos in the case of PSOne, Saturn, and similar). And the way I personally store this is two-fold: a raw game image/installer/dump AND a separate archived (usually 7-Zip) copy of it with checksum tables. Basically this means that one 700MB disc takes up 1.4GB in MY library, as I make an additional copy of each and every individual dump and I don't compress my archives (they're just "storing containers" for files, essentially).
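The "storing container" part is just 7-Zip in copy mode; a sketch with hypothetical file names:

# -mx=0 = store only, no compression; the archive still carries per-file CRCs
7z a -mx=0 game.7z game.ccd game.img game.sub
# test the archive against those CRCs later
7z t game.7z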

Best strategy is to “zfs send” encrypted snapshot deltas to an off-site backup location.

Even simple FS snapshots will make you immune from NFS/whatever client side ransomware attacks though.
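A minimal sketch of the send part, with made-up pool/dataset names and an SSH-reachable backup box (SSH handling the encryption in transit):

# initial full copy of a snapshot to the off-site pool
zfs send tank/games@2018-02-01 | ssh backupbox zfs receive -F backup/games
# afterwards, send only the delta between two snapshots
zfs send -i tank/games@2018-02-01 tank/games@2018-03-01 | ssh backupbox zfs receive backup/games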

It's not bad at all, but ZFS is much better if you need a purely and completely protected storage system. Many NAS manufacturers include ZFS as the file system in their embedded OSes these days.

So you are inflating your numbers for no reason, gotcha.

>Many NAS manufacturers include ZFS as their file system in their embedded OSes these days.

A cursory glance at the available products online says otherwise. Predominantly they are Windows or Linux. Is being a disingenuous NEET BSD-loving faggot a full-time job for you?

>You are lying
Get fucked, you dumb onemoo lamer:

RAID is for maintaining uptime after a device outage and is not considered backup.

If a single failure incident like a flakey drive controller, lightning strike, etc. can destroy all copies of data, it’s not backup.

Some people don’t even consider same-site data copies as real backup.

ZFS has the best implementation of copy-on-write, and it requires a lot of RAM for all the right reasons. Your butterfacefs is glitchy, buggy, unstable trash for kiddie autists, on the other hand. ZFS is for mature adult gentlemen. Come back when you grow up, boy.

There is OpenZFS, so ZFS has been FOSS for quite a while now. It's all just Stallman's and Linus' elitist idiocy.

If it's slower and takes more RAM, it isn't better.

In other news, 1+1=2

He's alive, but heavily fucked in the head.
Paranoid as hell and can't take any, even the ever-oh-so-slightest critique at all. He basically turned into full-blown Chris-chan, but much worse. At least Chris-chan was a relatively harmless down syndrome manchild; Noah is just a big fuckhueg elitist asshole, on the other hand.

No, just applying an additional layer of error mitigation, as raw files tend to get broken much more easily than ones put in a zipped archive container. I just apply additional measures against data corruption as I need files to be in absolutely perfect state at all times, is all. If I ever lose a raw game image file, I can always just unzip it from the nearby archived copy and keep on storing it, with no need to download or dump the said file again. And if I were to store only archived files, then I'd have to unzip each and every file each and every time I needed to use it (in the case an emulator or a system doesn't allow loading directly from an archived container, that is), which is just retarded and a pain in the ass.

>There is OpenZFS, so ZFS been FOSS for quite a while now. It's all just Stallman's and Linus' elitism idiocy.

No you idiot, both Stallman and Linus would love to have ZFS in the kernel. However, back when Sun was DYING trying to compete with Linux, they started desperately open-sourcing their tech.

However, they of course realized that if they licensed stuff like ZFS and DTrace in a way that allowed inclusion into the Linux kernel, they would lose the only technical advantage they had, so they crafted a NEW license (CDDL) which was GPLv2-incompatible and thus could never go into Linux.

It didn't help; they still lost to Linux and were bought up by Oracle, who in turn is deprecating everything Solaris because they bought Sun in order to sue Google over Java.

ZFS is everywhere, you cretin. There's a ZFS port for Windows, and Linux has it too. Go Google OpenZFS, you stupid uneducated inexperienced cocksucker.

Just store archived files on 3 different machines in 3 different physical locations. Most emulators can load compressed files directly, and the few that can't, just unzip as needed.

Ah ah ah, it's all alleged, any one of those fucks from TGWTG could have been the father.
Still, seeing him fallen breaks my heart a bit. If only he just stopped constantly seeing himself as a victim.

It's not slower in any way, you debile.

>disingenuity intensifies

Still waiting for that screenshot of you creating and deleting a snapshot.

It only takes seconds right?

XD