He hoards data

>He hoards data
>Doesn't use ZFS
You have 30 seconds to explain yourself.

>his filesystem needs 6GB of RAM

kys feg

>He has idle RAM

Meme filesystem, ext4 can handle a lot more nested directories than zfs

>Not using NTFS
What are you, gay?

how do I add a drive to an array?

Because of their absurd requirements for disk redundancy (aka RAID).

Using any sort of RAID is regarded as 'bad practice'; the recommendation is simply to have double the disks you need and store two copies of your data.
If you try to use any sort of RAID, prepare for absurdly low performance, and when a single one of your disks fails it takes literally WEEKS to rebuild (and good luck if you expect to use the array in the meantime).
And the worst part: you can't expand RAID arrays. I want to set fault tolerance to X drives and, when I plug a new one in, just use it as normal, but ZFS says fuck you.
You NEED to have every drive you will ever want in the array on hand at creation time, and if you ever need more space you just build another array and waste storage space lol.

It may make sense if you have tons of money and can afford to throw away half of your storage space, but it has insane requirements for home usage.

bcachefs and tfs are the future

>NTFS

Christ, what a crybaby. Protip: redundancy and reliability use more capacity.

I only need one copy of anything I can just grab again. Other filesystems are good enough, and that's good enough for me.

This is what people who have never experienced a failed HDD think.

I don't know anything about them. Will I be able to just set fault tolerance to X drives, add in HDs as needed, and have the filesystem expand as expected?
And preferably not need an absurd amount of RAM and CPU power for a simple filesystem?
If so, I'm anxious for the future.

With bcachefs you'll be able to do what you'd like, and have tiering with up to something like 16 tiers. Higher tiers can cache the lower tiers and replicate the metadata on them as well. You can set up striping/parity however you like on any tier, and add and remove devices while online. You can throw in an SSD and set it up to cache the lower tiers, as a writeback cache, a read cache, or both. Pretty much anything goes.

And nah, zfs is only that way because they rushed it out the door for servers with a ton of ecc.

I'm looking forward to tfs's machine learning cache system as well

btrfs RAID56 is getting fixed in kernel 4.12

RAID5/RAIDZ1 failure is overhyped, it's perfectly reasonable to use it with 4 disk vdevs. Expanding your storage 4 disks at a time isn't that unreasonable for the benefits of ZFS, and you only lose 25% to parity. Considering that it's a home setup, the downtime from having to restore from backup in the extremely unlikely case of a drive failure during rebuild is a non-issue to begin with. Also, a ZFS resilver takes considerably less time than a traditional RAID array rebuild.

However, BTRFS will clearly be superior once RAID56 gets implemented properly, since it offers easily expandable arrays.
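For what it's worth, building a 4-disk RAIDZ1 vdev is basically a one-liner; a rough sketch wrapped in Python below, where the pool name and device paths are made-up placeholders:

    import subprocess

    DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholder devices

    # create a pool named "tank" with a single 4-disk raidz1 vdev (~25% lost to parity)
    subprocess.run(["zpool", "create", "tank", "raidz1", *DISKS], check=True)

    # expanding later means repeating the same thing with four more disks:
    # subprocess.run(["zpool", "add", "tank", "raidz1", *FOUR_MORE_DISKS], check=True)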

Assuming 4TB as the smallest reasonable HD size to buy now, that means spending ~500 USD at a time to expand by 12TB.
That is several years' worth of data to me, and converted to local prices it comes to about two months' full salary.
Also, over time you get progressively worse failure odds, since it only takes two disks in the same vdev going bad for you to lose all your data, and you have a fixed 25% storage loss.
If ZFS handled expansion the way BTRFS wants to, you'd get a fixed drive-failure tolerance without depending on luck that two drives in the same vdev won't die at the same time, and much better efficiency, since you'd only lose a few drives' worth of storage space.

I'm gonna wait a few more years until BTRFS gets its shit together or a better FS comes along. Until then I'm just gonna keep good old regular offline backups and deal with bit rot.

pretty much the only way to do piecemeal expansion with ZFS on home-user budgets is to settle for mirrors, and drop in another two-drive vdev when you need to expand. The space efficiency sucks but at least scrubs and resilvers are fast.
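Something like this is all the expansion amounts to once you've settled on mirrors; a rough sketch, with the pool name and device paths made up:

    import subprocess

    POOL = "tank"                           # made-up pool name
    NEW_DISKS = ["/dev/sdc", "/dev/sdd"]    # the two drives you just dropped in

    # attach a new two-way mirror vdev to the existing pool; usable space grows immediately
    subprocess.run(["zpool", "add", POOL, "mirror", *NEW_DISKS], check=True)
    # then eyeball the new layout
    subprocess.run(["zpool", "status", POOL], check=True)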

>I'm gonna wait a few more years until BTRFS gets its shit together
I've been thinking this literally since before btrfs was declared stable, even before it was merged into the kernel. It was supposed to be our savior but ended up being rushed and designed poorly, with code piled on top of piles of code. It was developed with less care about correctness than about new features.

That's why I was saying bcachefs. It's what btrfs should have been, and it's the reason the developer is working on it as well. I'm running RAID 10 on btrfs but I plan on moving to something like an SSD-cached RAID 5+0.

AppleFS doesn't have these problems.

I'm kind of excited about bcachefs now, been reading about it for about an hour now.
While I don't really care much about the cache part of it, it seems like a well-designed system with modern features, developed with code quality in mind.
I just hope it turns out the way btrfs should have, but it seems like it's still a couple of years away. Let's see how it turns out...

the name kinda suggests to me that they care a lot more about the caching/tiering functionality than the fault-tolerance stuff. We'll see how it shapes up over time I guess.

I don't like the name either, he just took his old bcache name and slapped fs on it. I guess it works for riding off his previous work but maybe he'll rebrand before it gets merged

I get my 4TB drives at 80 bucks a pop.

Ebay ya dingus.

>deal with bit rot.
It's fucking hard, man.

Even my backup rots. I only know because I use it as a zpool.
At least I can tell what's taken damage and whether to pull from live or backup, instead of flying dark and waiting for a bitflip to slice something up.

Which drives and where? I got mine for 130, but it is a nice 5200 RPM (lower noise and power usage) HGST enterprise drive which Backblaze tested to be their single most reliable drive, with over 3k test samples.

I never really used ZFS, so how does it work on a single drive? It has no fault tolerance, but it still checksums data and warns you at read time if there has been corruption?

Yeah, I won't disagree there, it's also why I haven't moved to ZFS yet. And the fact that I live in a shithole country while electronics cost more than in the US. Though expanding with 4 disks at a time is still pretty reasonable to me - I could afford that once every 1-2 years, and keep up with my storage needs. It's a bit riskier to keep adding vdevs, but two complete disk failures in the same one is just too rare. An URE might still happen, but that'll only fuck up a block instead of the whole pool, so you'll only need to restore a few files.

I'm just very hesitant to commit to it, especially since other solutions are starting to surface. Other than BTRFS, Stablebit Drivepool promises integration with ReFS integrity streams, and they also plan to develop their own bitrot protection software.

>4TB drives at 80 bucks
ok m8

Yep, on a per file basis.

If you have a single vdev zpool and a backup, you're much safer than using even traditional RAID.
You just have to put in a little elbow grease when something's checksum isn't right.

ZFS makes backups easy with differential snapshots though.
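The nightly incremental is basically just this; a rough sketch with made-up dataset and snapshot names, and only the blocks changed since the previous snapshot actually get sent:

    import datetime
    import subprocess

    SRC = "tank/data"             # made-up source dataset
    DST = "backup/data"           # made-up dataset on the backup pool
    PREV = SRC + "@last"          # last snapshot already present on both sides (made-up name)

    new = SRC + "@" + datetime.date.today().isoformat()
    subprocess.run(["zfs", "snapshot", new], check=True)

    # zfs send -i streams only the delta between PREV and new; recv applies it on the backup
    send = subprocess.Popen(["zfs", "send", "-i", PREV, new], stdout=subprocess.PIPE)
    subprocess.run(["zfs", "recv", "-F", DST], stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()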

I run a RAIDz1. WD blues & greens @ 5400 RPM work for me. If one dies it will get replaced but slow spinning drives rarely fuck up unless you physically abuse them.

Not worth the extra cost for slightly better manufacturing standards in my opinion, when you can just replace the buggers.

What for? I have ext4 and will upgrade to a RAID configuration in the future, too costly right now.

If you just want to detect bitrot, ReFS is a far simpler solution (assuming you haven't moved your data to a linux server already).

It can also repair bitrot if you use it alongside storage spaces, but apparently performance is still horrible.

Not dealing with proprietary software, especially for something like that.
Might use ZFS on my backup drive though, never gave it much thought really. Just assumed it wouldn't fit that use case.
Maybe a script to checksum all files locally and on the backup, but ZFS might just be simpler. Thanks for the idea.
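For reference, the dumb version of that script would be something like this (the paths are just whatever you point it at):

    import hashlib
    import os
    import sys

    def sha256(path, bufsize=1 << 20):
        # hash a file in chunks so huge files don't eat all your RAM
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                h.update(chunk)
        return h.hexdigest()

    def walk(root):
        # yield (relative path, checksum) for every file under root
        for dirpath, _, names in os.walk(root):
            for name in names:
                full = os.path.join(dirpath, name)
                yield os.path.relpath(full, root), sha256(full)

    # usage: python check.py /path/to/live /path/to/backup
    live, backup = sys.argv[1], sys.argv[2]
    backup_sums = dict(walk(backup))
    for rel, digest in walk(live):
        if backup_sums.get(rel) != digest:
            print("MISMATCH OR MISSING IN BACKUP:", rel)

Downside is it only tells you the two copies differ, not which side rotted, which is exactly where ZFS checksums win.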

Might want to look into SnapRAID too, which is open source. It offers almost the same protection as ZFS, just not in real time.

mdadm RAID is the production-ready solution now, and cloud-style distributed filesystems look like the next usable, stable best choice (eventually they'll manage both local and remote storage with every concern covered: data integrity, redundancy, deduplication, encryption, efficient use of hardware that can be added and removed at any time with no effort, ...)
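And to be fair, it's dead simple to stand up; a rough sketch, where the device names and array size are placeholders:

    import subprocess

    DISKS = ["/dev/sd" + letter for letter in "bcdefg"]   # six placeholder devices

    # 6-disk RAID6: survives any two drive failures, then plain ext4 on top
    subprocess.run(["mdadm", "--create", "/dev/md0", "--level=6",
                    "--raid-devices=6", *DISKS], check=True)
    subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)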

It is great as far as RAID goes, but it does not protect against bit rot.

>You just have to put in a little elbow grease when something's checksum isn't right.
If I remember correctly, if ZFS finds a checksum error and can't fix it due to lack of redundancy, it will tell you which files are affected. Restore them from backup and you can clear the error; it won't kill the whole pool like a failed vdev will.
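The whole recovery dance is roughly this, if memory serves; a sketch with a made-up pool name:

    import subprocess

    POOL = "tank"   # made-up pool name

    # "zpool status -v" prints the paths of files with permanent (unrecoverable) errors
    subprocess.run(["zpool", "status", "-v", POOL], check=True)

    # ...restore the listed files from backup by hand, then re-verify and reset the counters
    subprocess.run(["zpool", "scrub", POOL], check=True)
    subprocess.run(["zpool", "clear", POOL], check=True)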

I think you can "scrub" it, but because it adheres to the "no layering violations" philosophy, all it can tell you is that disk block X is inconsistent. It can't rebuild from redundancy (no checksums) or tell you what file you need to restore (because it doesn't know anything about the filesystem above it)

B-but user, I use tempfs, the only filesystem that respects my privacy.

bro you dont even know

600 bucks buys you a lot of ZFS

theres no excuse for anyone

If you can tell there has been data corruption but can't correct it or tell what it affects, it's of no real use.
Gonna look further into ZFS on an external drive; it gets me some experience in case I ever use it as my main FS, and it helps against bit rot (against which I currently have no protection whatsoever).
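From what I've read so far, a single-disk pool for a backup drive looks like it's about this much (pool name and device path made up, so don't quote me):

    import subprocess

    DISK = "/dev/sdX"        # whatever the external drive shows up as (placeholder)
    POOL = "extbackup"       # made-up pool name

    # single-disk pool: no redundancy, but every block still gets checksummed
    subprocess.run(["zpool", "create", POOL, DISK], check=True)
    # optional: store two copies of every block so ZFS can self-heal isolated rot
    # (only affects data written after the property is set)
    subprocess.run(["zfs", "set", "copies=2", POOL], check=True)
    # run this periodically to verify everything against its checksum
    subprocess.run(["zpool", "scrub", POOL], check=True)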

My home server has this horrible setup of 6x500GB doing RAID through mdadm, LUKS, and ext4.

It works, but it's disgusting.

But I cannot be bothered to fix it.

this is very nice cabling

also what case?

Thank you.

It's called: "X-Case RM 400/10 Short V5 500mm 4u ATX Rackmount"

I found it at xcase.co.uk/ but it doesn't seem to be there anymore.

I ask because you did a whole hell of a lot better than I was able to. I want to neaten it up but several attempts at that have gotten nowhere. Your rack overall looks nicer too. One boot drive, one 1TB scratch disk, and 12x3TB drives in six mirror vdevs. It's in a cheap Rosewill case from Newegg.

I have to open it up to replace a disk later this week, maybe this time I can make it less awful.

The key is to do the data cable management before you do all the other ones, and then install the power cables at the very end.

Also, use zip ties to bundle SATA cables together. They are flat, so they combine very well together.

Once you get that done, you add the power cables and ruin the whole look with them.

> 2017
> Not using JBOD & multiple google drive unlimited accounts as a backup

Why didn't you align the cooling fins of the CPU cooler with the direction of the case airflow?

Isn't it 1GB of RAM per TB?

ZFS supports dedup and compression.

If you're gonna troll, make it halfway believable.

Don't be a fucking faggot pls. Example: one of my accounts, using rclone to encrypt & decrypt the data, then mounting gdrive as a volume with FUSE on my server.

You can buy unlimited accounts on eBay for $15, fag

2 Accounts mounted as vaults
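The mount itself is nothing special either; a rough sketch, assuming you've already set up a gdrive remote plus a crypt remote on top of it with rclone config (the remote name and mountpoint here are made up):

    import os
    import subprocess

    MOUNTPOINT = "/mnt/gdrive"     # made-up mountpoint
    os.makedirs(MOUNTPOINT, exist_ok=True)

    # files get encrypted client-side by the crypt remote before they ever reach Google;
    # this runs in the foreground and serves the mount until you unmount or kill it
    subprocess.run(["rclone", "mount", "gcrypt:", MOUNTPOINT])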

Well it was a long time ago I put that heatsink on (it's a Nehalem and used to be in my desktop until it got bumped to the server when I upgraded last year), but if I remember correctly, it's because I couldn't due to the peculiarities of the mounting bracket and/or the heatpipes on the right side fouling either the PSU or the chipset heatsink.

Yeah it'd be better to have some 92mm tower cooler that works with rack airflow instead of that old blowdown sink, but various combinations of laziness and stinginess keep it there. This is helped by the fact that it works okay as-is, CPU temps under load only get to 60C or so.

I use CEPH instead.

Get on my level,

F A G G O T
A G G O T F
G G O T F A
G O T F A G
O T F A G G
T F A G G O

>no trigger discipline

>hoarding data
0 purpose

I seriously hope you have ECC memory.

Where do these accounts come from? Some university research center?

I don't have ECC RAM.
Nor do I have the money to buy enough software RAID cards and enclosures to justify the initial investment.

That, and I have had loads of problems with fucking BITROT on ZFS.

you made me laugh

ill tell you a joke:

NTFS

>ZFS
>bitrot

here I am thinking about how I need to replace my dinky NAS which is basically EOL and we have a storage thread on Sup Forums.

desu I'm leaning towards just using LVM

>ceph

Me too, I'm thinking about centralizing my storage (moving most hard drives out of my desktops except boot drive and plopping them in my home server) and honestly I think I'll just go with XFS on LVM, especially since I'm not gonna run a *BSD based server.

is bitrot a real fucking thing or am i being baited

yes, ok, entropy and quantum tunneling slowly make electric charge and magnetic polarity go away, but seriously? why have i never heard of this before in my life

yea my desktop / linux server just have small-ish SSDs in them as main boot disks, all my media and other shit is dumped onto my NAS.

I've used ZFS before and I don't know if I want to deal with the extra config / management overhead of that vs just LVM
I gotta build a new linux box either way I guess. I didn't plan well when I built my current linux server - it's pretty small resource-wise and there's no room to upgrade, basically got to build a whole new box

You don't need ECC; you will likely never experience bitrot. If you do, your hardware is probably defective in some way. The only other explanation is that you somehow live in the cosmic ray party room of the Earth.

XFS is faster

All memeing aside, does ZFS have any noticeable improvement over NTFS on SSDs that people should care about?

ECC isn't really for fixing random bitflips from cosmic rays, it's for detecting and compensating for weak capacitors or bitline sense amps that fuck up repeatedly, but only every couple hours or days.

Meh, the whole bitrot / RAID write hole / data corruption thing has gotten really out of hand. Yes, if your server loses power while writing data, some data will get lost, damaged, etc. Just the nature of the beast. The fact that it's in a RAID config has nothing to do with it. The only way to prevent this is to use a UPS that will, in the event of a power failure, shut down the server properly before the UPS battery dies. It's good practice to use a UPS anyway, no matter whether you use RAID or not.
Drives die, that's what backups are for. Files get deleted by mistake, that's what shadow copies (previous versions) are for, so you can go back to yesterday or a month ago and retrieve those files without having to touch your main backup. If your whole server just dies, well, that's what a system image + full data backup is for. Will you lose data at some point? Oh yes. But with a good plan, maybe at most a day's worth, depending on how often you run a backup job.

To ensure maximum data safety, keep your backup devices (NAS, external USB, whatever) shut down when not in use. That way when you do lose power they won't be affected, plus the drives will last longer.

>not using murderFS

> waste of trips on shit advice

unless it's permanently offsite (and preferably continuous/live) and thus not arbitrarily powered down, it's not really backup.

RAID6 with an XFS filesystem works just fine.

I have a nonredundant array of expensive disks. Haven't lost anything yet.
>tfw non-production and can take your time making sure a hard drive isn't a lemon

You only need that much RAM if you're going to be using deduplication in FreeNAS; a lot less is fine. At worst it'll perform worse.

You can always just go Nas4Free instead, for a lot less bullshit and simpler setup.

>ceph

Yes, CEPH. It's master race.

Unused RAM is wasted RAM

>I'm gonna wait a few more years until BTRFS gets its shit together
You sure about that m8? I've been thinking that for like 10 fucking years now.

Unspent money is wasted money.

Why not EXT4 instead of XFS? I heard XFS performs worse in most cases and is really only suitable for extremely large operations (talking petabytes here). Both should be plenty stable, but I heard EXT4 has better support for recovery when things go bad.
Also, good luck with bit rot. Unless you just want to save money (or are worried about uptime), I'd personally buy a few more HDs and use them as a regular offline backup. More reliable by any measure, especially since you aren't protecting yourself against bit rot anyway.

You don't have 20 terabytes of weeabo horseshit sitting around.

Or bcachefs then. God, we should have a better solution by now.
If in about 3 years things don't improve I'm probably going to give up and use ZFS. Or maybe give up on my obsessive hoarding and just keep using regular backups like everyone else...

This is very true.