So I'm building a retarded xbox hueg storage server for my own stuff and an FTP server for my friends and family, as well as running a Storj node for (hopefully) profit.

the entire build is basically as follows:

NORCO RPC-4220 case for 24x Seagate 4TB hard drives (the most cost-effective drives right now)
Adaptec RAID 72405 to control those 24 drives in a giant ZFS pool (rough layout sketched below)
Supermicro X11 (haven't decided on a specific model) + Xeon E3-1220v6 + 64GB of DDR4 ECC RAM
Load balancing router with 2x 100/40 fiber links
1x SSD for OS
1x SSD for ZFS cache
Redundant power supply + UPS
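
something like this is the pool layout i have in mind, a sketch assuming 4x 6-drive raidz2 vdevs (pool name and device ids are placeholders, and the RAID card would have to pass the drives through raw/JBOD for ZFS to see them):

  # one pool out of four 6-drive raidz2 vdevs (~64TB usable from 24x 4TB)
  # by-id style names so the pool survives drives being reordered
  zpool create tank \
    raidz2 ata-ST4000_01 ata-ST4000_02 ata-ST4000_03 ata-ST4000_04 ata-ST4000_05 ata-ST4000_06 \
    raidz2 ata-ST4000_07 ata-ST4000_08 ata-ST4000_09 ata-ST4000_10 ata-ST4000_11 ata-ST4000_12 \
    raidz2 ata-ST4000_13 ata-ST4000_14 ata-ST4000_15 ata-ST4000_16 ata-ST4000_17 ata-ST4000_18 \
    raidz2 ata-ST4000_19 ata-ST4000_20 ata-ST4000_21 ata-ST4000_22 ata-ST4000_23 ata-ST4000_24
  zpool status tank   # sanity-check the layout before loading data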

Can't wait for my garage to sound like a wind farm

I was thinking of running this on the latest stable Debian release. Can you autists think of any reason why this distro wouldn't work, or why another would be more suitable?

no, it's fine for what you're trying to do. i suggest you use SFTP over SSH instead of plain FTP
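e.g. a few lines of sshd_config get you chrooted, sftp-only accounts for the friends and family (group name and path are just examples):

  # /etc/ssh/sshd_config -- sftp-only, chrooted access for the 'sftponly' group
  Subsystem sftp internal-sftp

  Match Group sftponly
      ChrootDirectory /srv/sftp/%u    # must be owned by root and not user-writable
      ForceCommand internal-sftp
      AllowTcpForwarding no
      X11Forwarding no

then adduser --ingroup sftponly yourmate and reload sshd.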

Why so much ram if it's just for storage?

EPYC NDA lifts tomorrow.
It may be worth checking it out:
videocardz.com/70266/amd-epyc-7000-series-specs-and-performance-leaked

I'd suggest glftpd with SSL support. nothing beats the ASCII scripts with top-leecher stats and so on

I've heard ZFS memory requirements are surprisingly high, and memory is cheap compared to the 1000+ I've already spent on the case and the RAID card

Not OP, but the usual rule of thumb is that ZFS needs ~1GB of RAM per TB of storage
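that's just a rule of thumb though; the ARC is tunable if you ever want to cap it. a sketch capping it at 16GiB (pick your own number, value is in bytes):

  # /etc/modprobe.d/zfs.conf -- cap the ARC at 16GiB
  options zfs zfs_arc_max=17179869184

  # or set it on a running system without a reboot:
  echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max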

Thanks. I thought maybe it was to buffer the FTP uploads or something.

Good input actually, I'll look into it. I agree plain FTP wouldn't be a clever idea
oh, mad. well, i'd consider it, but i really want to go with a Supermicro board, and who knows how long they'll take to make a compatible board (if ever)
there's no way i'm trusting my data to prosumer motherboards

well, if i've got 96TB of storage i'll probably need to step my board up to allow for that. 128GB of RAM it is...
doesn't matter to me anyway, the numbers give me a hardon and i can spend the cash

This is a meme, you can fuck off m8

OP, are you sure you want ZFS?

In your case, increasing the array size would essentially require you to buy another 24 drives (not sure if there's a scenario where you'd be fine with fewer).
Also, that CPU is wasted on a pure NAS (unless you already have a proper server as a virtualization host, but then you shouldn't run ZFS in a VM, so).

Also, I'd go for a used E5 v3 or something along those lines for a server.

wait, i wasn't aware of this behaviour. is it a problem to later add larger drives piece by piece? would it be better to make multiple pools? i guess i should read up on ZFS more. having it as one big lot of storage is not essential to my needs

looking at the documentation, isn't it just as easy as removing a disk from the pool and then adding the new one? am i missing something here?
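i.e. per disk it would be something like this (device names made up), with autoexpand on so the vdev grows once every member has been swapped:

  # swap drives one at a time and let each resilver finish
  zpool set autoexpand=on tank
  zpool replace tank ata-ST4000_01 ata-ST8000_NEW
  zpool status tank   # watch the resilver progress
  # capacity only increases after ALL drives in the vdev are replaced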

Yep, you'd need more pools then. It's because in an enterprise scenario you don't give a fuck about buying another 100 drives.

Smaller pools would help, obviously, but then you'd have multiple pools.

That was the knockout for me with FreeNAS. I'm really happy with my Hyper-V host though. I just run all the drives in a fileserver VM and use Drive Bender on those. CRC checking and deduplication are also handled by the Windows Storage Pool thingy, which isn't actually half bad.

With specs like yours, you could run so many VMs. (You need fast storage for VMs though; it's the only limiting factor in your case.)

replacing is easy
adding more drives is the problem
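concretely, "adding" means bolting a whole new vdev onto the pool, e.g. (device names made up):

  # the only way to grow a pool: add another complete vdev
  zpool add tank raidz2 ata-NEW_1 ata-NEW_2 ata-NEW_3 ata-NEW_4 ata-NEW_5 ata-NEW_6
  # and there's no taking that vdev back out afterwards -- it's permanent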

"some of the caveats that come with ZFS:

Once a device is added to a VDEV, it cannot be removed.
You cannot shrink a zpool, only grow it."

ok, thanks for bringing this to my attention. seems like i'll have to think a bit harder about the file system

only grow it, sure, but only by adding the same number of drives again.

raid1 * 12
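i.e. a pool of striped two-way mirrors; a sketch with made-up device names:

  # striped mirrors: fast resilvers, and the pool grows two drives at a time
  zpool create tank \
    mirror ata-A1 ata-A2 \
    mirror ata-B1 ata-B2 \
    mirror ata-C1 ata-C2
  # ...repeat up to 12 pairs, then later growing is just:
  zpool add tank mirror ata-M1 ata-M2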

nice build
Debian is fine/perfect
don't use unencrypted protocols
manage your certificates well
you may want to look into ownCloud (easy for normies to use + has a mobile app)
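for the certificates bit, Let's Encrypt makes it painless; a sketch with a placeholder domain:

  # grab a free cert (standalone mode briefly binds port 80 for the challenge)
  apt install certbot
  certbot certonly --standalone -d files.example.com
  # certs land in /etc/letsencrypt/live/files.example.com/; renewal is just:
  certbot renew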

ZFS L2ARC is a trap unless your working set doesn't fit in memory.
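check your hit rate before burning an SSD on it; if the ARC is already hitting, L2ARC buys you nothing:

  # ARC hits vs misses straight from the kernel stats
  awk '$1 == "hits" || $1 == "misses" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
  # only if the working set genuinely doesn't fit in RAM:
  zpool add tank cache ata-SSD_CACHE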

does this look like a prosumer motherboard to you?

no but it looks incredibly expensive and unnecessary for my needs

Threadripper is the one getting prosumer motherboards; EPYC is strictly enterprise-grade stuff, but of course there will be 1S boards

ah i see

look into btrfs
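its big win over ZFS here is growing one disk at a time; a sketch with example device paths:

  # btrfs has no fixed vdev layout -- add single devices and rebalance
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
  mount /dev/sdb /mnt/tank
  btrfs device add /dev/sdd /mnt/tank
  btrfs balance start /mnt/tank   # spread existing data across the new disk
  # (steer clear of btrfs raid5/6 though, it's still considered unstable)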

In the long run you'll save by going with larger drives initially. Fewer drives overall means less chance of failure, fewer drives replaced, and more room to expand when they do eventually drop in price.

It sucks having to pay almost 20 to 30 bucks more per terabyte though. I do see your point, however.

Looks good

You should be able to get 8TB drives for around 30/TB. They'd be white-label enterprise HGSTs, but they're out there.

>replacing is easy
>adding more drives is the problem
Not really. You just have to consider beforehand how you set up your vdevs. You don't need another 24 drives to increase space; in fact, you can add as many as you want in whatever fashion. Just keep in mind, ZFS was designed to be planned out extensively beforehand.
And fuck these people who say it needs 1GiB of RAM per TiB of storage. Not only is that figure really about deduplication, but the ARC doesn't give a fuck how much memory it has available. If you have unused RAM, ZFS will use it. And if you need it for something else, the ARC will free up memory accordingly.
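you can literally watch it do that (values in bytes: current size, target, and ceiling):

  # current ARC size vs its target and hard cap
  awk '$1 == "size" || $1 == "c" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats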