Servers

Sup Forums, my goal is to build a proper server. However, I'm starting to realize this is going to be insanely expensive.

For starters, I was deciding which HDDs I want when I came across this:

enterprisestorageforum.com/storage-technology/sas-vs.-sata-5.html

Basically, using SATA (consumer) drives for RAID is a terrible idea, since the article claims an effectively 100% failure rate when rebuilding an array around the 10TB mark. That means I'd have to buy expensive enterprise SAS drives. SAS drives cost around $200 each, and since I need 30 drives, that's $6,000.
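If I'm reading the article right, that figure comes from the usual 1-error-per-10^14-bits URE spec on consumer SATA drives. Rough napkin math behind it (the spec value is assumed from a typical datasheet, not measured):

```
# Chance of hitting at least one unrecoverable read error (URE) while
# reading N terabytes, assuming the consumer-drive spec of 1 URE per 1e14 bits.
awk -v tb=10 'BEGIN {
    bits = tb * 1e12 * 8              # TB read, converted to bits
    p    = 1 - (1 - 1e-14) ^ bits     # P(at least one URE during the rebuild)
    printf "P(URE while reading %g TB) ~= %.0f%%\n", tb, p * 100
}'
```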

Next is the RAID controller. These are expensive too for SAS (3ware, Areca, Intel, etc.).
Then I need to build the actual server:
32GB of ECC DDR4 RAM will run me about $400
Intel Xeon, around $300 (unsure of the exact core count/clock speed I need)
Server mobo that supports ECC, $150-300
Rack cabinet (10U minimum)
Rackmount chassis

Is this info right, or am I reading something wrong? Does a good site/guide exist that goes in depth on building servers? (I don't consider myself a consumer with the amount of data I have.)

Other urls found in this thread:

youtube.com/watch?v=MyK7ZF-svMk
youtube.com/watch?v=Iz52pi1Y4_o
synology.com/en-us/products/DS1517
backblaze.com/blog/hard-drive-failure-rates-q1-2017/
enterprisestorageforum.com/storage-technology/sas-vs.-sata-5.html

that picture fucking top kek

An HP C7000 blade chassis weighs nearly 450 lbs fully configured; that fucking rack would sink through the floor.

please lend me your server expertise

What do you need a proper server for?

Having 30TB+ of data means consumer-grade equipment isn't really going to cut it.

I need fault tolerance (RAID10).
I need data integrity.
I need an efficient file system that can handle that much data (ZFS).
HDDs need to run 24/7 with low energy consumption.

I don't plan on actually putting it on the internet. It'll be local network only.

Recommended vids for you, OP

youtube.com/watch?v=MyK7ZF-svMk

youtube.com/watch?v=Iz52pi1Y4_o

thanks

ZFS wants JBOD. I strongly advise against putting ZFS on top of a hardware RAID array.

That is to say, create your RAID10 using ZFS itself.

30TB is nothing. Just buy a bunch of 8TB Seagates and use ZFS's software equivalents of RAID 5 or 6 (raidz1/raidz2).
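For reference, a raidz2 pool is basically a one-liner; something like this (pool name and disk IDs below are placeholders, use your own from /dev/disk/by-id):

```
# One raidz2 vdev of six 8TB disks: ~32TB usable, survives any two drive failures.
zpool create tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
zpool status tank
```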

Reported to the FBI.

You do realize that's going to be 90TB when I'm done, right?

The RAID has to be 60TB raw if it's RAID10, which leaves 30TB of usable space, and that usable space will be filled by my current 30TB set.

Then I'll have 30TB of JBOD remaining.
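Napkin math on the drive count (numbers below are mine, adjust to your actual drive size):

```
usable_tb=30                        # target usable space
raw_mirror_tb=$(( usable_tb * 2 ))  # mirrors/RAID10 store every block twice
jbod_tb=30                          # extra standalone disks on top
echo "mirrored raw: ${raw_mirror_tb} TB, total raw: $(( raw_mirror_tb + jbod_tb )) TB"
# With 8TB drives that's 8 disks in mirror pairs plus 4 JBOD disks = 12 drives total;
# you only get anywhere near 30 drives if you stick with much smaller disks.
```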

What should I use?

upload it to google drive

Also, using large 8TB drives in a RAID is a terrible idea, if I recall correctly.

Rebuilding is going to be a nightmare, and those large drives tend to be much slower. Also, the higher the capacity, the longer the rebuild and the more likely something fails during it.
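Rebuild time is basically capacity divided by sustained throughput, so at best (the 180 MB/s figure is a guess for a 7200rpm 8TB drive):

```
# Best-case rebuild time for one 8TB drive, assuming a fully sequential
# resilver with no other load on the array.
awk 'BEGIN {
    tb = 8; mbps = 180
    hours = tb * 1e6 / mbps / 3600
    printf "~%.1f hours minimum for an %g TB drive at %g MB/s\n", hours, tb, mbps
}'
```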

I'm not paying $300/month for my storage.

Based on your specs, that's the professional-grade equivalent of an entry- to mid-level SAN: Dell Compellent or EqualLogic, Nimble, etc.

Since what you're trying to build is a storage array with a high degree of reliability, I would not at all recommend cobbling it together yourself, but hey, if that's what ya gotta do.

>I would not at all recommend cobbling it together yourself
Is it not the norm for Sup Forums folk to build their own enterprise-tier servers?

You *could* casually call it a software RAID10, but in ZFS terms it's a pool of mirror vdevs (a "striped mirror"). You create a bunch of two-disk mirror vdevs, and the zpool stripes data across all of the mirrors.
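In practice it's just a list of mirror vdevs on the zpool create line; roughly like this (pool name and device IDs are placeholders):

```
# Four two-way mirror vdevs; ZFS stripes data across all four mirrors.
zpool create tank \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    mirror /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
    mirror /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8
# Grow the pool later by adding more mirror pairs:
zpool add tank mirror /dev/disk/by-id/ata-DISK9 /dev/disk/by-id/ata-DISK10
```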

>What should I use?
Use a SAS host bus adapter (HBA) that only functions in JBOD mode. For ZFS to work properly, it needs to see the drives directly, without any interference from hardware RAID.
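Once the HBA is in, sanity-check that the OS sees raw disks rather than RAID volumes, and build the pool on the stable by-id names (a quick sketch):

```
# Every physical disk should show up individually with its real model and serial.
lsblk -o NAME,SIZE,MODEL,SERIAL
# Use these persistent names for zpool create instead of /dev/sdX,
# which can get reshuffled between boots.
ls -l /dev/disk/by-id/ | grep -v part
```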

Yeah, maybe, but it doesn't sound like a very interesting project/hobby. Enterprises certainly don't build their own, for the most part. It gets boring very quickly.

Forget ZFS.

synology.com/en-us/products/DS1517

Keep it easy; your needs don't seem high enough to warrant all that shit.

> Basically, using SATA(consumer drives) for RAID is a terrible idea, since it'll have a 100% failure rate after rebuilding the array at 10TB.
That is just retarded.

> Next is the RAID controller.
Not really, no. Software RAID with Linux mdadm or the like is often used in real servers.
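e.g. an mdadm RAID10 over four disks is a couple of commands (device names below are examples):

```
# Create a 4-disk RAID10 array and watch the initial sync.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
cat /proc/mdstat
# Put a filesystem on it and persist the array definition
# (config path varies by distro: /etc/mdadm/mdadm.conf or /etc/mdadm.conf).
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```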

Or software cloud replication for server clusters - they generally don't do RAID either.

> Rackmount
Helpful if you want more than just one server. Otherwise, various regular cases are actually pretty decent in terms of maintenance access these days.

> Intel Xeon
Not sure you need that either. Some of them are good CPUs for power efficiency, but so are plenty of others.

>$300/mo
That's cheap compared to the project you're about to undertake. There's really no way to do this cheaply.

Also, RAID is not a backup. I cannot stress this enough. RAID and ZFS are availability tools and storage abstractions; they do not make you invulnerable to data loss. At the storage capacity you're talking about, LTO-7 tape actually starts to make sense for backups. Just do something. Don't rely on your primary storage being reliable forever.
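With ZFS, dumping a snapshot to tape is simple enough; a rough sketch (pool name and tape device are assumptions, and LTO-7 is only ~6TB native per cartridge, so a 30TB set means multiple tapes or a proper backup tool):

```
# Take a point-in-time snapshot and stream it to the (non-rewinding) tape drive.
zfs snapshot tank/data@backup-$(date +%F)
zfs send tank/data@backup-$(date +%F) | dd of=/dev/nst0 bs=1M status=progress
# Rewind and eject when the stream finishes (mt comes from the mt-st package).
mt -f /dev/nst0 rewind
mt -f /dev/nst0 offline
```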

Build their own server - yes. And it's a good choice in many situations.

Enterprise tier - no. Few in Sup Forums have entire 20U+ racks filled with storage and processing power, or even "clouds" of such racks.

>That is just retarded.
Most consumer drives aren't built to be thrown into RAID arrays; SATA drives are a liability beyond a certain size/count point in server environments.
SAS drives are inherently better at protecting themselves from critical hard errors and silent corruption. Using SATA drives would put OP at risk of killing his ZFS array(s).

>Otherwise, various regular cases are also actually decent in terms of maintenance access these days.
OP stated he's going to be running 30+ drives; there is no way in hell a regular case will properly cool those.

>Not sure you need that either. Some of them are good CPU for power efficiency, but so are others
He will need the Xeons for ECC memory, which is basically a requirement for software RAID if you care about data integrity.

> SATA drives are a liability beyond a certain size/count point in server environments.
Lel no. Where the fuck did you hear that BS?

Apart from being used in all the really "big data" storage clouds, you'll also see that one of the few companies that regularly publishes exact statistics runs a lot of them:
backblaze.com/blog/hard-drive-failure-rates-q1-2017/

That is by far a bigger deployment than OP is talking about, so even if there were such limits (there aren't), they're clearly higher than OP will ever go.

> OP stated he's going to be running 30+ drives, there is no way in hell a regular case will properly cool those.
Many cases ventilate their drive bays perfectly adequately. Not a problem.

> He will need the Xeons for ECC memory, which is basically a requirement for software raid if you care about data integrity.
It's a possible simplification, but not a requirement for shit.

When you get checksum errors, you can just test your RAM to figure out whether you've got a regularly stuck or flipping bit or something similar.
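i.e. let a scrub surface the checksum errors, then hammer the RAM separately (pool name and test size below are arbitrary):

```
# Scrub the pool and check whether any single device racks up CKSUM errors.
zpool scrub tank
zpool status -v tank
# If the errors look random rather than tied to one disk, test the memory;
# a bootable memtest86+ run is more thorough than this userspace tool.
memtester 2G 1
```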

Also, other non-Xeon processors support ECC RAM too.

enterprisestorageforum.com/storage-technology/sas-vs.-sata-5.html

Again, backblaze.com/blog/hard-drive-failure-rates-q1-2017/

They operate 80k+ drives and it works, so what is the argument?

>have shit on ext4 partitions
>keep finding shit in "lost+found" folders
wtf. I thought ext4 was supposed to be good. why is my shit getting "lost and found"
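Stuff lands in lost+found when fsck recovers orphaned inodes after an unclean shutdown or disk errors, so check both the filesystem and the drive (device name is an example):

```
# Re-check the ext4 filesystem (unmount it first, or do it from a live USB).
umount /dev/sdb1
fsck.ext4 -f /dev/sdb1
# Then look at the drive's SMART data for reallocated or pending sectors.
smartctl -a /dev/sdb | grep -Ei 'reallocated|pending|uncorrect'
```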

rm -rf

You forgot to say sudo and --no-preserve-root.
newfag/10