NAS building

So I finally decided to build myself a NAS. My current plan is RAID 5 with 4x1TB hard drives. Does anybody have experience with which motherboard I should go for? I'm going to reuse an old PC case that came with a mini ATX board, but the board is about 10 years old and broken.

Any suggestions on the build?

What OS are you going to use, FreeNAS?

I was considering making a post myself. I need to replace my current RAID setup.

I want at least 8TB, but I only have room for 3 HDDs. Maybe I can get a 4th in there.


Anyway, is HGST the only real choice? Everything else seems to have an awfully high infant mortality rate.

Are you putting this machine in Antarctica, where you'd have to fly someone in when 3 out of 4 drives go bad? You don't even need RAID unless you can't have any downtime.

Well, you wouldn't be able to make a mirrored RAID with 3 drives, so just do RAID 5 like OP.

At my old job we sometimes used Seagate with no complaints.
I tend to only use WD drives myself, though.

What if he has data he doesn't wish to lose?

I haven't heard great things about RAID 5; it looks like there's a double-digit percentage chance of losing the array while rebuilding.

A lot of cheap routers have USB 3.0 ports and automatic NAS options.

I am using RAID 0. I don't need redundancy.

Ah, of course. I tend to assume that when people just say RAID they mean RAID 1 or above.

>my current plan is RAID 5 with 4x1TB hard drives
Unless you already have the drives, go for bigger ones.
I'm talking 5TB+ drives. You're just limiting your expansion options by going with 1TB, and smaller drives are also more expensive per gigabyte.

Probably Debian.

No idea, this is the first time I've ever cared about hard drives. I have a few lying around and will replace them when they start dying.

It has to be replaced when 1 drive fails, actually. I just want to make sure that none of my data gets lost. Ever.

It should be fine with 1TB drives, since that percentage rises with higher-capacity drives.

Mine doesn't, and I want to stay flexible so I can expand in the future.

I have 3 and want to buy one more.

>It has to be replaced when 1 drive fails, actually. I just want to make sure that none of my data gets lost. Ever.
Probably shouldn't use RAID for backups then.

>RAID doesn't help against drive failure
this is you

It's not a backup solution. It never was. The sooner you realize this the easier your life will be.

>Well, you wouldn't be able to make a mirrored RAID with 3 drives, so just do RAID 5 like OP.

Do keep in mind, 8TB net in RAID 5 means having 3x4TB HDDs.
Rebuild times on those are a PITA.

But I don't want to transfer 2TB+ onto DVDs and keep that somewhere forever, together with the drive.

This is why rsync exists

>drive failure
>drive failure
>drive failure

Reading comprehension much? If he loses his data because of a virus/accidental deletion/etc, it's not RAID's fault.

>deleted files can easily be recovered in RAID
(You)

> RAID 5 with 4x1TB hard drives
Don't do it. If one drive fails, the other three drives may be close to failure too, so the intense reading during a rebuild (~2.5 hours) may kill the array. I'd suggest RAID 6, but if you're going to have a 4-drive array, do RAID 10.

RAID is protection against:
>drive failure
>deleted files

pick one and only one

>I just want to make sure that none of my data gets lost. Ever.

Reading comprehension much?

>if he loses his data because of a virus/accidental deletion/etc, it's not RAID's fault

If he loses his data because RAID fucks up, that will be RAID's fault.

>corrupted files can be recovered
(You)

Then tell us, o wise man, how would you protect data against drive failure without RAID?
And one more thing: fuck RAID 5.

...

What is the best 8TB HD to buy, and why is it HGST?

I'm already using Syncthing for documents etc, but this is for archiving a fuckton of data

This is just software-level RAID 1,

and you lose the advantage of higher read speeds.

I'm sort of in the same boat. I'm thinking of using a two-drive 4TB RAID setup, but I don't like the idea of having to leave it on all the time; I do like the ability to stream shit, though. On the other hand, I don't do regular backups since I don't download stuff often enough to justify it.

I'm thinking of having the two 4TB drives in my current system (one as the main backup, the other as secondary) and also keeping a backup on an external drive.

If you want to use RAID for uptime, this is fine. It will never replace backups. That's not what it's for. I will say it until I'm blue in the face, and Sup Forums can make fun of me until they find corruption across multiple drives. It happens.

Nope. Try again user.

No one here has said it's a replacement for a backup. You can use RAID and still keep a backup. It's what I do with my NAS.

Let's do a cost analysis of this example
>4x 1TB NAS-optimized HDDs
That's about $70 per drive, so a total of $280
>RAID5
You're getting a total of 3TB of actual capacity, so that's $93.33 per TB of storage
You'll also need a motherboard that supports RAID5 from the chipset, so that's a high-end H77, H87, H97, H170, or H270 motherboard that you'll need to invest in (minimum of $100 for an mATX model)
OR
A cheap $40 motherboard and a PCIe RAID controller card (that's at least $100 for a shitty PCIe x1 model)
You're looking at around $127-$140 per TB for a RAID5 NAS just on parts that enable you to use RAID5.

I can tell you that 2 standard 3TB hard drives cost about $90 each, and there are ways to enable RAID1 or mirroring via software without needing RAID1 support from either the chipset or an external SATA controller. Hell, you can fit a 2-drive RAID1 array of 4TB Toshiba X300 models in the same budget as your 3TB RAID5 array.
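
One way to do that on Linux is mdadm; a rough sketch, with the device names being placeholders (check lsblk for yours):

# plain two-disk software mirror, no chipset or controller RAID needed
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0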

And that's not even getting into the downsides of trying to rebuild a degraded RAID5 array when one drive out of the four fails, or the fact that you don't benefit from striped or striped-parity read bandwidth because you're bottlenecked by the 1GbE network connection anyway, making any modern single hard drive more than adequate to read from.

Someone here said they didn't want to lose data ever and suggested RAID was the way to solve this problem.

How often do RAIDs fail?

Then that someone was fucking wrong and a bucket of AIDS-infected crabs.
Redundant disks are NOT a backup solution; they are merely a data-availability solution.

Are you asking how often hard drives fail or how often data corruption goes undetected across multiple drives in a RAID? Hard drives are going to fail. This is one of the reasons you need RAID if you can't have any downtime.

If you can afford it, consider adding two more drives and do a RAID6.

If you do a 4-drive RAID5, then there is a small chance that a second drive will fail while a failed drive rebuilds. The odds of this are not huge; they are not as big as people make them out to be. That being said, it does happen to a few unlucky people who then become very vocal.

Regardless, with a 6-drive RAID6 you get the storage capacity of 4 drives, and if one drive fails and another fails while your array is rebuilding, you're still fine.

As for the motherboard, it doesn't matter. It really doesn't, beyond the number of SATA ports it has. If you want a 6-drive array then you'll need a motherboard with 6 SATA ports. That's it.

You don't need a motherboard that has "chipset" support for RAID5. There are no motherboards with actual hardware RAID; what you would get is BIOS software RAID, which uses the CPU. These should absolutely be avoided. Standard Linux mdadm software RAID is the way to go. If you use mdadm and your motherboard fails on you, you're fine; all you need is some other motherboard with the number of SATA ports you need.
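
Setting one up is only a handful of commands; a rough sketch, with made-up device names:

# 4-disk software RAID5 (substitute your actual devices)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # so the array assembles on boot
cat /proc/mdstat                                  # watch the initial sync progress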

I have an old Athlon II system with 4GB RAM that works as a 6x3TB RAID6. I get gigabit speeds copying files to and from it. You don't really need that much for a NAS.

So as someone who's looking at getting a RAID setup for managing large amounts of photography and video RAW files, am I better off just going for RAID 1 and keeping my important stuff on a separate backup drive? I've been looking at getting RAID 5, but if what you're saying is true then it sounds like it's not worth it. I mean, how often do drives fail anyway?

That depends on what you're doing with your disks. The more you do with those disks, the shorter their lifespans get due to wear and tear, and most of the failures that cause a drive to fall out of a RAID/JBOD array are caused by mechanical faults. An intelligently planned-out NAS or SAN avoids putting too much load on disks that are not designed to handle the workload being asked of them, and uses redundancy in parallel with external backups to preserve mission-critical data. Moving away from traditional RAID arrays to self-healing or feature-rich parity/striped/mirrored/mixed arrays is also hugely attractive, since RAID has severe limitations on how it handles errors at any step of the read/write stages, among other outdated designs and characteristics.

In all honesty, all hard drives made today by every major manufacturer have such similar reliability figures that choosing one over another purely for its supposed reliability is pointless. What you want to look for are drives with a longer warranty period and a better warranty process. That's why people tend towards Seagate NAS drives: they slap a 5-year warranty on those products compared to their regular drives.

>You don't need a motherboard that has "chipset" support for RAID5
Then you become reliant on support from the OS, and not all OSes support advanced drive configurations out of the box, or as well as chipset- or controller-enabled RAID does. It's definitely possible to use older systems with non-RAID chipsets for a RAID-based NAS, but then you put additional load onto the CPU to handle parity checks and rebuilds, which become catastrophically long if you have to rebuild a parity array after a single failed disk.

There is no other choice for secure storage than mirrored arrays. Anything else is likely to fail during rebuilds. Also, using as few disks as possible will make your data safer: a two-drive array has less chance of failure than a four-drive array, so going with drives as large as possible is recommended. You can Google horror stories of RAID5 rebuilds all day long. Even Linus dipshit tips tried it and got rekt within months. HGST is still GOAT.

Are you serving these files to someone? If not, get an external backup drive and use rsync. If you absolutely can't lose anything, use an offsite backup service like CrashPlan.

RAID 5 isn't bad, but it does not scale well.
It's not a big deal with 3 drives. But companies are moving away from it as arrays get bigger.

From a network drive on a 1GbE network? RAID1 plus backup, definitely. RAID5 generally isn't worth it anymore from a cost standpoint, as greater-than-4TB drives have come down in cost significantly.

> I mean, how often do drives fail anyway?
That depends on what you do with those drives and what workload those drives were designed for. If you have files that absolutely must be kept online and accessible, but also cannot be lost, then you'll need to ask yourself how much those files are worth to you in the event that you do lose them. Are you running a business where losing those files can mean the loss of thousands of dollars? Then you should be investing a lot more than you're investing right now in redundant storage.
Are they personal files that have very little financial cost to you, can be replaced within reason, and don't need to be accessed 24/7/365? A simple RAID1 array with an offsite backup disk will do just fine.

>these niggers think raid is a backup solution
RAID blows for data integrity; how does the drive know which data is good and which is spewing random crap?

Nope, just so I can get increased performance over individual drives without taking out a bank loan to get a bunch of SSDs.

>Then you become reliant on support from the OS

I'm fine with storing my data based on a bet that the Linux kernel will be around 10 or 20 years from now.

>how does the drive know which data is good and which is spewing random crap
That's why I think he should look into self-healing or correcting file systems. I've lost maybe two years of work due to a SAS cable going bad in one of my workstations. Never going back to regular RAID again after that.

But can the other computers in your network access and use the same file system you're using on your Linux servers? If you have any Wangblows or BLACKEDintosh OS machines in your network that need to grab files from a ZFS or BTRFS array, then you're shit out of luck there without resorting to some roundabout method through virtualization or dual-booting different OSes.

That guy's an idiot. Even when/if Linux dies, all the source is available and you can either support yourself indefinitely or do what you need to do to migrate. On the other hand, if you have a problem with a RAID controller and the manufacturer goes under, you're fucked.

HGST is just a brand of Western Digital nowadays. They no longer operate as a separate company. In fact the 8TB WD Reds and 8TB HGST NAS drives look exactly the fucking same, same helium casing and everything. I doubt very much the internals are any different desu.

Toshiba got control of HGST's 3.5" manufacturing facilities a few years ago.

Your best bet is WD/HGST or Toshiba. Seagate can be okay, but more of their models have shit the bed with huge failure rates in the past few years than the other manufacturers'.

Then do RAID0 with a scheduled backup. Make sure your backup program backs files up by checksum and not just straight copy and replace.
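
With rsync that's just the --checksum flag; something like this, with made-up paths:

# compare files by checksum instead of size+mtime; slower, but it catches silent changes
rsync -aH --checksum --delete /mnt/raid0/ /mnt/backup/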

Any recommended backup programs?

You need to lay off posting inane comments and read about samba.

Obligatory reminder that you really want ECC RAM if you're doing software RAID.

> software-level RAID 1
You're probably thinking of GlusterFS or something like that. Rsync synchronization can only realistically run every 5 minutes or so, because it takes time to build and send the list of modified files, maybe more than a minute. Gluster writes the data stream simultaneously.

HGST drives are way more expensive for me (like almost double) so shouldn't they be better? I see MTBF ratings of 2.5 million hours on HGST drives while WD Red only has 1 million hours.

Not OP, but I want to start archiving some multimedia content. Primarily video and maybe some audio as well. The 3.75TB of storage I have on my desktop is nearly full and I can't add more hard drives because there are no more SATA ports.

What I am thinking about doing is investing in a RAID enclosure and some drives. I want my storage to be reliable and I want some redundancy in case of a drive failure, so no RAID 0. I need some recommendations for reliable hard drives, a decent RAID enclosure (as cheap as is reasonable), and the best RAID level for my purposes.

What I want out of the system:
Reliability
Redundancy in the event of drive failure
Easy to add new drives and/or upgrade old ones.

>using RAID5 with more than 3 disks
>willing to spend up to a week rebuilding a failed array and have a greater chance of another drive failing during the rebuild
I'm the idiot? That's fucking rich, kiddo. Good luck successfully rebuilding your greater-than-4-disk software RAID5 array. I'll stick with non-parity software arrays with fewer disks per array and multiple redundancy instead.

You should be telling OP that. Who's to say that other users in his network are knowledgeable enough to use SAMBA, let alone know of its existence? Always cater to the lowest common denominator.

Not any more than when not using any RAID at all. If you're thinking about the ZFS silent corruption cascade, that has also been debunked.

>Linux doesn't have NTFS support
>samba doesn't exist
>file transfer protocols don't exist
>OS X doesn't have ZFS support
>OS X doesn't have NTFS support

If you're calculating parity, you want ECC. No?

ECC won't prevent corruption once that data is no longer stored in memory, nor will it be able to correct such an error.

See >Always cater to the lowest common denominator.
Not everyone out there is as neckbearded as Sup Forums.

How the fuck else are you going to mount a network resource on Windows if not from samba? Like nigga, just add a network drive from Explorer, it's not rocket science.

>ECC won't prevent corruption once that data is no longer stored in memory

Well of course not, but it will at least make sure that the data makes it onto the disk as intended, instead of getting corrupted during parity calculation before it even hits the disk.

If you're using Win10, install Ubuntu (the Windows Subsystem for Linux); Windows drives show up under /mnt. Then crontab -e to set up an rsync cron job on whatever interval you want. OS X and Linux should already have rsync installed.
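
The crontab entry would look something like this; the paths and schedule are only placeholders, and this assumes cron is actually running in your Ubuntu environment:

# nightly backup at 03:00
0 3 * * * rsync -a --delete /mnt/c/Users/anon/Documents/ /mnt/d/backup/Documents/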

OP, don't listen to these anti-RAID memers, RAID is great for what you're doing. You will need to back up the RAID to achieve your goal of never losing data.

Since you're building a 4-drive array, you might consider RAID-10 instead of RAID-5. It is faster and more reliable, at the cost of 1 disk of storage. I would spend a little extra and get 4x 2TB drives in RAID-10 for 4TB storage. But if you want to save money your RAID-5 is fine too.

If your data is corrupted before it hits the disk, that really doesn't have much to do with your choice of filesystem or RAID level, hence my previous statement that it will not help any more than in a non-RAID configuration.
As a general rule, I still think adding ECC RAM is a very good idea, but for reasons unrelated to actual storage.

I've worked in large corporations where the average worker didn't know the difference between their LAN and the internet. Always assume that the end user is dumber than a sack of bricks.

Nigger, let me tell you that ECC memory does jack shit if your SAS/SATA cable or controller fucks up. That data was fine when leaving memory, but fucked as soon as it hit the drives (a bad SAS-to-SATA breakout cable in my case). The only safe method is using a self-healing file system and redundant backup drives, with an aggressive backup schedule for at least one of those drives.

RAID5 makes no sense for his use, but RAID1 + backup fits his needs just fine.
>RAID-10
He is bottlenecked by his 1Gbps connection. He won't benefit enough from a RAID10 array to make it worth it.

>I'm the idiot? That's fucking rich, kiddo. Good luck successfully rebuilding your greater-than-4-disk software RAID5 array. I'll stick with non-parity software arrays with fewer disks per array and multiple redundancy instead.
What the fuck are you talking about? I never said anything about RAID5, you stupid piece of shit.

>You should be telling OP that. Who's to say that other users in his network are knowledgeable enough to use SAMBA, let alone know of its existence? Always cater to the lowest common denominator.
Are you so stupid that you've never heard of a system administrator? Either way, configuration of samba is done server-side and takes maybe an hour with the wiki if you have no idea what you're doing. I set up a SAMBA share on a Linux box when I was 14. Using it from a Windows machine is essentially plug and play. Don't talk about shit you know nothing about. SAMBA is the open-source implementation of Microsoft's SMB protocol. Windows supports it natively.
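
For reference, the server-side part really is only a few lines in /etc/samba/smb.conf plus a user; a minimal sketch with made-up names (the matching Unix user has to exist first):

[nas]
    path = /srv/nas
    read only = no
    valid users = anon

# set the SMB password for that user and restart the service
smbpasswd -a anon
systemctl restart smbd

After that, a Windows box just maps \\servername\nas as a network drive.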

I was talking about home network users, cumskin.
>I set up a SAMBA share on a Linux box when I was 14
Yeah, and I bet your mother, your wife's daughter, and your wife's boyfriend can't do the same when you're not around.

What's the point of making a NAS? Wouldn't a server be better? I mean, all those NAS boxes have shitty underpowered CPUs and RAM; you'd be better off getting a second-hand Xeon or an old i5 and just running a server. If you have it in your living room you could also use it as a second PC to browse your shit directly or whatever.

I was considering getting a NAS and tossing all my HDDs plus some others in there, but just building an old PC and using it as a server would be a better idea, since I may get more bays for a lower price.

You're just being nonsensical at this point. If you're building your own NAS, configuring it is part of the project. None of those technologies burden users of the NAS. How the fuck do you think pre-built, Linux-based NAS operating systems communicate with clients? Fuck, I guess since it's so hard for Windows to communicate with a Linux server, we should just shut down most of the web, because the lowest common denominator doesn't know how to configure a web server that can communicate with their Windows laptop.

It may bottleneck sequential reads/writes, but it still helps a little with random r/w

>that it will not help any more than in a non-RAID configuration

Except that in a software parity RAID configuration the number of operations upon your data in main system RAM before being committed to disk is necessarily higher, thus increasing the chance of a corruption compared to a non-RAID configuration - so of course ECC will help.

NAS means network-attached storage; the hardware and software configuration means jack shit. You could have a NAS with 4 Xeons and 2TB of RAM.

jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

I know, but most pre-built NAS boxes are literally shit, with Atoms or worse.

Jesus fucking Christ on a stick, you've missed my point completely. Just stop.

>the average worker didn't know the difference between their LAN and the internet.
Irrelevant. Adding a network drive is four clicks in File Explorer. Either a system administrator will preconfigure work machines, or the four steps can be posted somewhere. Alternatively, you, the administrator, map a URL to the NAS and set up FTP or something on it. All Joe has to do is go to the link and download whatever he wants. Either way, nothing you've said is a reason not to use Linux on a NAS, and especially not software RAID on Linux. It has nothing to do with it whatsoever; you're just a fucking moron.

>I was talking about home network users, cumskin.
Me too, dipshit. The point is SAMBA configuration is done on the server. Once. Presumably by an administrator who can read instructions. All network users have to do is add a network drive, which takes four steps. There is literally a large button for it in the Explorer ribbon. What the fuck are you trying to prove? No one has to know shit about SAMBA to use it, you stupid piece of shit.

>Either a system administrator will preconfigure work machines, or the four steps can be posted somewhere
And how many homes have their own network admin available 24/7? Exactly. When something goes wrong and the next competent person isn't there to fix it, then what are the end users supposed to do? Like I keep saying, cater to the lowest common denominator.

Now get into your little safespace and sit the fuck down white boy.

>a NAS isn't a server
Holy shit.

RAID 1 is best.

What exactly is your point, retard?

You literally don't have one. There's no reason not to use software RAID in Linux on a NAS because of user interface. It's transparent to the user. They just see whatever interface is configured.

I'm not even sure what you're trying to argue anymore after you've been told off by multiple anons. How exactly do you plan to cater to the lowest common denominator? How can you simplify adding a network drive from Explorer with a Windows NAS?

What the fuck are you talking about? No company is going to have secretary Jill bring a downed server back up, you retard. If something goes wrong server-side, that's tech support's job, and no regular user should even have the ability to fuck with it.

Fuck that carebear shit, striped RAID5 or nothing.

See my earlier post. I meant to say that most NAS units are just crap pre-built shit.

Just to cap off your stupidity, even OS X has native support for accessing a SAMBA share in Finder.

That's just for simplicity and to keep costs down.
Joe Schmoe isn't going to configure a CentOS server just so he can store the family pictures.

Who SnapRAID here?

I'm thinking I may eventually go SnapRAID with OpenMediaVault, maybe something like btrfs for pooling, just so I can throw in spare drives at will without worrying about them needing to be identical.
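
The snapraid.conf side of it is dead simple; a rough sketch with example mount points:

parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2

Then it's just running "snapraid sync" after you add stuff and "snapraid scrub" every so often.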

samefags

Well, it is, buuuut...
- locked bootloader
- a limited set of applications
- dumbed-down BusyBox/Linux
- no support from a vendor = outdated software forever

>all NAS are the same

>realizes he's retarded
>calls samefag
Classic m8!

By the way, the corruption cascade is real for btrfs; I had it a few months ago. I've switched to ZFS since then.

No, you're likely comparing apples to oranges.

HGST enterprise-class drives are the more expensive ones with the higher MTBF. WD has enterprise-class drives with high MTBF as well, likely the exact same as the HGST ones. Consumer-grade HGST has the same 1 million hour MTBF as the WD stuff.

The expensive enterprise-class drives are noisier and use thicker-gauge components internally (which is why they are noisier). But they can still die.

What about Win 7 users?