What's the best RAID for 3 x 7200rpm HDDs of any size for mostly read-only (game scenery) use. I want SPEED. Will back up to an external (probably 6 or 8 TB). Intel 760p M.2 NVMe for OS and programs (3.2 GB/s read, 1.6 GB/s write).
>imagine NVME in raid


Back up to external HDD I mean, the SSD is just boot drive

>What's the best RAID
raidz3

RAID0 is the only way to go if you only care about I/O throughput with just three disks.

All of the other RAID setups for a three disk array are just trade-offs between fault tolerance, regeneration speed and I/O throughput.

Protip: RAIDs are built around fault tolerance and data availability. Performance is only a secondary concern.

RAID0, but you're tripling your chances of data loss with that.
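
Back-of-envelope sketch of that tripling (the 2% annual failure rate is just an assumed figure; real AFRs vary by model and vendor):

```python
def array_loss_prob(p, n):
    """Probability that at least one of n drives fails in a period,
    which is fatal for a RAID0 stripe (no redundancy)."""
    return 1 - (1 - p) ** n

p = 0.02                        # assumed per-drive annual failure rate
one = array_loss_prob(p, 1)     # 0.02
three = array_loss_prob(p, 3)   # ~0.059
print(three / one)              # ~2.94, i.e. close to triple
```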

> game scenery
Since when does that require more than a single boring SSD, ever?

Game load times are CPU-bound (clockspeed/IPC) with a single, modest SATA 2.5" SSD.

RAID would actually make things worse because of CPU and RAID overhead.

RAID really only belongs in servers and workstations that need I/O throughput above all else.

Since flight simulator scenery started using Google Earth. Entire continents are available for download, and it's not uncommon to have 20 TB of scenery.

Scenery is loaded in the background. In a flight sim, if you're doing 8000 mph in a rocket, the map needs to be able to somewhat keep up.

The map that is 20 TB in size.

So raid 0 was it?

What year do you guys think this is? Just curious

>wants speed
>uses 7200RPM drives
Eh

15k at least

RAID5 does give a read speed boost.
RAID 1 can also give a speed boost if the mobo supports it (mine doesn't; it writes to both but only reads from one), but then you only need 2 drives.

What's the price and warranty for a 4 TB 15k drive compared to a 4 TB 7200rpm one?

>Since flight simulator scenery started using Google earth and entire continents are available for download and it's not uncommon to have 20TB of scenery
Not my cup of tea, but loading scenery "as far as you can see" with the appropriate LoD shouldn't really saturate even one HDD, the same way you don't need anywhere near 1 GBit to watch Google Earth.

Basically, even if you used 20 TB data sets, why couldn't you do this off a RAID5 array of boring 10 TB HDDs?

> RAID would actually make things worse because of CPU and RAID overhead.
It makes access latencies worse (ballpark twice as big), but the CPU overhead of Linux mdadm RAID isn't really anything to worry about on a modern PC. Throughput can also increase though, so it's a question of whether you mainly load big files for what you're doing.

> RAID really only belong in servers and workstations that need I/O throughput above all else.
And machines that don't want to go offline or lose data when a drive fails, at least with the redundant RAID levels like RAID6/5. Those apply to almost "everyone" in theory, because nobody is safe from drives failing.

RAID 0 and RAID 5 have identical read performance.
RAID 1 has slightly better read performance if you have a decent controller.

The differences being so small, it's really just a choice between capacity vs. redundancy.
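
For three equal disks the capacity side of that trade-off looks like this (a toy calculation; RAID 1 here is taken to mean a mirror across all three):

```python
def usable_tb(level, n, disk_tb):
    """Usable capacity for common RAID levels on n identical disks."""
    if level == 0:
        return n * disk_tb           # stripe: all raw capacity usable
    if level == 1:
        return disk_tb               # n-way mirror: one disk's worth
    if level == 5:
        return (n - 1) * disk_tb     # single parity costs one disk
    raise ValueError(level)

for lvl in (0, 1, 5):
    print(f"RAID{lvl}: {usable_tb(lvl, 3, 4)} TB usable of 12 TB raw")
```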

>RAID 1 also can give a speed boost, if the MOBO supports it (mine doesn't. Writes to both but only reads from 1)

Just don't use integrated RAID.

Pure software RAID is generally much better.
Even the built-in RAID 1 that comes with Windows 7 has excellent performance: not just reading from 2 disks at once, but also queuing and balancing the drives efficiently.

>Pure software RAID is generally much better.
You mean generally much worse.

>better performance
>easier to configure
>isn't dependent on specific hardware

The last part alone is reason enough to stay the fuck away from integrated RAID: if your motherboard breaks you are fucked, good luck hunting down an identical model.

Software RAID is faster, but it has CPU overhead, which was more of an issue back when CPUs were relatively weak and single-core.

It is mostly a non-issue with the modern multi-core CPUs that are commonplace.

Software RAID is more flexible and is the only way to go if you want to set up a nested RAID.

Is dual channel ram the equivalent of raid 0?

Integrated mobo RAID (aka "fake RAID") also uses the CPU.

What case is that? Looks great.

Rosewill RSV-S8

Good to see that windows users are still stuck in 1998.

raid 0 if you give no fucks about the data loss potential.

Try to keep it backed up, but other than that this is the best.

Some games, ones with shitty to no compression, follow a linear curve in how they respond to SSDs; see Fallout, which manages to benefit from SSDs due to Bethesda's incompetence.

But other than those, most programs bottleneck in the 400-500 MB/s read speed range, where once you hit that mark it's not going to get faster no matter what you do.

Have 3x 15krpm SAS drives in RAID 5 on a hardware RAID controller with its own cache etc.

Still slower than $50 chink SSD.

>What's the best RAID for 3 x 7200 HDD of any size for mostly read-only (game scenery) use
Depends on size; if they're >2 TB, RAID5 is good enough for data you don't mind losing. 3 drives won't do much in terms of speed though. Even if you're doing sequential reads you can expect at most triple the performance of one drive.
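
Rough ceiling on that, assuming ideal striping and a hypothetical 180 MB/s per 7200rpm drive (both numbers are assumptions, not measurements):

```python
def raid0_seq_read_mbps(n_drives, per_drive_mbps):
    # ideal case: sequential reads scale linearly with drive count;
    # real arrays lose some of this to controller/stripe overhead
    return n_drives * per_drive_mbps

print(raid0_seq_read_mbps(3, 180))  # 540 MB/s best case, still well
                                    # below a decent NVMe SSD
```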

You might as well buy a 4TB SSD at that point. 15k drives are stupid expensive when you go up in capacity.


the best raid is technically the one with 1x1 (parity x storage)

so if you have 8 drives, you would want 4 parity drives, if you have 10 drives, you would want 5 parity drives, etc
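
Storage efficiency makes clear what that 1:1 rule costs (a simple ratio, leaving the rebuild-risk argument aside):

```python
def efficiency(data_drives, parity_drives):
    """Fraction of raw capacity that stores actual data."""
    return data_drives / (data_drives + parity_drives)

print(efficiency(4, 4))  # 0.5  -> the 1:1 scheme spends half the drives on parity
print(efficiency(6, 2))  # 0.75 -> RAID6 on the same 8 drives
```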

>imagine raid in NVME

I'm doing this later today (RAID 0 with a 950 Pro and a 960 Evo), but I'm doubting any read/write speed increase will be worth the access time increase.

There's nothing great about that; it's basically just an excess of parity for presumably no reason.

What if one of your drives fail, and you're resilvering it after adding in the new drive and you lose all of your parity drives while doing it?

Tell me buddy, what if?

If I have 20 drives (of which 3 are parity) in an already fairly large array and one fails, I only need to manage to install and resync the replacement drive before I have *another 3* drives fail.

It isn't exactly a high risk situation.
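
The odds in that 20-drive example can be sketched with a binomial tail (the 5% annual failure rate and one-day rebuild window are assumed figures):

```python
from math import comb

def prob_at_least(k, n, p):
    """Probability of >= k failures among n drives, each failing
    independently with probability p over the window."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

p_day = 0.05 / 365                   # assumed 5% AFR scaled to one day
print(prob_at_least(3, 19, p_day))   # on the order of 1e-9
```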

From a protection and capacity standpoint, RAID 5 is still good. Even more so when your volume size hits 10 TB or more. For example:
RAID 1 = 2x 10 TB drives (10 TB usable) @ $280.00 per drive ($560.00)
RAID 5 = 5x 4 TB drives (16 TB usable) @ $90.00 per drive ($450.00)
So with RAID 5 not only is it cheaper, but you gain 6 TB more capacity than what you need (10 TB), which is more room for expansion.
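
Checking that arithmetic as cost per usable TB (same prices and sizes as quoted in the post):

```python
# 2x 10 TB mirrored vs 5x 4 TB with one drive's worth of parity
raid1_cost, raid1_tb = 2 * 280, 10
raid5_cost, raid5_tb = 5 * 90, (5 - 1) * 4

print(raid1_cost / raid1_tb)  # 56.0  $/TB
print(raid5_cost / raid5_tb)  # ~28.1 $/TB, roughly half
```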

wtf are you even talking about? that's not how RAID works holy shit

you don't resync anything, you replace the faulty drive with a working drive and you wait for the RAID system to fill it up with the data which the faulty drive had and then it's done.

Since you lost 1 drive, that means you have 2 parity drives during the RAID rebuild process; if you lose those two drives during the process, you lose all of your data.

Rebuild and resync are essentially the same process, the only difference is whether you're doing it to a new drive.

>you don't resync anything
You just don't know typical RAID terminology, it's most definitely a resync:
linux.die.net/man/8/mdadm

Has been a resync since before mdadm existed.

> since you lost 1 drive, that means you have 2 parity drives during the RAID rebuilding process
Sure.

> if you lose those two drives during the process, you lose all of your data
False. It's still three drives you would need to lose before your data is gone.

It's not that exact. Resync also is used on the initial array creation, as you can see in mdadm's man page I just linked.

Of course others call it initial sync or (re)build or other things. No need to be pedantic though, it's pretty obvious what's meant anyhow.

If I want ~14 TB of storage and a backup on top, what's the best way to go about it? 3 TB drives are best price-wise; is RAID 5 backed up onto RAID 6 or something an option?

Take into account price of 2 NAS boxes, 4-bays ones are significantly cheaper than 6+bays.
I'd get two sets of 4x6TB drives from different brands in RAID5 each.
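
A quick sizing helper for that (assumes single-parity RAID5 and identical drives):

```python
from math import ceil

def raid5_drives(target_tb, drive_tb):
    """Drives needed to reach target_tb usable: enough data drives
    to cover the target, plus one for parity."""
    return ceil(target_tb / drive_tb) + 1

print(raid5_drives(14, 6))  # 4 drives -> 18 TB usable per box
print(raid5_drives(14, 3))  # 6 drives -> 15 TB usable
```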

Don't get different brands; that whole cascading-failure thing is bullshit. Get the same brand so they all have the same read speed, since WD Reds are 5400 rpm and Seagate IronWolf are 5900 rpm.

I mean 4 of one brand and 4 of another.

Ohhhhhkay, that makes sense