Discuss.
RAID 10 vs RAID 01
HDDs should not be wasted on RAID
Both are memes.
RAID 6/60 is where it's at
>wasted
You've never had to recover data, have you? Even with daily backups, redundancy is key
RAID 10 is always the same as or better than RAID 01
In that particular configuration, they're the same.
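To make the "10 is never worse" claim concrete: with two random disk failures, RAID 10 (a stripe over mirrored pairs) only dies if both failures land in the same pair, while RAID 01 (a mirror of two striped halves) dies whenever the failures hit both halves. A minimal Python sketch; the 8-disk count and the disk numbering are made up for illustration:

```python
# With two random disk failures: RAID 10 = stripe over mirrored pairs,
# RAID 01 = mirror of two striped halves. 8 disks, exhaustive enumeration.
from itertools import combinations

N_PAIRS = 4                          # 8 disks total
DISKS = range(2 * N_PAIRS)

def raid10_dies(failed):
    # RAID 10 dies only if both members of one mirrored pair are gone.
    return any({2 * p, 2 * p + 1} <= set(failed) for p in range(N_PAIRS))

def raid01_dies(failed):
    # Disks 0..3 form stripe A, disks 4..7 form stripe B (the two are
    # mirrored). One dead disk kills its whole stripe, so the array dies
    # if both halves take a hit.
    hit_a = any(d < N_PAIRS for d in failed)
    hit_b = any(d >= N_PAIRS for d in failed)
    return hit_a and hit_b

cases = list(combinations(DISKS, 2))
print(f"RAID 10 killed by {sum(map(raid10_dies, cases))}/{len(cases)} cases")  # 4/28
print(f"RAID 01 killed by {sum(map(raid01_dies, cases))}/{len(cases)} cases")  # 16/28
```

4 of 28 two-disk failures kill the RAID 10 versus 16 of 28 for the RAID 01, so once the first disk has died, 10 is strictly the safer layout.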
> t. brainlet underage faggot who can't comprehend the importance of redundancy.
Once again, both are absolute trash; use ZFS.
>using hardware raid in fucking 2 - 0 - 1 - 7
-6
>not using a backup RAID card for your main RAID card, which is fully mirrored properly across disks.
>not having a backup RAID card in its packaging in case either of them dies
raid 600
>spending $1000 before buying a disk
also, enjoy your bit-rot
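The bit-rot complaint boils down to this: a plain mirror can see that two copies disagree, but has no idea which one is right. ZFS-style filesystems keep a checksum per block so a scrub can tell. A toy sketch in Python, with sha256 standing in for the per-block checksum and made-up data:

```python
# Toy illustration of why checksums matter for bit-rot: plain RAID can tell
# two mirrored copies disagree, but not which one is right. Filesystems like
# ZFS store a checksum per block, so a scrub can pick the good copy.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

good = b"important data" * 64
stored_sum = checksum(good)          # written alongside the block, ZFS-style

rotten = bytearray(good)
rotten[100] ^= 0x01                  # a single flipped bit on one mirror copy

for name, copy in [("copy A", bytes(good)), ("copy B", bytes(rotten))]:
    ok = checksum(copy) == stored_sum
    print(f"{name}: {'OK' if ok else 'BIT-ROT, rebuild from the good copy'}")
```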
>not using 21st-century solutions such as ZFS or Ceph
>hardware raid
>I don't know what MPIO is
stay retarded user
>recommending a pleb-tier meme FS that can't scale across multiple servers
>Implying someone is asking for a multi-server application on Sup Forums.
Anyway, listen to the poster: just use Lustre like a real man.
>his array can't withstand a server going offline
laughingsluts.tif
>striped mirrors can't scale across servers
lol no
>Windows Server
What's your home country?
US
> being homosexual
kys
>i wish i had that much storage
>58TB drives
>wastes so much on RAID that under 20GB of it is actually usable
>I don't understand what free space means in a storage pool
stay eternally butthurt, user; I likely have more SSD capacity than you have HDD capacity
>SMB
Hardware RAID, like hardware graphics or hardware sound processing, is and always will be king, especially in server/production environments.
So?
You do realize RAID cards just use standard processors which happen to have SAS PHYs attached? They don't have some special ASIC which magically makes things faster. My LSI 3108 is just a dual-core PowerPC G4 with eight 12Gb/s SAS channels.
Yes, of course, but it's a hell of a lot better than having that load dumped on the CPU and eating system memory. Hardware RAID is and always will be the superior solution.
enjoy your data loss when the controller breaks
>So?
How does it feel accessing your networked storage over the most garbage and bug-ridden proprietary remote storage protocol ever to be thrown together by Microsoft for the purpose of monkey-patching DOS?
The load of RAID on a modern CPU is incredibly negligible, and software RAID implementations get updates and bug fixes that hardware RAID controllers generally do not.
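For scale on "negligible": single-parity RAID is just an XOR across the data disks, which any modern CPU chews through at close to memory bandwidth (the kernel's md driver uses SIMD kernels for this). A rough Python sketch; the 7-disk, 1 MiB stripe is an arbitrary example, not a benchmark of any real array:

```python
# Single-parity RAID is just XOR across the data disks. Timing here is
# illustrative only, not a benchmark of any real controller or md array.
import os
import time

STRIPE = 1 << 20                              # 1 MiB per data disk
disks = [os.urandom(STRIPE) for _ in range(7)]

start = time.perf_counter()
parity = 0
for d in disks:
    parity ^= int.from_bytes(d, "little")     # XOR the whole stripe at once
parity_bytes = parity.to_bytes(STRIPE, "little")
print(f"parity over {len(disks)} x 1 MiB: "
      f"{(time.perf_counter() - start) * 1000:.2f} ms")

# The same XOR rebuilds a lost disk: parity ^ (all survivors) == missing disk.
rebuilt = parity
for i, d in enumerate(disks):
    if i != 3:
        rebuilt ^= int.from_bytes(d, "little")
assert rebuilt.to_bytes(STRIPE, "little") == disks[3]
print("disk 3 rebuilt from parity + survivors")
```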
And how is it garbage or bug ridden?
Hardware RAID cards get firmware updates all the time.
Trusting LSI etc.
Literally Realtek-tier. Also, no data corruption protection.
And what is wrong with LSI?
RAID + UPS + backup (offline) will protect you from pretty much anything. For the really anal-minded, take the above configuration and add a second backup drive to the mix (though depending on how much data you've got, a single drive large enough may just not exist yet). This second drive can be stored offsite; it's portable and self-contained, so if your house burns down, etc., at least your data, all of it, is safe. By all of it, I mean data + server system image + all client backups. Also, don't forget to keep a separate copy of your backup application in case you need it.
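One step worth automating is checking that the offline copy actually matches before you rotate it offsite. A minimal sketch in Python; the paths are placeholders, not anyone's real layout:

```python
# Verify an offline backup tree matches the source before rotating it
# offsite. SOURCE and BACKUP are placeholder paths; point them at your
# data and the mounted backup drive.
import hashlib
from pathlib import Path

SOURCE = Path("/srv/data")          # hypothetical live data
BACKUP = Path("/mnt/offsite/data")  # hypothetical offline/offsite copy

def digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

bad = 0
for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    dst = BACKUP / src.relative_to(SOURCE)
    if not dst.is_file() or digest(src) != digest(dst):
        print(f"MISMATCH: {src}")
        bad += 1
print("backup verified" if bad == 0 else f"{bad} files need re-copying")
```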
closed source and shit utilities
>muh open sores
It isn't if you're willing to sign an NDA with them; that's how Areca/HP/Dell/Lenovo/etc. make their own branded cards.
>utilities
Go with a third party then, just as with GPUs; my Areca has its own network card and web UI.
If the arrays have the same number of drives, wouldn't 01 be better? In 10, if one drive dies the whole array is marked bad; in 01, if one drive dies, only that drive is marked bad. Is that right?
Best to use something else either way imo.
Isn't the main appeal of software RAID that you're not locked into a specific card? If you change hardware later you can still use the same software, but if you go with hardware RAID you're stuck with whatever it has.
No, the main appeal of (single-server) software RAID is for poorfags who don't want to drop $500-$1500 on a RAID card.
>Redundancy in the same fucking machine
you're referring to SMB1, which has been dead and buried for a looooong time if you don't have to deal with NT < 6.0
>you're referring to SMB1
Anyone who still uses SMB1 should be shot.
Even with SMB3, weird quirks and performance issues frequently crop up. It's far less reliable than any of the alternatives.
>i still have no sources
You only need redundancy if you are unfortunate