I wonder if anyone here has experience with the HP P410 and the IBM M1015 (LSI)

I need to be able to connect more drives and really do not know which one to go with; both of those have been recommended to me so far.

I do not know if the P410 needs the battery or whether it can go without it.
Also, from what I have heard the P410 cannot do HBA mode, so each drive has to be set up as a single-drive RAID 0.
I can get one for around 35€ with 512MB cache.

The M1015 is recommended if flashed to IT mode, but some people complain about the PCIe x8 connection.
One of those is 60-120€ here, so they are more expensive.

Otherwise I wondered about their performance and whether one of them is known to be faster than the other.
That is why hearing from someone who has actually worked with both would really be appreciated.

Other than that, I already have a MegaRAID 8308ELP lying around here that I could flash to IT mode.
I have never used that one so far, as it is old and was gifted to me when I went to get my hands on some used hardware for cheap.
I think it only supports drives up to 2TB and might actually be really slow.

With the additional connections I would finally be able to hook up an SSD as a cache for the system; I have a barely used Samsung 840 Evo 120GB lying around.
Can I use that, or is it unsuitable for the task?

>SSD for cache
>Samsung 840 Evo 120GB
It has laughable write endurance and would be inappropriate to use as a cache. Get an actual enterprise-class SSD.

I wish I had that amount of storage.
Thank you for your advice, in that case the Samsung goes into one of my HTPCs.

Use the servethehome forums; they have a deals section where people post used enterprise-class gear for cheap.

The 120GB 840 Evo's endurance is so bad that Samsung doesn't even seem to publish a rating for it. The 120GB 850 Evo is rated at 75TB (samsung.com/semiconductor/minisite/ssd/downloads/document/Samsung_SSD_850_EVO_Data_Sheet_Rev_3_1.pdf), and I can only assume the 840 is worse. The Seagate 600 Pros in my pic are rated for 2,600TB each.
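
To put those numbers in perspective, here's a rough back-of-the-envelope sketch (the TBW ratings are the ones quoted above; the 20GB/day write volume is just an assumed workload, and a cache device would likely see far more than that):

# Rough SSD lifetime estimate from the rated endurance (TBW).
# TBW figures are the ones quoted above; the daily write volume is an assumption.
def years_of_life(rated_tbw_tb, writes_per_day_gb):
    total_writable_gb = rated_tbw_tb * 1000
    return total_writable_gb / writes_per_day_gb / 365

print(years_of_life(75, 20))      # 850 Evo 120GB: ~10 years at 20GB/day
print(years_of_life(2600, 20))    # Seagate 600 Pro: ~356 years at 20GB/day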

Also, regarding the rest of your question: the P410 needs the battery if you want to use the write cache safely. As for comparing the cards, look up which controller chips they use and find the queue depths those support. SATA disks support a queue depth of 32, so multiply the number of disks by 32; ideally your controller should support a queue depth of at least that.
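
For example, a quick sketch of that queue-depth rule of thumb (the per-disk figure of 32 is from above; the ~600 controller queue depth for the LSI SAS2008 on the M1015 is just the commonly quoted figure, so treat it as an assumption):

SATA_QUEUE_DEPTH = 32  # per-disk NCQ depth mentioned above

def required_queue_depth(num_disks):
    return num_disks * SATA_QUEUE_DEPTH

# e.g. 8 disks on one M1015 need 256; the SAS2008 is commonly quoted at a
# queue depth of around 600, so it has headroom.
print(required_queue_depth(8))    # 256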

That is kinda insane!
Even the Samsung Pro series looks like garbage compared to that.
Does this mean the memory module for the P410 is not needed?
I will look into what you recommended.

>Does this mean the memory module for the P410 is not needed?
The BBU isn't needed, just be sure to have the cache set to write-through and NOT write-back. The cache will still help performance a little bit, though, since you said all the disks will be single-drive RAID 0 for use with ZFS.

>Does this mean the memory module for the P410 is not needed?
Sorry, I was talking about the BBU. I'm not sure if the card will function without the memory module, but I can't imagine it would be that expensive; used DDR2/DDR3 ECC UDIMMs are cheap. If you do use it, make sure you have the BBU; otherwise, as I said, change the cache mode to write-through.

You'll want to get the M1015 flashed to IT mode. There's a reason it's the go-to card for FreeNAS. You don't want to run anything like the P410 with that fucked single-drive RAID 0 setup; it'll make ZFS near impossible to configure properly.
As for the MegaRAID, I'm not even sure it still has driver support.
You don't need an SSD for an L2ARC cache unless you're seeing low cache hit rates. By default, RAM is used as the cache in FreeNAS, which is much faster than an SSD. Often, putting in an SSD as the cache drive actually lowers performance, so I'm willing to bet you won't benefit from adding it. No worries there.

>You don't need an SSD for an L2ARC cache unless you're seeing low cache hit rates. By default, RAM is used as the cache in FreeNAS, which is much faster than an SSD. Often, putting in an SSD as the cache drive actually lowers performance, so I'm willing to bet you won't benefit from adding it. No worries there.
I don't use ZFS at all, but generally with tiering, using an SSD will not lower performance. The RAM cache will still be used, and when it is full the SSD starts being used. Frequently accessed files will also be stored in a read cache. I'm not sure if any of OP's cards support tiering, or if ZFS does. My Areca 1883ix-24 supports SSD tiering, although I prefer not to use it due to my data access patterns.

As I understood it, in FreeNAS you had to specify a volume to be the main cache, so if you wanted to take advantage of the SSD you would have to tell the system to use it, compromising the higher speed of the RAM.
That could be something specific to ZFS, however.

No clue; as I said, I don't use ZFS/FreeNAS. I do know that decent RAID cards support SSD tiering, however, and things like VMware VSAN do as well.

If you don't want to buy a massive PSU just for HDD spin-up, keep the original M1015 firmware and configure staggered spin-up, as every drive needs roughly 20+ watts just to spin up.
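
A quick sketch of why that matters for PSU sizing (the 20W-per-drive spin-up figure is from above; the 5W idle figure and the 12-drive count are assumptions):

SPINUP_WATTS = 20   # rough per-drive spin-up draw quoted above
IDLE_WATTS = 5      # assumed per-drive draw once already spinning

def peak_disk_watts(num_drives, staggered):
    if staggered:
        # worst case: the last drive spins up while the others are already idling
        return SPINUP_WATTS + (num_drives - 1) * IDLE_WATTS
    return num_drives * SPINUP_WATTS

print(peak_disk_watts(12, staggered=False))   # 240 W just for the disks
print(peak_disk_watts(12, staggered=True))    # 75 W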

The M1015 is the cheapest solution.

Why enterprise gear for consumer use? If you are loaded with cash, sure, but otherwise fuck that.

You can't partition an L2ARC, and 128GB for a tiny pool is a waste of a decent SSD.

>why enterprise for consumer goods
Because consumer parts have capacity limits; that box has 160GB of RAM. Also increased reliability: I've suffered bad memory and had it not bring the box down thanks to ECC, and the SSDs have supercaps in them so the data in the write cache isn't lost if power is.

Seems like I will skip the P410, because the lack of HBA mode is too much of a drawback.

From what I remember the SSD is only used to support the RAM, but it is suggested because apparently ZFS likes RAM a lot.
Recommendations start at 1GB of RAM for every physically existing GB of drive space.


-----
I seem somewhat unable to find the queue depth for the 8308ELP.

>1GB RAM per 1GB of storage
No, it's 1GB per 1TB.
>recommended
Unless you want serious problems with performance, it's required. Another 8GB on top of the 1GB/TB is recommended, i.e. for a 16TB system you'll need a minimum of 16GB of RAM, with 24GB recommended (see the quick calculation below).
Also, you need ECC RAM; that's really not optional with ZFS.
How many drives are you connecting? The M1015 supports up to 8, and you can always add another card, which is usually the most cost-efficient method. Also bear in mind that the drives need to be the same size for a zpool.
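
The sizing rule spelled out as a quick calculation (only the figures already quoted above are used):

# ZFS RAM rule of thumb from above: 1GB per TB of storage minimum, plus another 8GB recommended.
def zfs_ram_gb(pool_tb, base_gb=8, gb_per_tb=1):
    minimum = pool_tb * gb_per_tb
    recommended = minimum + base_gb
    return minimum, recommended

print(zfs_ram_gb(16))    # (16, 24): 16GB minimum, 24GB recommended for a 16TB system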

>the m1015 supports up to 8
It should support a lot more than that if you use an expander.

Thank you for the correction.
Currently I have 5x 2TB drives and 1x 8TB Seagate Archive that I used as a backup while setting up the system completely anew.
I have 4-6 more 2TB drives lying around because I cannot connect them at the moment.
I hope to get some more later.
I just got my hands on a Xeon 1230 and 4x4GB of ECC RAM, which will replace the i7-2600 and 4x4GB of non-ECC RAM.
Honestly speaking, I got the Xeon and the RAM for free, which is kinda great.

Just go for ZFS on Linux. If you use a RAID card and it breaks in 10 years, you might not be able to get the same replacement model and could lose your data, whereas with software RAID like ZFS the software can easily be redownloaded.

If you're going hardware RAID, you really should get one with a battery. Otherwise, the only things that actually matter are the number of drives and the type of RAID you need.

Using ZFS on Linux with an HP H220.

works just fine

Obviously meant as affirmation that ZFS is good and you should just use an HBA, not a RAID card.

Not actually planning on hardware RAID with ZFS; I just need something to hook up more drives.

It's just that the amount I can spend on the controller is limited for now, so I cannot invest 300-400€ just for it.