/hst/ - Home Server / Lab Thread

Post-Christmas Edition.

OP's question to you: I want to build a NAS with $1500, any NAS build guides? Just for Plex, with only a couple of clients transcoding.


Anyways, post your networks, your servers. Blog about your configs, anon.


>I want to build a NAS with $1500, any NAS build guides? Just for plex with only a couple clients transcoding.
kinda depends on how much space and redundancy you're after, no?

I put mine into one of those cheap Rosewill 15-bay 4U cases they sell on Newegg. I had the space, so why not.

I guess 3-4TB at least, with 1x mirrored, so 6-8TB minimum total storage. I found some deals on old tower servers with 2x quad-core Xeons at various speeds and 32-74GB of RAM, so I was considering buying one of those and filling it with drives.

32x 2TB Hitachis off eBay for $1000
2-4 RAID cards for $180
Random server, at most $100
16GB memory upgrade, $30-50
Various power adapters/SATA cables, $150
Whatever's left on fans and zip ties.

>that 1950 not mounted to rails
kill yourself OP

>2-4 raid cards for $180
you clearly don't know what SAS expanders are for, do you?

In RAID 0.

>SAS expanders
For suckers

m.ebay.com/itm/3WARE-8506-12-12-Port-SATA-RAID-Controller-Card-With-Cable-/192051877898?hash=item2cb72f180a:g:sVgAAOSwOtdYT1Mo&_trkparms=pageci%3Ac4c15f7d-cc86-11e6-91f4-74dbd1805bb8%7Cparentrq%3A4279aeda1590a786f3b204d4ffde627e%7Ciid%3A2

>SATA
>not SAS

>12 ports
>when recommending 32 disks

Yes, SATA, for the SATA Hitachis.

12 x 3 = 36 ports > 32 drives

also
>PCI-X
who the fuck even has one of those slots?

>12 x 3 = 36 ports > 32 drives
again, fucking pointless. You get an 8-port SAS card and plug it into a SAS expander. Also, let me guess: you don't even know what STP (SATA Tunneling Protocol, the thing that lets SATA drives sit behind a SAS expander) is, do you? And who the fuck has 3 PCI-X slots? What fucking year is this?

>who the fuck even has one of those slots
people who buy ancient C2Q-based Xeon boxes, probably. Why they do that instead of getting something more recent, I can't imagine.

Look, my hypothetical list of parts was not meant to be some perfect thing, just something that can be thrown together for $1500. You go ahead and try to fuck with SAS for $500.
You can pick up a 64-bit server with 3 PCI-X slots any fucking day of the week; I have 3 collecting dust that I got for free.

All that shit sitting on a carpeted floor.

I wonder what the power consumption for all that crap is...

>was not meant to be some perfect thing
so basically what you're saying is that you were speaking as if you were some authority figure, and since you got btfo, you're now trying to do damage control

>You go ahead and try to fuck with SAS for $500.
12 disk SAS2 shelf for $65 each
ebay.com/itm/HP-StorageWorks-Modular-Smart-Array-20-MSA20-3x-500GB-HD-335921-/380793314095?hash=item58a90cdb2f:g:mMMAAOSwA3dYXD7q

An LSI 16-port SAS2 external HBA for $60, which can be flashed with IR firmware if you want RAID 1/0
ebay.com/itm/LSI-PCI-E-6GBs-SAS-HBA-CONTROLLER-CARD-H3-25379-01E-SAS9201-16E-/252699039850?hash=item3ad609586a:g:ptwAAOSwA3dYCkfp

I'm too lazy to search for RAID 5/6-capable cards given how many rebranded LSI ones there are, but you can find them at similar prices. This shit isn't hard. There are tons of it from decommissioned equipment. You're just recommending consumer-grade shitboxes.

>some authority figure
I just made a list, buddy. Never said to do it.

I could go all out and comment on the shipping costs of 3 of those boxes and how that setup is going to take up a ton of space. But eh, I honestly didn't know SAS had gotten that cheap.

The main problem you'll face with this, though, is your limits on RAID. With my suggestion you can just use ZFS with 4 redundancy disks; with this, not so much.
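
(For concreteness, "4 redundancy disks" across 32 drives maps onto something like two 16-disk raidz2 vdevs; a minimal sketch, with hypothetical device names:)

import subprocess

# Two 16-disk raidz2 vdevs: 2 parity disks per vdev, 4 across the pool.
# Device names are made up; use stable /dev/disk/by-id paths in practice.
disks = [f"/dev/disk/by-id/ata-HITACHI_2TB_{i:02d}" for i in range(32)]

subprocess.run(
    ["zpool", "create", "tank",
     "raidz2", *disks[:16],
     "raidz2", *disks[16:]],
    check=True,
)
# Usable capacity: 2 vdevs x (16 - 2) x 2TB = 56TB, the figure argued over below.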

>ebay.com/itm/HP-StorageWorks-Modular-Smart-Array-20-MSA20-3x-500GB-HD-335921-/380793314095?hash=item58a90cdb2f:g:mMMAAOSwA3dYXD7q
fuck me, it should have been an MSA60, not a 20. Anyway, they're still readily available on eBay for cheap.

>muh space
that's what racks are for. If you want 32 disks, it will take up space.

> With my suggestion you can just use ZFS with 4 redundancy disks
wow, you're getting desperate now: recommending RAID cards but now wanting to do softraid. And anyway, I linked an HBA; if you really want to do softraid, you still can.

I could grasp at straws all day.
Regardless, it's two different, yet mostly within-budget, ways to get 56TB. Which is fucking pleb tier.
It's like video cards: they're all shit until you get into the $200 range.
Well, storage is all shit until you get into the $5000 range.

not really all that much, since they're made to be running 24/7

>Well storage is all shit until you get into the $5000 range.

>not really all that much, since they're made to be running 24/7
lol no, I have a dual E6-2660v2 and a Catalyst 3750E. With a 10% CPU load and 1% load on the switch, power consumption is about 450 watts. Those old shitboxes are going to be even more. From the bar graph on the UPS, you can see it is drawing about 1kW. That UPS is certainly not big enough for it all, and I'm sure that anon has overloaded it before.

>E6
E5

What's your electric bill like each month?

I went with a Dell R510 off eBay for $300 delivered. I'm running FreeNAS with the Plex server plugin; works awesome.

>apple
This explains EVERYTHING

>I don't know what ESXi is

You're using a Mac; it's irrelevant whether you have an ESXi box at that point.

>you have a storage subsystem over $5k
>you're not allowed to have a $2k laptop for web browsing, remote desktop and ssh

>Building
>Get everything together
>Push power
>Surge protector goes crazy
>quickly turn off
>Take apart and try with just mobo
>Okay
>Screw back on CPU
>cooler's screws are stripped for some fucking reason

It's been over 3 weeks and I have all the pieces just sitting here. I have no motivation to proceed.

You guys sure do care about people's power bills. Even with everything running at 100% in the picture, total power pulled is around 2500W, which where I live is about $7 for 24 hours. I hardly ever have everything running at the same time, though.
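
(Quick sanity check on that claim - the implied electricity rate is plausible:)

# 2500 W for 24 hours at $7/day, as stated above.
watts = 2500
hours = 24
kwh = watts * hours / 1000      # 60 kWh per day
rate = 7.00 / kwh               # implied price per kWh
print(f"{kwh:.0f} kWh/day at ${rate:.3f}/kWh")  # ~$0.117/kWh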

There are better options than a Mac.

Best rack

you can't even mount the ears properly.

On the switches? That's so the door doesn't fuck them up.
What is proper in a 4-post rack made of 2x4s, anyway?

I wish there were cheap-as-shit used servers in my country.

why does that matter?

He has a rack made out of wood and a stack of calling cards from Las Vegas prostitutes. Something tells me he doesn't give a shit.

What's the best way to disconnect backup drives and reconnect them as easily as possible?

Set it up so it powers them down after like 5 minutes of inactivity?
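
(On Linux that idea is a one-liner with hdparm; a minimal sketch, with a hypothetical device path:)

import subprocess

# hdparm -S takes multiples of 5 seconds: 60 x 5s = 300s = 5 minutes idle.
# The device path is made up; use a stable /dev/disk/by-id name in practice.
BACKUP_DRIVE = "/dev/disk/by-id/ata-EXAMPLE_BACKUP_DISK"
subprocess.run(["hdparm", "-S", "60", BACKUP_DRIVE], check=True)

# -y spins the drive down immediately, handy right after a backup finishes.
subprocess.run(["hdparm", "-y", BACKUP_DRIVE], check=True)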

I have a question, and I guess this thread is the most likely one to actually get an answer. I bought a used Dell PERC H310 and flashed it with IT firmware, because I intend to use the drives on it for software RAID and as such wanted the card to provide direct access to them. It's running on up-to-date Debian stable and seems to work normally.

Is there some sort of recommended testing procedure to follow in order to make sure the HBA is working properly before I add extra drives to my RAID and risk losing data?
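
(One approach, sketched below: check SMART pass-through, run the drive's own long self-test, then a destructive surface test to push sustained I/O through the card. Assumes smartmontools and badblocks are installed and you have a scratch drive you can wipe; the device path is hypothetical:)

import subprocess

DRIVE = "/dev/disk/by-id/ata-SCRATCH_TEST_DISK"  # hypothetical scratch drive

# 1. Confirm the IT firmware passes SMART data through at all.
subprocess.run(["smartctl", "-a", DRIVE], check=True)

# 2. Kick off the drive's own long self-test (runs on the drive itself).
subprocess.run(["smartctl", "-t", "long", DRIVE], check=True)

# 3. Destructive write+verify pass over the whole surface; errors here
#    implicate the drive, the cable, or the HBA. Do NOT run on data you keep.
subprocess.run(["badblocks", "-wsv", DRIVE], check=True)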

Wut, you bought a hardware RAID controller so you can have a software RAID array? Software RAID is, and has always been, a terrible idea if you care about the data you are storing.

lol wut. Hardware RAID is a performance penalty that doesn't justify itself in 2016, when there are decent filesystems and ECC memory around.

That said, why the fuck would you buy a RAID controller when you want an HBA?

Obviously software RAID implementations such as ZFS and BTRFS are fine, but when 95% of people say they are using software RAID, they mean the shitty onboard implementations on their motherboard or in Windows, which is a terrible idea if you care about the integrity of your data.

>i dont understand why raid cards exist

Because this thing works as an HBA with the proper firmware installed and can be found for very little, and the only alternatives I could get where I live were various Marvell-based cards, which seemed to have some potential issues with SMART under Linux. The posts I read were a few years old, so the Marvell controllers could work perfectly well nowadays, but I wasn't able to find any confirmation one way or the other. The issues I read about weren't just wrong readings or something like that, but the controllers fucking up and dropping drives (until reboot). That obviously would've been very bad in RAID, and I didn't feel like taking the chance. Getting one of these or something similar was the cheapest way to add more SATA ports with a controller that at least appeared to be reliable.

wut

I've got no money nor use for such a huge server rack.
I want to host a simple webserver on Tor.
An Orange Pi PC with a 30GB TF card should be sufficient, right?

For a very low traffic server it would be fine.
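
(A minimal sketch of that setup, assuming Tor is installed on the Pi; the torrc paths are the Debian defaults and the port numbers are arbitrary:)

# torrc side:
#   HiddenServiceDir /var/lib/tor/hidden_service/
#   HiddenServicePort 80 127.0.0.1:8080
# Any server bound to localhost:8080 is then reachable at the .onion address
# tor writes to /var/lib/tor/hidden_service/hostname. The stdlib server will
# do for a simple low-traffic site:
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Bind to loopback only; tor relays traffic in, nothing listens publicly.
HTTPServer(("127.0.0.1", 8080), SimpleHTTPRequestHandler).serve_forever()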

Offloading things from the CPU, you tard.

Well yeah, there are only so many Tor users.

This is my current server:
pcpartpicker.com/list/xYdPFd

I recently upgraded my desktop from a 4690K to a 4790K, so I may move the old CPU into my server. I will have to get another motherboard for that, though.

I also got another 4TB drive for Christmas, and I am going to install a 5-drive hot-swap bay. I want to set up a ZFS array at some point.

wut

Would this be a good mobo for a RAIDZ3 setup?
I asked about it in a /pcbg/ thread and got this response:

>WEEW LAAD
>I have that board. You better be using a JBOD card for all those hard drives. The SATA controller driving the extra 6 SATA3 ports is utter shit and thermal throttles under moderate load. Speaking of thermal throttling, they use a thermal pad to stick the heatsink on the CPU itself, but the pad is abnormally bad at transferring heat, so you need moderate airflow blowing through that heatsink (in the direction of the fins and RAM, not top-down). Also, ASRock's IPMI is shit; don't use it if you don't want to deal with headaches and broken interfaces.

>Don't get me wrong, that 8-core C2750 Avoton is powerful enough to handle multiple requests at once as a file server; it's just that ASRock cheaped out on some of the neater additions to their board and made it more of a disappointment to use.

Would I be better off getting a different board and using a SATA controller? I want this to be as low-power as possible.

>ASrock
>Not SuperMicro

>Atom
>Not Xeon-D
You could skip the Atom and buy a SuperMicro Xeon-D: 16 SAS2/SATA3 ports from an LSI chip plus 4 SATA3 from the chipset, 10GbE on board, and no paying out the ass for ECC UDIMMs - buy common RDIMMs instead. They sell them in 2-core to 16-core versions.

Just get a Supermicro board and be done with it.

I'm sorry, the 2-core version is a Pentium D; it still supports ECC RDIMMs and 10GbE, though. Only 25 watts, too.

Forgot link:
newegg.ca/Product/Product.aspx?Item=N82E16813157475

I'll look into Supermicro, thanks.

supermicro.com/products/nfo/Xeon-D.cfm

scroll down to the bottom for the bare boards.

>newegg.ca/Product/Product.aspx?Item=N82E16813157475
It's a 20-watt Atom chip, versus 25 watts for the 2-core Pentium D, 35 watts for the 4-core, or 45 watts for the 8-core. And you'll have an actual LSI chip instead of the shitty onboard controllers with limited queue depths. Twice as many memory and PCIe slots, and 10GbE (which is on the CPU die). If you're dead set on an old Atom chip, SuperMicro still makes those too.

Thanks.

Still need to buy 4 more 6TB Reds to fill up the top one.

No reason really.

>HP-StorageWorks-Modular-Smart-Array-20-MSA20
Opinion discarded...

These things are ancient and use SCSI interconnects between server and chassis. MSA 60 is better bait...

Could this case hold 11 3.5" HDDs?
newegg.ca/Product/Product.aspx?Item=N82E16811112459

There seems to be space for 10 3.5" and 1 2.5" at the bottom, but I'm wondering if it could hold a 3.5" instead.

If by 3.5 you mean 2.5, yes

Do Mini ITX cases with space for more than 10 3.5" HDDs even exist?

Probably not, which is why they specifically mentioned it only supporting ten 3.5" drives and one 2.5" drive.

An alternative that might be worth considering is buying an external SCSI/SAS enclosure and loading it up with 5.25" to 3.5" hot-swap bays, then using another computer with a JBOD controller to connect to it via mini-sas.

how do you even connect 10 drives on mITX? Even mATX has only 4 ports.

Get a mobo with 6 SATA ports plus a SATA controller with six more.

Should I look into a used server for my home server needs, or is that overkill? I will be using it mostly as a media server, with plans to also set it up as a Nextcloud server. I am hitting the limitations of my Raspberry Pi 3 as far as connectivity goes.

The servers posted in this thread are definitely overkill beyond what a basic home server needs.

Although if you really are interested in enterprise server gear and you have the money, go for it. You'll learn a lot of cool things.

What's the best price to performance for server boards? Supermicro? Any other good things from that company?

>Probably not, which is why they specifically mentioned it only supporting ten 3.5" drives and one 2.5" drive.
Are there 8TB 2.5" HDDs?

>An alternative that might be worth considering is buying an external SCSI/SAS enclosure and loading it up with 5.25" to 3.5" hot-swap bays, then using another computer with a JBOD controller to connect to it via mini-sas.
Would this work with a RAIDZ setup?

Supermicro is probably one of, if not the, best manufacturers right now for server boards. You really can't go wrong with them.

They also make server/workstation cases, power supplies, add-on cards, network switches, and lots of other cool enterprise gear.

Can't really speak to price/performance, although Tyan is another server motherboard manufacturer with a good reputation. Might be worth checking out.

>Are there 8TB 2.5" HDDs?
I believe 4TB 2.5" drives do exist; however, they generally run at slower 5400 RPM speeds.

>Would this work with a RAIDZ setup?
It should work just fine; there's really nothing super special about the setup. It's literally just a box with lots of 5.25" bays, a PSU, and Centronics 50-pin expansion ports for whatever connection type you want.

Thanks for the input. Was wondering what others thought about the brand. I have a dual-socket Supermicro board that's been working flawlessly, as well as a 2U case. While the case and stock fans aren't top of the line in terms of soundproofing, it's functional. Soundproofing is kind of a luxury feature in enterprise gear anyway.

I deal with Supermicro products on a pretty regular basis, there's really not much bad to say about them and rarely do I have issues.

Call me a shill but they really do have a slick product selection and I wouldn't hesitate to buy from them after working with their stuff.

>Call me a shill but they really do have a slick product selection
No, I've had no problems with any of the Supermicro hardware I've owned either, so I guess it's safe to conclude it's a good brand.

Anything to do about loud-ass fans in 1U/2U servers like HP ProLiants? I can't just unplug them, because then the server beeps at me.

Seriously, these things are a goddamned jet engine.

youtube.com/watch?v=TXwdQszP6mo

Seriously listen to this fucking shit

I had no idea it was this bad when people said rackmount servers were loud as fuck

If you want quiet you get a 4U or a tower server.

Then they could be powered on and destroyed if something happened.

>buy supermicro board
>constantly beeps on startup, no way of turning it off
>board dies after the first week

Supermicro FTW!

Are they a proprietary plug or are there regular fan headers?

Here is my bing bang bad boy. Got FreeNAS on it ATM but having too many problems. About to install CentOS 7 on it and rebuild. Will be using ZFS too, with 5x 2TB WD Reds. Xeon E3-1220 v3 and 16 gigs of memory. Will be using it for Plex, device backups, and some virtualization.

>something happened
Something happened

>MSA 60 is better bait...
Read further down; I said I fucked up and should have linked that instead.

In 1U you're generally fucked, although I did manage to replace the fans in my Cisco ASA 5510 without it overheating, albeit CPU usage never goes above 10%.

2U ProLiants are quiet as fuck.

Old - replaced the router and removed all the figures and stuff.

DHCP + DNS (Pi-hole w/ DNSCrypt), and running a DNS obfuscator because of the stupid UK IP Bill (github.com/superkuh/snoopers-obfuscator) - of course it won't do much, but it's pretty fun knowing any databases kept will have a load of garbage in them (toy sketch of the idea below).

Emby server, VPN, Transmission, Icecast2 (Spotify), and backups of configurations for applications on my public servers in the DC.

It can handle a lot more, but I don't really need much more than it's currently doing.
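
(A toy version of the obfuscation idea mentioned above - resolve throwaway hostnames on a jittered timer so logged DNS history fills with noise; the linked tool is the real, smarter version:)

import random
import socket
import string
import time

TLDS = [".com", ".net", ".org"]  # made-up pool, purely for illustration

def random_hostname() -> str:
    label = "".join(random.choices(string.ascii_lowercase, k=random.randint(6, 12)))
    return label + random.choice(TLDS)

while True:
    try:
        socket.gethostbyname(random_hostname())  # the answer doesn't matter
    except OSError:
        pass  # NXDOMAIN still lands in upstream logs, which is the point
    time.sleep(random.uniform(30, 300))  # jittered interval between lookups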

SunFire 1U servers are quiet as fuck, especially on start. Try those.

FreeNAS with 1GB of ECC memory per TB of data, with a minimum of 16GB for UFS.

Past that, just keep rolling on the drives.

Or get a nice QNAP and leave the hardware and OS issues to people who do it all day.
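
(That rule of thumb as arithmetic - a community guideline, not a hard floor:)

def recommended_ram_gb(storage_tb: float, minimum_gb: int = 16) -> int:
    # ~1 GB of ECC RAM per TB of pool, never below the stated minimum.
    return max(minimum_gb, round(storage_tb))

print(recommended_ram_gb(8))   # 16 - the minimum wins for small pools
print(recommended_ram_gb(56))  # 56 - for the 56TB builds argued over above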

Are Xserves good?
I'm talking the '09 ones that run Xeon X5500-series chips and DDR3, of course.

I just wonder because I got to experience a G5-based Xserve, and it was damn near silent.

Are the '09 ones silent as well?
Do they really need proprietary drives if I run ESXi?

>dat Gaeon
Sexy gunpla.

You might have figured out something was wrong and fixed it when it started beeping.

Or, you know, just be dumb.

My server's pretty basic. It's housed in an ATX mid-tower 9-bay case and, along with my cable modem and switch, is kept in the basement. Most core devices are wired (2 streaming media devices, 1 desktop, 1 media workstation). The rest are wireless (2 laptops, 1 tablet, 1 phone, 1 TV). Client PCs are backed up automatically by the server. The server is backed up to a NAS and to an external HDD.

Server: Windows Home Server 2011
Opteron 170, 2.0 GHz
4GB RAM
1TB HDD (60GB os/871GB for client backups)
4TB (2x2TB Raid 0) - Movies/Music
6TB (2x3TB Raid 0) - Data
320GB - OS System image
Originally I was going to set up a RAID 5 array, but nixed that because of performance and the long rebuild times if a drive died, plus the risk of data loss if another drive kicked on me during said rebuild. But all in all I'm happy with it. The server's been running without major issue for several years now.

Why the fuck RAID 0? You're nuts.

At least do a spanned volume, then if 1 drive fails, you can get half your stuff back.

>raid 0

anon, what are you doing?

RAID 0 is probably fine when you have backups. You don't really need 99.99% uptime when you're running a home server.

Haven't read the thread yet, but I'm sure it's like the others: just a bunch of anons bickering over whether to roll your own or just purchase a NAS. Here's my two cents: Synology is the way to go.

>This is what anti-Apple anons actually believe.
You are the cancer ruining my board.