Server build

thinking about building a server. what does /g/ think is the best server case size?

19" 8HE

well how the fuck should we know, you haven't said what you want to use it for. This is like asking "what kind of car should I buy?", you have to say whether you want to commute to work, go off-roading, win drag races, or what have you.

Rack or free-standing? Hard drive number requirements?

Not 1U. For home use 3U/4U is best because they will fit 120mm fans.
You should consider looking at old servers first though depending on what you need. Old hardware from enterprise use loses value fast and still is competitive with the consumer market.

i'm a big fan of 5u+
i keep my server in the house and i don't have a dedicated AC blasting it so it has to be able to operate under reasonable temperature and decibel levels, and 5u+ allows for 120mm fans more commonly than smaller chassis
140mm fans are preferred but not as commonly implemented in home server chassis/racks

this is the perspective of a budget-minded tinkerer, not that of a high-budget businessman

that's a big server

for you

would it hurt if I took it apart?

If you don't have a server rack just buy an eatx board and a regular full tower. It'll do fine and be cheaper and lighter than a steel or aluminum chassis. You really only need a full length server chassis if you plan on having a fuckload of hotswappable disks, or if you plan on getting more servers in the future.

it would be extremely performant...

>>>/reddit/

4U

Get this hothead outta here!

The best server is the one that isn't a whitebox.

underrated post

What country do you live in where "performant" is a word?

People who try to stop linguistic change fight many battles and, in the long run, win none of them.

English speaking ones, Pajeet.

fairly sound advice user.

the only time you need a good rack (pic related) is a small/medium business and up. the exception, as user stated, is where you may want to fill a server with disks a la Backblaze or some other SAN-type setup.

most home users/small businesses can easily get by on off-the-shelf vanilla boxes or components. the main benefit to this is low running cost over the years; you try replacing a server-grade motherboard for less than $100.

i was a network/server admin and personally would never have commercial server stuff at home now as it's just too noisy, expensive to run and takes up space.

for the home, small and quiet rules.

>the only time you need a good rack (pic related)
you have good taste

>you try replacing a server grade motherboard for less than $100.
it's not that bad if you buy a few generations behind. there's some really nice LGA1366 stuff going for peanuts these days

>i was a network/server admin and personally would never have commercial server stuff at home now as its just too noisy, expensive to run and takes up space.
>for the home, small and quiet rules.
well you can always trade off size and noise. if you're willing to take up 4u of rack space you can build a pretty quiet machine. But you're right that there's not much point in doing that unless you need moar coars, a dozen drives, etc.

>the main benefit to this is low running cost over the years
Haha yea except when your unsupported bullshit breaks down and you have no SLA. Better stock your own parts.

you dont build servers, you buy them

fucking retard

Why would you take all the fun out of it?

>best server case size?
Blade (any) > 2U (R730XD) > 8U (DL980G7)

Just use a normal case, fuck racks.

5/5

>Not 1U. For home use 3U/4U is best because they will fit 120mm fans.
No they don't. I have a SuperMicro 743 and it only takes 80mm fans. I can't think of any SM chassis which takes 120mm. The only time I can recall seeing 120mm fans in a white box chassis was in a Chenbro.

clean that piece of shit you disgusting excuse for a human being

how the fuck can you live like that?

you fucking disgust me

>i wish i had a workstation

use a god damn normal case, i don't get why people bother getting racks if it's for a home server unless you're some idiot that has 10 1TB HDDs

>lives in filth and squalor
>thinks others want to too

get a hold of your life before it's too late

>i wish I had a bunch of SSDs and HDDs which could get filled with dust

>I'm going to tell others how to live on a Chinese pottery bbs

>unless youre some idiot that has 10 1 TB HDDs
I literally just bought a rackmount case and put 10 3TB drives in it.

guess now I have an excuse to buy a rack to put it in

you better not have bought more than 3 from the same place

What's the cheapest rack case which can handle 8 GPUs? Supermicro ones are really expensive. Is there a cheap chink one somewhere?

Don't buy a cheap case.

>Supermicro ones are really expensive
I'm pretty sure SM doesn't make any chassis which can take 8 double-slot GPUs outside of a blade system, and even then it really isn't the same. And if you can afford to buy 8 current-gen GPUs, I'm pretty sure you can afford one of their chassis. What the fuck is your plan for 8 GPUs?

Surely it's just a bunch of PCI riser cards in a steel box.

Really just lots of big matrix calculations which can be parallelised on a gpu reasonably well. It needs double precision so I was using R9 390s because they are cheaper than teslas.

If they can be parallelized across multiple GPUs why do they have to all be in a single box?

Easier than having several separate towers/ racks and connecting them with gigabit. More suited to a few cpus and more gpus.

>Easier
No it isnt, those 8 cards alone would draw 2kW and require a 208v/240v power due to the 16A max draw you can put on IEC C19/C20 connectors. And few people have spare outlets >200v outlets at home. I've seen high power chassis which use multiple 1U PSUs to supply non-redundant power, but even then in a home environment you'd have to do retarded things like running extension cords to outlets on other circuits in other rooms. This ignores what would have to be done to accommodate the load for AC units.

Before you start this grandiose plan of having some uber-OpenCL rig, you ought to consider how you plan on powering and cooling it. Unless you plan on renting rack space in a data center.
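
Quick back-of-envelope in Python on the power point, as a follow-up. The per-card and overhead wattages are rough assumptions for an R9 390 rig (not figures from the thread), and the 80% continuous-load derating on the breaker is the usual rule of thumb:

# Rough circuit math for the hypothetical 8-GPU box discussed above.
GPU_W = 275          # assumed draw per R9 390 under load
OTHER_W = 300        # assumed CPUs, board, drives, fans
EFFICIENCY = 0.9     # assumed PSU efficiency

wall_draw = (8 * GPU_W + OTHER_W) / EFFICIENCY   # ~2.8 kW pulled from the outlet

for volts, breaker_amps in [(120, 15), (120, 20), (240, 16)]:
    amps_needed = wall_draw / volts
    amps_usable = 0.8 * breaker_amps             # continuous-load derating
    status = "ok" if amps_needed <= amps_usable else "overloaded"
    print(f"{volts}V / {breaker_amps}A circuit: need {amps_needed:.1f}A, "
          f"usable {amps_usable:.1f}A -> {status}")

Same conclusion as above: it doesn't fit on ordinary 120V household circuits, but a 240V/16A (C19/C20) feed handles it.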

those 3 and 4u rack cases are great to build desktop pc's in. They fit in rackmount carrying cases too for mobility.

>Better stock your own parts.
why?
that's the beauty, parts can be had from anywhere at a moment's notice.
SLAs cost money and if you're on a budget then an SLA will come at the cost of server performance.

Sure, if you're an enterprise customer then SLAs are useful, but for a home server it's a complete and utter waste of money.

the fact is that when a server breaks down it will cost money to repair, so it's your decision whether you pay up front or react if/when it breaks.

This is what separates the armchair admin from the datacentre tech. People forget power+cooling until something trips, blows, or catches fire.

Not the commenter you replied to, but I have heard of people getting dedicated wiring for home datacentre use. In britbong land we have 240v at the wall, which is great for home setups.

Yep, at my last job one of my responsibilities was DCIM. I have 208V in my apartment because it has oversized AC units in the bedrooms. I pulled out the AC unit and replaced it with a reasonably sized one, and then plugged my SmartUPS 3000 into the 208V outlet.

>those 3 and 4u rack cases are great to build desktop pc's in.

4u cases are terrible for 90% of builds. Even at 4u, they're too thin for most double-radiator water coolers or even a hyper 212. I've found that 4u is only decent at Xeon builds with stock coolers.

That's fucking disgusting.

If you're doing storage, 2U with 2.5" hot-swap bays. You can get a ton of storage in there.

If you're doing GP-GPU work, 3U or 4U.

If you're a cheap ass and want the cheapest thing you can work on with parts you have (not just buying the right components for the build): 3U. They're dirt-cheap folded sheet metal with sharp edges, but just big enough that the poor construction makes them wobble every time you move them.

Nicer 1u and 2u cases cost a lot and come with great airflow, fans, layout, etc.

jelly

Not really. I have a dual G34 system and Xeon E5 system in storage. Deciding whether to sell them or not as I use them for fuck all. Pointless power sinks.

I might keep the Xeon system since it's got decent audio for an audio workstation.

J E L L Y
E
L
L
Y

You really need to clean that shit and get over the fact people aren't impressed and have better and cleaner systems.

What's the chips in the Opteron and Xeon systems?

This movie was fun.

>What's the chips in the Opteron and Xeon systems?

The Opteron has a pair of 6220s. For the Xeon I have two chips: an E5-1620 and an E5-2660. The 1620 has a higher clock rate, but it's a 130W chip. The 2660 has MOAR CORES! but is clocked lower and is only 95W.

The Opterons, from memory, have a TDP of 115W or something, but usually they're around 80W during everyday use.

I moved into my own flat, so most of my computer stuff is still stuck in storage at my mum's house. I'm going to have to hurry up and either sell it or move it in here before she decides to toss the lot in the bin.

The Xeon system is only a single chip board too.

Not OP here, but also looking to pick up a homelab server (something for RHCSA studying/virtualization/dicking around with OpenStack)

What about tower servers? I live in a small apartment and don't want to deal with the noise and energy bill of rackmount. I hear used Dell T20's are good price/value, but I also see a lot of people picking up a couple used Xeon E5-2670's cheap from Ebay and building around that. Thoughts?

>What about tower servers? I live in a small apartment and don't want to deal with the noise and energy bill of rackmount. I hear used Dell T20's are good price/value, but I also see a lot of people picking up a couple used Xeon E5-2670's cheap from Ebay and building around that. Thoughts?

Here. I built my systems in a Lian Li PC-A17B. It's 5.25 inch bays right down the front. So you can remove the face plates and put in hot swap caddies, like the ICY Box 4 in 3 or 5 in 4. I had 4x4 in 3 ICY Docks in mine with 16x2TB drives. For no other reason than because I could afford it and I wasn't paying the electric bill. I used it for FreeBSD and ZFS and for storing all my movies (200+ DVDs dd'd to *.isos and nearly 300 CD albums ripped to flac. It barely made a dent in the storage).

It sounded like a small nuclear reactor and produced as much heat. When all the drives were working, the vibrations could be heard on the other side of my mum's house.

I bought all my components from tech liquidators on eBay. I think I got the Xeons from a seller called ViralVPS. All the rest came from sellers rdc-outlet and network-servers.
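
To put "barely made a dent" in rough numbers, a quick Python estimate. The per-DVD and per-album sizes are assumed averages, and the pool layout is an assumption for illustration (e.g. two 8-disk raidz2 vdevs), not something stated above:

# Back-of-envelope for a 16x2TB ZFS box like the one described above.
DVD_GB, ALBUM_GB = 7.0, 0.4                         # assumed average iso / FLAC album size
media_tb = (200 * DVD_GB + 300 * ALBUM_GB) / 1000   # roughly 1.5 TB of movies and music

DRIVE_TB = 2
# Assumed layout: two 8-disk raidz2 vdevs -> 6 data disks per vdev.
usable_tb = 2 * 6 * DRIVE_TB                        # ~24 TB before ZFS overhead

print(f"media: {media_tb:.1f} TB of ~{usable_tb} TB usable "
      f"({100 * media_tb / usable_tb:.0f}% of the pool)")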

Thanks m8

>Xeon E5-2670's cheap from Ebay and building around that.
They're not cheap.

Is OpenStack for clustering?

I'm looking to expand my storage capabilities, especially in terms of making sure that a drive failure won't cripple me, but can't decide what to settle on. Currently own 2x 1TB drives and one 2TB drive, with some redundant backup spread over them.

I'm considering a RAID5 using 3x 2TB drives, but it seems that option isn't supported on Windows 7. Does that mean I would need to buy a RAID controller card or something of the sort? Any recommendations, especially in a reasonable price range?
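
For reference, the RAID5 arithmetic behind that question: n drives give (n-1) drives of usable space and survive exactly one drive failure. A minimal sketch in Python; the helper is purely illustrative, and the 2TB figure just matches the drives mentioned:

def raid5_usable_tb(num_drives, drive_tb):
    """RAID5 spreads one drive's worth of parity across the array,
    so usable capacity is (n - 1) * drive size and any single drive can fail."""
    assert num_drives >= 3, "RAID5 needs at least three drives"
    return (num_drives - 1) * drive_tb

print(raid5_usable_tb(3, 2))   # 3x2TB -> 4 TB usable
print(raid5_usable_tb(4, 2))   # adding a fourth 2TB drive -> 6 TB usable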

>They're not cheap.
You can get an E5-2670 from £70 in the UK on eBay right now just doing a quick check. And a pair for £135. They were over a grand new.

Depends on what you need it for. Blades for maximum compute density. 1U for maximum compute density on a power budget. 2U for compute with up to 8 drives. 3U for up to 16 drives. 4U for up to 24 drives.

Scan.co.uk are still selling them and they're £1,236 new.

They're selling for ~$70-$80 a piece on Ebay, that's pretty cheap. Dual-socket mobos are a bit pricier though, nothing under $300 that I can find:

blog.brianmoses.net/2016/07/building-a-homelab-server.html

OpenStack is basically an open source version of AWS (to explain it really shittily). You can use it for clustering in the sense that you can just add more servers for more compute/storage.

Upgrade to Windows 10 or build a FreeNAS box.

Forgot to say... if you do build a system and plan to go headless, make sure you get a board with IPMI. My Opteron board has it and it's great, my Xeon board doesn't and it fucking sucks.

That way, if you go headless and need to do shit in the BIOS or at a real low level, you can do it remotely instead of having to attach a monitor, keyboard and mouse just to change a BIOS setting every time.
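
To make the IPMI point concrete, a minimal sketch of driving a board's BMC remotely from Python by shelling out to the stock ipmitool CLI. The host, user and password are placeholders; "chassis power", "sensor list" and "sol activate" are standard ipmitool subcommands:

import subprocess

# Placeholder BMC connection details (assumes IPMI-over-LAN is enabled on the board).
BMC = ["ipmitool", "-I", "lanplus", "-H", "192.168.1.50", "-U", "admin", "-P", "changeme"]

def ipmi(*args):
    """Run one ipmitool subcommand against the remote BMC and return its output."""
    return subprocess.run(BMC + list(args), capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # is the box powered on?
    print(ipmi("sensor", "list"))               # temps, fan speeds, voltages
    # ipmi("chassis", "power", "cycle")         # hard power-cycle a hung machine
    # BIOS access goes over Serial-over-LAN, which is interactive:
    #   ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme sol activate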

Definitely gonna watch out for that. I hear almost all mobos have Intel AMT support which ostensibly functions like IPMI, but I've never actually seen it in use

>2660 has MOAR CORES! but is clocked lower and is only 95w.
Ehh. All my servers at work have 40 cores. I'm pretty far past the "more cores the better" mindset. You still have so much of a bottleneck when cores need RAM on a different memory controller that you end up needing to pin processes for optimal performance. AMD is incapable, Intel has done fuckall to work on the issue, and everyone just relies on the kernel scheduler and more clever compilers to make it bearable. More cores are useless for big work until they stop emulating 90s NUMA architecture on the inside.
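
A minimal sketch of the process pinning mentioned above, using the Linux-only affinity call in the Python stdlib. The core range is an assumption for illustration; check lscpu or numactl --hardware for the real node layout on a given box:

import os

# Assumed for illustration: cores 0-7 sit on NUMA node 0 of a two-socket machine.
NODE0_CORES = set(range(8))

print("allowed before:", sorted(os.sched_getaffinity(0)))

# Pin this process (pid 0 = ourselves) to node 0's cores so its threads stay
# next to the memory controller they allocate from instead of hopping sockets.
os.sched_setaffinity(0, NODE0_CORES)

print("allowed after: ", sorted(os.sched_getaffinity(0)))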

>They're selling for ~$70-$80 a piece on Ebay, that's pretty cheap
And then you need a board. And then you need a SAS controller, a backplane, a NIC (because most Supermicro boards only come with 2), a case, etc. Whiteboxes aren't worth it when you consider how old the 2670 is.

If you can put something comparable together for the same price range of the build I linked I'd love to see it.

For 1k you can probably find an R720 or DL380 G8 with about the same specs, if not better.