/hlb/ - Homelab/Homeserver general

OP is looking at getting a home server and so wants a general thread about them!


Save up and buy a proper rack, for a start.

Poor Thinkpad.

Imagine if an imageboard were filled with generals made by retards who can't think for themselves.

Got a pretty much endless supply of decommissioned stuff from work; I just have to get the energy to pick out what I want to take.

What is this?

Get a rack, 24U if you can find it. You really won't need more than two servers assuming they're X5500 Xeons or newer. Keep power consumption in mind since they'll be running 24x7 (not sure what the newer ones draw, but the old PowerEdge 2950s were a constant 300W). If you've got the money to build from scratch, build a couple using 2U chassis and the Xeon D-1500 series processors.

Acts as a storage server, also runs stuff in KVM.

Thinking of upgrading to a 2P board but I'd have to redo cooling and it might be a tight fit in the case.

trade?

Is a Raspberry Pi (or ODROID) connected to an external drive less power hungry than a normal computer?

Is it even a good idea?

I've been thinking of getting one when I get a house. Is it okay to just throw it in a closet?

Yes.

No.

If I were him I'd just put some actual rack mounting bosses in there and be done.

Thread reminder that if you own a home server, you're compensating for a small dick.

Bunch of garbage

What's the benefit of having a private server if you already have obscene amounts of hard drive space on your computer and it's your only device?

Why not?

I'm setting up a torrent server and assuming it will work well enough given a minimal internet connection and the speed of the external drive.

Home servers are great. Mine is currently a WHS 2011 box running an old Opteron 170 / 4GB RAM combo pushing a little over 8TB of storage. Used for client PC backups as well as media streaming and central file storage.

At some point down the road I'm gonna upgrade the CPU and RAM.

Bought an HP MicroServer a few months ago. Not a bad investment. It runs Nextcloud, Plex, UniFi and various other things.

Is the Banana Pi M1 good for a cheap torrent box?

>Cortex-A7 Dual-core
>native SATA
>native Gigabit Ethernet

Less power consumption, and having a second computer at hand, which I regret not thinking of earlier.

You could try for a dust-free environment, for a start. Then move on to temperature control.
Stacking the UPSs under the servers is not a good idea.

Sounds good on paper

>Stacking the UPSs under the servers is not a good idea.
Because?

What happens here
here
and here
besides wasting a lot of electricity?

I pay my own electricity bill so I don't care.

+1

Guys, Comcast is switching my apartment complex over from a central modem to our own modems in the living room. Then they're giving each roommate a smaller device that seems to be hooked up to our cable hookup.

What the fuck is this? I'm scared it'll affect my homelab shenanigans. Apparently they gave us public IPv6 addresses too. Wtf.

I'm also really drunk so sorry if this doesn't make sense.

No answer.

>Stacking the UPSs under the servers is not a good idea.
UPSes go at the bottom of the rack because of weight loading.

Heaviest items always go to the bottom of the rack.

>Heaviest items always go to the bottom of the rack.
No shit, and that stack of R710s is a fuckload heavier than those tiny UPSes.


I'm in love.

That's not OP's picture. It was a random one because I want a home server. It's going to need to be low power, though. I have no space or use for a full rack.

Not putting them under the servers is not about weight. It's about heat. It's about EMI. It's about safety.

What should I do with pic related?

I got it for free, it's not rackmount (it's in a mid tower server case) and the mobo has dual NICs.

I know the Xeon 3075 is just a C2D, so I'm thinking of buying cheap HGST refurb datacenter drives and turning it into a NAS, or throwing a cheap $35 R7 240 in it so my roommate doesn't have to play Skyrim on his PS3 anymore, the poor bastard.

An Xserve G4 and a G4 Cluster Node. Arcane indeed.

NAS for sure.

I've heard someone mention custom home rack servers. Anyone know about these?

The shared bus would be a concern on the Pis, imo. I'd look at some of the better SBCs out there that are more powerful than the Pi line. They might not suffer the same issues, though they do have smaller communities.

RPi 2 here running Deluge.
No issues with the shared bus. The only minor thing is the 100Mbit NIC.
Of course, if you can get a better SBC, then take that one. But in general there's nothing wrong with the Pi.

For torrents and file serving, a Pi is plenty. If you're transferring large amounts of data, then it will be a little slow given the network speed.
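To put rough numbers on that, here's a quick back-of-the-envelope sketch (the 80% efficiency figure is an assumption; real throughput depends on protocol overhead and, on the Pi, the shared USB bus):
[code]
# Rough transfer-time estimate for a 100 Mbit/s NIC.
LINK_MBIT = 100      # nominal link speed, megabits per second
EFFICIENCY = 0.8     # assumed real-world efficiency after TCP/SMB overhead

mb_per_s = LINK_MBIT * EFFICIENCY / 8   # ~10 MB/s effective

for size_gb in (1, 10, 50):
    minutes = size_gb * 1000 / mb_per_s / 60
    print(f"{size_gb} GB -> ~{minutes:.0f} min")
# 1 GB -> ~2 min, 10 GB -> ~17 min, 50 GB -> ~83 min
[/code]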

Valid points. If you're a casual home user it's plenty fine.

Yes. It's what I'm doing for now.

But when I move, I want to get a proper home server going.

I want to build a home server cluster but I have no experience with this. What should I learn to do it effectively?

My goal is to have a system that lets me randomly add non-volatile memory of any kind at any time. Drives of any type, manufacturer, model, capacity and speed. I just insert it into the server and the system starts using it.

My research led me to CEPH object storage but I just don't know how to manage this. What hardware do I need to do this? It seems I need at least 3 servers. Can I virtualize them? There are so many blanks in my knowledge, I'm not confident in doing anything

I also want to build a high-performance LAN but I just don't know what kind of hardware I need to look for. When I think top-of-the-line networking I think fiber optics

stuff

300w constant is only about $5 a month

>all this autism ITT
Holy fuck.

All you ever need for your autismal home server is an HP thin client. Deal with it.

You see that DOM SSD? Those are not made to be written to a bunch.
Had some at work, left the write lock off on the Win7 Embedded install, and it killed the SSD in a couple of months.

Of course not, it's just a boot device.
>win7 embedded
>stock
Found your problem. Reinstall WES7 with your own template, so it will be effectively the same as Win7 Ultimate.
>and it killed the SSD in a couple of months
Depends on the SSD. T610 and newer thin clients got DOM SSDs with TRIM and wear leveling, plus you need to optimize your system properly to NOT kill it with an excessive amount of writes.
And if you're so scared of the DOM SSD dying, you can replace it with any half-slim SATA SSD, including a half-slim SATA to mSATA adapter plus any mSATA SSD (preferably Intel).

>Thin clients, as servers

If we're going by this autism, you're going to say an actual HP MicroServer is overkill for anything.

Because it is, unless you're talking about a NAS rather than an actual home server.

No you fuck, you don't go fuck around with a work PC like that.

And literally that specific DOM, the Apacer ones. Those are shit.

I've used mine with Debian installed on it, 24/7 for about 8 months, and it was fine. I guess you're just retarded and either you got it used (and already worn) or you haven't disabled swap, hibfile and other shit.
Kill yourself.

What about a NAS then?

helmer.sfe.se/


Anon, if you don't know when object storage should be used, you shouldn't even try.

>You see that DOM SSD? Those are not made to be written to a bunch.
If you have one that isn't shit, they are. My Supermicro SATA DOMs are rated for one entire drive's worth of writes per day.
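If anyone wants to see what a rating like that works out to, here's a quick sketch (the capacity and warranty period are example numbers, not the actual Supermicro spec; check the datasheet):
[code]
# Endurance estimate from a DWPD (drive writes per day) rating.
capacity_gb = 64       # hypothetical SATA DOM capacity
dwpd = 1.0             # rated drive writes per day
warranty_years = 5     # hypothetical warranty period

tbw = capacity_gb * dwpd * 365 * warranty_years / 1000
print(f"~{tbw:.0f} TB written over {warranty_years} years")  # ~117 TB
[/code]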

You're a tard who doesn't even know what thin clients are for.

thin clients are for whatever you want them to be
faggot

I want a few 1U servers, a switch, and a rack. I'm thinking about making it happen next month. The only problem is I don't have anywhere to put it except my closet, pretty much. I know if I keep it in there it will get hot as fuck. wat do?

Then what should they be used for? From what I read it's exactly what I want. If not object storage, then what do I actually want?

Object storage is for large scale-out installations. I used to work for an object storage vendor, and our entry-level deployments generally consisted of 15 servers with 10GbE. And it wasn't just an "insert any shitty drive you want" affair either.

>wat do?
Move out of your mother's basement

I'm not sure what technology I need to use if not this. Can you think of any?

ayy

It's Theo de Raadt's basement

It may just be me, but I would run DHCP/DNS/maybe WSUS all on one VM.
None of those really require a lot of resources, and they can easily be run together.

Any reason why RAID (hardware or software) wouldn't work? I know Windows Storage Spaces allows you to add disks of mixed sizes to the same pool and still use all their capacity. And then throw ReFS on top for protection from bit rot. It needs to be done in pairs, though.

serverfault.com/questions/770472/mixing-disks-of-different-sizes-in-a-storage-spaces-pool

ZFS probably has something similar.

Do you have a use case for a cluster? And the hardware to support it? I have a few Windows failover clusters on my ESXi box at home, and a SQL AlwaysOn availability group. This is mainly just because I can, since I have 160GB RAM in the box. The downtime incurred from reboots for updates isn't that bad for a home use case.

>300w constant is only about $5 a month
Where the fuck do you live where electricity is $0.02/kWh? Because here in Chicago it's $0.12.
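The math is easy to check yourself; a minimal sketch (the rates are examples, plug in your own):
[code]
# Monthly electricity cost of a constant load.
def monthly_cost(watts, usd_per_kwh, hours=730):  # ~730 hours in a month
    return watts / 1000 * hours * usd_per_kwh

print(monthly_cost(300, 0.12))  # Chicago-ish rate: ~$26/month
print(monthly_cost(300, 0.02))  # the rate a $5/month claim implies: ~$4.40/month
[/code]
So 300W constant is closer to $25-30 a month at typical US rates.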

>uptime
>1m
vm?

I need to buy a real rack

>I need to buy a rack
>even though I don't have any rack-mountable equipment

I have some, and I'll get others next.
But they're not in the picture.

>I have some
>Even though what I bothered to post a picture of is a bunch of shit boxes, some old enough that they still have dual floppy drives

As opposed to a fake rack?

I still use my 17-year-old computer for sentimental reasons. There's only one floppy drive; the other is a dummy cover.
Why would I buy a rack to put nothing in it?
You seem unusually triggered.

19" rack.

>the only rack mount equipment I have is a tape deck
>>>/hipster/

No, pure hardware.

Install an AC unit in the closet.

>he has no NAS with a custom-made ultra-silent case
What's your excuse?

Windows?

Based on what I've studied, RAID/filesystem-based solutions seem too rigid. If I have two disks, one 1 TB and the other 2 TB, the system considers both drives to be 1 TB for mirroring/striping purposes. If I buy more drives as money allows, it seems I can't just add them to the system. I'd have to make another disk array configuration, resulting in another separate mount point. I really want everything under a single namespace; if I had to track where the files are then this whole thing is pointless, since I'm already doing that with multiple external drives and it's annoying, plus they're offline storage and a major pain in the ass to manage.

CEPH seems to fix all this. Single namespace for all objects no matter which node they're on; write data and it spreads all over the system; I can configure the redundancy level per object rather than per disk array; if I have a server I can just slot in a new disk and CEPH will rebalance the whole thing to take advantage of it; run out of ports? Set up a new server and network it...

My use case is essentially data hoarding. Music, movies, etc. I'd also like to make that data available through my LAN. Ideally I'd be able to click on a file and have it streamed to the computer. Worst case, it just copies the data to a local cache for consumption.

Well, the technology itself is quite interesting; I learned a lot trying to figure this stuff out.

>custom-made ultra-silent case
>literally a few wood pieces over a prebuilt NAS

The foam inside dampens the vibration caused by the HDDs though, which caused most of the noise.

>resulting in another separate mount point. I really want everything under a single namespace
Assuming it is Windows, you do realize that you can mount disks to a directory rather than a drive letter?

>If I buy more drives as money allows, it seems I can't just add them to the system.
You can, they just need to be bought in pairs.

>CEPH seems to fix all this
And will introduce its own set of problems, such as performance. Everything is ultimately being done over HTTP.

>My use case is essentially data hoarding.
No, it sounds like your use case is poorfagging it as much as possible if you can't buy disks in pairs.

I always see these HP boxes on offer. Can you give us any more pros/cons on them?

It's a low-end desktop computer with a BMC and 4 hot-swap bays. They're pretty pointless when you can just whitebox the same thing and actually have expandability.

>performance

How much of a performance hit is it really? Is it literally unusable over a network?

The system just needs to store the data, provide a single identifier for it and be easy to expand capacity over time. I'm fine with just copying the files over to the computers if that's what ends up happening.

What other problems might arise?

>poorfag

Well, I'm not a company, I'm just a student. I have a job and make some money, but it's not that much. I can't buy a lot of hardware all at once.

It's cheap as fuck and you can swap the CPU for a Xeon if you're crazy.

The Celeron model is less than 200 euros, which is cheaper than whatever you could build from equivalent parts yourself.
Considering it has two 1Gb LAN ports, iLO, and four 3.5" bays, it's a great deal for a home fileserver.

Performance for storing your music collection really won't matter.
>listening to a guy posting an Internet Explorer screenshot

Still, I recommend buying similar-capacity drives. Better to wait for the money than to create overly complex setups.

>The system just needs to store the data, provide a single identifier for it and be easy to expand capacity over time
You've just described your current configuration with SMB.

>Is it literally unusable over a network?
No, it just has higher latency. Again, object storage is for large scale-out installations, something you aren't going to have at home.

>I can't buy a lot of hardware all at once.
Buying 2 hard drives at once isn't "a lot".

>I wish I had that disk subsystem
>the post

but everything about it is retarded

>mount disks to a directory

This is the thing I want to avoid. I don't want to dedicate a whole disk to a single directory. If I mount a 1 TB array on /music, then what happens if I need more space? Do I buy more disks, make a new array and mount it on /music2? Do I make a new, larger array, mount that on /music and copy the old files over?

I don't really want mount points to exist. To me it makes much more sense to just hand the music over and let the system figure out where it wants to place the data. That way I can just add a bunch of random disks and storage expands, references to data stay the same, and data is only moved if the rebalancing process needs to.

btrfs can do that,
or LVM + a filesystem on top.
Obviously, you won't have any redundancy if you go that way.
You'll learn the hard way.
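For what it's worth, growing a mounted btrfs filesystem really is just one command per disk plus a rebalance. A minimal sketch driving the CLI from Python (the device and mount point are placeholders; assumes btrfs-progs is installed and the filesystem is already mounted):
[code]
import subprocess

# Add a new device to an existing, mounted btrfs filesystem,
# then rebalance so existing data spreads onto the new disk.
subprocess.run(["btrfs", "device", "add", "/dev/sdX", "/mnt/pool"], check=True)
subprocess.run(["btrfs", "balance", "start", "/mnt/pool"], check=True)
[/code]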

man, don't just put it over a carpet, that's shit.

Seems CEPH allows specifying replication at the object level, so I can tell it to make more redundant copies of something important rather than of a whole drive.

I just don't see how btrfs or LVM would help me when I run out of SATA ports. Ceph covers that by just letting me add another computer.
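For a feel of what that single namespace looks like in practice, here's roughly what talking to a Ceph cluster looks like through the python-rados bindings (the pool name and object key are made up, and this assumes a working cluster and a config at /etc/ceph/ceph.conf):
[code]
import rados

# The client addresses objects by key; it neither knows nor cares
# which OSD/disk the data physically lands on.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("media")               # hypothetical pool
    ioctx.write_full("music/album01.flac", b"...")    # store an object
    data = ioctx.read("music/album01.flac")           # read it back by key
    ioctx.close()
finally:
    cluster.shutdown()
[/code]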

Nothing beats object storage if you have compressed stuff like music, pictures, videos, archives that can't be modified in place, and pretty much everything else that is infrequently modified.

If you're doing frequent small reads and writes, like a database, then a local SSD/HDD with RAID is the only option.

>I don't really want mount points to exist
You clearly don't understand how file systems work.

>then what happens if I need more space?
Create a storage space for that volume and expand it.

>blah blah blah I want magic and unicorns
Did you even read the system requirements?
docs.ceph.com/docs/jewel/start/hardware-recommendations/

If you can't afford to buy disks in pairs, you can't afford to run an object storage system.

>SATA ports
>Not SAS
For fuck's sake, you sperg, look into what SAS expanders are.

>inb4 SATA port multipliers
They're always shit and a buggy mess.

btrfs can do it too; it would be simpler and faster, desu.
Scrub etc. is nice.
But still, with say 1 TB + 2 TB, you'll have at most 1 TB of data that is safe.
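That usable-space rule is easy to compute for any mix of disks; a small sketch of the general btrfs raid1 formula (every chunk needs copies on two different devices):
[code]
# Usable space in btrfs raid1 with mixed disk sizes:
# usable = min(total / 2, total - largest_disk)
def raid1_usable(disks_tb):
    total = sum(disks_tb)
    return min(total / 2, total - max(disks_tb))

print(raid1_usable([1, 2]))     # 1.0 TB -- the extra TB on the 2 TB disk sits idle
print(raid1_usable([1, 1, 2]))  # 2.0 TB -- a third disk lets the big one pair up
[/code]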

>when I run out of SATA ports
Assuming you do, there are cheap PCI-E cards.
I have a computer with 9 drives and one with 13.

>add another computer
Aren't you trying to save money?

>AIO
>no cooling on the northbridge at all
WEW
LAAD
I can't even count the number of dead X58 motherboards brought into my old shop that were killed by heat.