Homelabs

Any of you have a homelab or server grade hardware? Single host, hyper-convergence, virtualized, bare metal, Linux, Microshit, BSD, whatever.

>Windows 10
OUT

Anyway what do you do with your homelab?

Pic in OP isn't mine. Running a type 1 hypervisor with a bunch of virtual machines for things like file servers, web servers, and a build server (so I don't have to compile shit on my laptop).

You should start a homelab general to get more people to make homelabs. I always wanted to get into that community but the only one I ever found was /r/homelab and they're all retarded

Yes, I have a 3-host VMware ESXi environment with Synology storage as the backend, hosting about 20-30 VMs (it could easily host a lot more, though). I have probably 10 VLANs for various things. I have full HA, DRS, etc. because I use pirated VMware keys. (Meaning the VMs move around automatically to resource-balance the cluster, and if a server dies the affected VMs automatically boot on one of the still-alive hosts. I can take a server down for maintenance and all the VMs will move to another server via vMotion, etc.)
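
If you're wondering what DRS actually buys you, here's a toy greedy rebalancer in Python. It's nothing like VMware's real algorithm (which weighs CPU/RAM demand, affinity rules, etc.), and the host names and loads below are made up; it just shows the idea of shuffling VMs from hot hosts to idle ones:

# Toy illustration of what DRS automates: greedily move VMs from the
# busiest host to the idlest one until the loads are roughly even.
# Host names and "load" numbers are invented.

def rebalance(hosts):
    """hosts: dict of host -> {vm_name: load}."""
    def load(h):
        return sum(hosts[h].values())

    while True:
        busiest = max(hosts, key=load)
        idlest = min(hosts, key=load)
        gap = load(busiest) - load(idlest)
        # Only moves that actually shrink the imbalance qualify.
        movable = [(l, vm) for vm, l in hosts[busiest].items() if l < gap]
        if not movable:
            return hosts
        l, vm = min(movable)
        hosts[idlest][vm] = hosts[busiest].pop(vm)  # the "vMotion"

cluster = {
    "esx1": {"web": 30, "db": 50, "build": 40},
    "esx2": {"files": 20},
    "esx3": {},
}
print(rebalance(cluster))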

The servers are cheap Shuttle PCs with about 32GB RAM each and i5s; they perform really well and didn't cost much at all. They don't have drives, but they each have at least four 1Gb Ethernet NICs, two of which are bonded for the storage backend (running both iSCSI and NFS). Total cost for all three Shuttles-as-servers was probably around $800.

The Synology backend: well, I can't say enough good things about Synology. The only drawback is price, but IMO storage is one of those things that's worth paying for, because of the extreme timesink when you have an issue or lose data. The syno cost about $800, plus about $300 for drives, providing a usable 3TB of very high performance storage (thanks to an SSD cache).

+1 for homelab general

>ethernet nics, two of which are bonded for the storage backend (running both iscsi and nfs)
can you elaborate on how this works plz?

I'm waiting for the day when I come home and find 700 lbs of server equipment has fallen through the floor into the apartment below me.

Watcha got there, sport?
t.noob

From bottom to top:

HP C7000 BladeSystem with 4 half-height blades and 2 full-height, Tripp Lite 1350W UPS, IBM x3650 M3, IBM x3650 M3, nothing-special self-built 2U running pfSense, Blue Coat PS7500 PacketShaper, Cisco 4503-E switch. I also just added an HP DL360e G8 to run FreeNAS on that's not in the picture.

I'm talking about link aggregation (sometimes abbreviated as LAGG). It lets you take two Ethernet ports and "bond" them together as one logical connection with double the throughput of a single port. Both sides of the connection (switch and host/device) have to support LAGG and have it enabled.

If I used one 1Gb NIC to connect to the syno storage, all the running VMs on that ESXi host could read/write data to their hard drives at only a total rate of 1Gbps (minus some overhead). They'd all share the 1Gbps, and multiple VMs doing a lot of IO simultaneously could saturate that, so performance would suffer. So I take two 1Gb NICs and configure them in both ESXi and in my switch to use LAGG, raising that to 2Gbps instead. I have a cheap $50 HP switch that supports it. (Pretty much any full-featured/managed switch will do LAGG, in my experience.)
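
If you ever want to sanity-check a bond from the Linux side, the standard Linux bonding driver exposes its state under /proc. A minimal Python sketch; the path, "bond0" name, and field names assume that standard driver (Synology DSM uses it once you enable a bond), and ESXi does its own NIC teaming so it has no such file:

# Minimal sketch: read the Linux bonding driver's status file to check
# the bond mode and each slave NIC's link state. Assumes the standard
# Linux bonding driver; "bond0" is just the usual default name.

def bond_status(path="/proc/net/bonding/bond0"):
    mode, slaves, current = None, {}, None
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "Bonding Mode":
                mode = value
            elif key == "Slave Interface":
                current = value
            elif key == "MII Status" and current:
                slaves[current] = value  # "up" or "down" per slave NIC
    return mode, slaves

mode, slaves = bond_status()
print("mode:", mode)
for nic, state in slaves.items():
    print(f"  {nic}: {state}")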

Hope this is understandable

I'm just starting; I don't even have a rack cabinet yet, but I got myself an HP DL360 G7 which I plan on using as a pfSense router. Looking for a cheap gigabit switch (don't need PoE).

Nortel BayStack seems like an ideal candidate, but I hear they are loud as fuck.

okay that makes sense.

so your setup (or one similar to it) looks kinda like this?

internet ---> router (pfsense)
                   |
                   v
           (managed) switch
            ||           ||
            vv           vv
      VM host (ESXi)    NAS

As long as you have another room to put all the stuff in, it's not that bad. My bedroom is next to the room my server rack is in, but I can't hear it at night. The only downside is that since I leave the door closed 24/7, it can get a little warm in there during the summer.

so what exactly do you do with all this shit anyway?

hack the planet while riding skateboards

Home lab, mostly used for malware replication/RE, honeypotting, etc.

Viridian - DL380 G5 (2x Xeon 5440 / 64GB / 8x 300GB SAS)

ESXi 5.5 via USB.
10 VMs, mostly Server 2008 R2: 2 DCs, pfSense, Exchange 2010, Lync 2010, SharePoint 2010, SQL 2008 R2, and 3 Win 7 clients.

All legacy testing. I despise this era of software. Windows 7 was only good because we didn't know any better...

Threshold - Dell PowerEdge T620 (2x Xeon 2660 v1 / 160GB / 4x 250GB SSD / 12x 2TB / 2x GTX 960)

Server 2016 Datacenter, because AVMA and nested Hyper-V. 4 DCs, Exchange 2016, SharePoint 2016, SCCM/SCOM, TFS, and 3 Win 10 VMs for VDI testing.

And two nested vhosts for Storage Spaces Direct and automation testing, each with 6 cores / 48GB / 4x 200GB.

To finish out the rest of the Threshold network: the firewall is a PC Engines APU1D4 running Untangle 12.x with a home subscription, because fuck ads and marketing. The switch is a Dell X1018P, wireless is an EAP1750h, and the phones are Polycom CX600s.


Viridian and Threshold are two different environments on two different public IPs. The Comcast business gateway (fuck Comcast, but it was literally my only option out here) is the only hardware the environments have in common.

HP DL380 G6 with 2x L5630 and 32GB RAM, running XenServer with a few Linux/Unix VMs.
MSA60 for DAS storage, 6x 3TB.
Which hypervisor is best and why? I've used ESXi and XenServer mostly.

A small and old OptiPlex, currently for Django+Postgres projects, SFTP, and irssi.

ESXi is the industry standard for a reason.

>Which hypervisor is best and why?
It depends on what you're doing.

For me, it's Hyper-V, because of Server 2012 / 2016's AVMA (Automatic Virtual Machine Activation).

If I weren't using Windows and didn't want to deal with commercial licensing, I might go Xen or some other option.

ESX is good if you have a mix of Windows and non-Windows VMs, or if you need to run macOS as a VM (even on non-Apple hardware).

If what you have works, stick with it.

What's the point of hoarding old server grade hardware?

I wouldn't say ESXi is "standard". I've seen many hypervisors used. ESXi might be popular, but popularity doesn't define how good something is. Is it faster, or does it have some unique features?

Working on it right now. Just don't mind the Apple-labeled static bags; they were the smallest ones I had laying around.

So, I've seen most people use a used server or workstation as their homelab. Anyone use their old laptop as a cheap homelab?

My current home media server is an old Acer Aspire One.

I have a few different boxes, although I've been trying to condense and virtualize all that I can.
I've narrowed it down to the following:
Upgraded HP Z400 as my main VM test box, HTPC, and soon-to-be main desktop once the 1080 Ti comes out.
E5645 Xeon
48GB DDR3
550 Ti

ThinkPad T540 used as a VM host for a log server, a DNS server, a Waifu2x upscaler, and an Autismcraft server.
i5 5200U
12GB DDR3
940M
This machine is missing the screen thanks to an autistic executive at work, so I repurposed what was a dead machine into a server.

HP 8000 used as a seedbox. Working on a way to replace this with a T420 that is missing a keyboard.
Q8400
8GB DDR3

A custom NAS that is in the process of being replaced.
Q8400
4GB DDR2
12TB Storage

My gateway device is a WatchGuard M200.

I'm currently building a NAS that will house twenty 4TB or 8TB drives. I've bought the case, but that's about it so far for that project.

This thread is incredibly inspiring.

To those with homelabs: what is your reasoning behind creating and maintaining them? Are you a sysadmin or an application developer?

What's managing those VMs, famalam?

Pretty close, except my pfSense is actually one of the VMs, and the two NICs are just for a fast connection to the storage backend. The network connections the VMs use go over a separate NIC.

In the pic, the blue lines are on the storage VLAN. The other lines are used for regular network purposes for the VMs, etc.

It's actually much more complex than this because I'm always playing around. I have a Tor-isolated subnet; I have a SPAN port that lets me copy all the FiOS traffic and send it to an eye-candy visualizer of everything happening to/from the Internet; I have an extra pfSense with a USB wifi adapter passed through, configured to activate if my FiOS goes down and then mooch wifi from my neighbors; etc. Happy to talk about any of it.
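
The wifi-failover logic is roughly this (a toy Python version; pfSense already does the real thing with gateway monitoring via dpinger, and switch_to_backup() here is just a placeholder for whatever would actually bring the backup up):

# Toy version of the "activate the backup WAN when FiOS dies" logic.
# switch_to_backup() is a placeholder, not a real API call. Ping flags
# are Linux-style (-W is the timeout in seconds).

import subprocess
import time

def ping_ok(host, timeout=2):
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def switch_to_backup():
    print("primary WAN down -> bringing up backup pfSense + wifi")

def monitor(target="8.8.8.8", failures_needed=3, interval=10):
    failures = 0
    while True:  # runs until the link actually drops
        failures = 0 if ping_ok(target) else failures + 1
        if failures >= failures_needed:
            switch_to_backup()
            return
        time.sleep(interval)

monitor()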

VMware ESXi or Hyper-V. Don't use XenServer; Citrix is moving away from it, and it'll only fade away over time.

VMware is more powerful in my opinion, and keygens are easy to find, so you can be "fully licensed" with all the most advanced features activated for $0.

VMware has tended to lead the pack with awesome features. Their newest really awesome feature is called vSAN: it lets you have a fully redundant cluster without any shared-storage backend at all. Just a bunch of computers with HDDs can become a high-availability cluster. Think: you have three servers, each with one hard drive, and the effect is as if you had RAID working across the three drives, even though the drives are in separate machines!
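
Toy model of the idea in Python, definitely not vSAN's real data path: every block gets mirrored to two different hosts, so any single host can die and you still have a full copy:

# Toy model of the vSAN idea, NOT its real data path: every block is
# written to two different hosts, so any single host can die without
# losing data. RAID-1-ish behaviour across machines instead of disks.

import itertools

class ToyCluster:
    def __init__(self, hosts):
        self.disks = {h: {} for h in hosts}        # host -> its "drive"
        self.placement = itertools.cycle(
            itertools.combinations(hosts, 2))      # pairs of hosts

    def write(self, block_id, data):
        for host in next(self.placement):          # mirror to 2 hosts
            self.disks[host][block_id] = data

    def read(self, block_id, dead=()):
        for host, disk in self.disks.items():
            if host not in dead and block_id in disk:
                return disk[block_id]
        raise IOError(f"block {block_id} lost")

c = ToyCluster(["esx1", "esx2", "esx3"])
for i in range(6):
    c.write(i, f"data-{i}")
print(c.read(0, dead=("esx1",)))   # still readable with esx1 dead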

Not saying VMware is the best choice for everyone; it can be complex, for sure. What's your reason for building a virtualized lab?
To run VMs to experiment with? Anything will do, really. Pick something simple.
To run VMs AND experiment/practice with the underlying infrastructure itself? My choice for that is VMware. Complex, but you'll learn a lot. Just my opinion.

> vsan
It's like the gay variant of Ceph (or worse).

>homosexual storage technology

Can you give us an idea of how Ceph works and what it's good for? I don't know much about it, but my first impression is that it doesn't seem great for homelab VM storage, unless your goal is to learn/practice on storage subsystems specifically or to build your own custom hypervisor-layer builds.

Thinking about getting a server for Nextcloud. Would this work?
amazon.co.uk/gp/product/B013UBCHVU/

Not isolating entry points is a recipe for disaster.

>"homelab"
What a retarded name.
Of course I have a server in my house. It's a web server, and I also ssh into it sometimes when I don't have access to a Unix machine. It runs OpenBSD 6.0.

Is this what NSA employees do in their free time? Figures.

Don't understand what you're saying; explain?

New to servers. Is there something that can easily let me create VMs and remote desktop into them?

Literally anything. VirtualBox or VMware Workstation if you just want something on your desktop.
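
If you go the VirtualBox route, you can even script it. Rough Python sketch shelling out to VBoxManage; the VM name, memory, and port are made up, you'd still need to attach an install ISO and a disk, and VRDE (VirtualBox's built-in RDP server) needs Oracle's extension pack installed:

# Rough sketch: create a headless VirtualBox VM you can RDP into, by
# shelling out to VBoxManage. Name/memory/port below are invented.

import subprocess

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

NAME = "labvm"  # hypothetical VM name

vbox("createvm", "--name", NAME, "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", NAME,
     "--memory", "2048",     # MB of RAM
     "--nic1", "nat",
     "--vrde", "on",         # enable the built-in RDP server
     "--vrdeport", "3390")   # then RDP to host_ip:3390
vbox("startvm", NAME, "--type", "headless")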

>Zoom
>Enhance
>Ubuntu
Get fucked m8

He means virtualizing a firewall is risky.