Home server general - /hsg/

comfy behind-the-sofa homeserver edition!
+ run your own DNS server edition: zwischenzugs.com/2018/01/26/how-and-why-i-run-my-own-dns-servers/
+ RISCV Homeservers NOW
Are you interested in getting better at Linux or BSD administration and configuration? Becoming a systemd expert? Or maybe you hate that shit and want a cozy little BSD machine to run services on and interact with. Or practicing more advanced and complicated networking setups.

>news:
> LKML is hosted on somebody's homeserver!
> Everybody is switching away from freebsd, nobody knows why

>chat
> discord.gg/9vZzCYz
> or use riot.im and join riot.im/app/#/room/#homeservergeneral:matrix.org

youtube.com/watch?v=Del1GNuODL0

Attached: 14b5bf8de623aef0e34dc3fd81c0228c4bdc43be194c58855c503579806acb69.jpg (1600x1132, 1021K)

For a second I thought I was in /cgl/ and this was a campaign to shame girls who don't dress like thots in winter. But it's just Sup Forums and Japanese art, so it's just hating females as usual.

you're projecting hate onto this, it's just cute

>switching away from freebsd
I happen to know a guy who hasn't installed a new distro since 2009; he's using Source Mage.

This thing is tough to break, good for your home server. I recommend it.

Triggered roastie detected

Attached: 1515101080306.jpg (2048x1536, 233K)

pls be my belly gf

>Windows server
What kind of proprietary sheep do they take me for?

Anyone know how much storage you need before using ZFS is worth it? Also how is the Linux implementation?

Attached: 1520489578878-g.jpg (1200x1600, 339K)

Home servers here all run FreeBSD.
Why would one switch to an obscure Linux?

Does FreeNAS require server-grade hardware, or does the forum like to blow smoke up their asses?

Reliability. And it's no longer an obscure distro, by the way; I've seen it mentioned often enough on the Internet to have its own memes.

FreeBSD is stable, and ZFS is integrated into the OS.
A build server is easy to set up, and every build is done in a fresh jail.
Why would Source Mage be more reliable than FreeBSD?

ZFS (and FreeNAS by extension) wants server hardware, especially ECC RAM, if you can get it. But if you can't and run it on consumer stuff, it's still much better than nothing. You'll run into a lot of sanctimonious pissants who'll insist otherwise though, and if you can't have ECC or whatever then everything's pointless and you're going to lose all your data because you obviously don't care enough about it to spend a thousand dollars like they did. Ignore these people as much as you can and make do with the best hardware you can get.

ZFS for a home server is a meme. I am sure many idiots thought it was a good idea to use something specifically used for database files.

...what?

Attached: elderly_waiting_to_b.jpg (576x768, 54K)

A hostile takeover initiated by SJWs, diverting foundation resources to things like hiring a diversity consultant, will degrade the quality of the project in the long term.

Besides thanks to the OpenZFS project the code is shared between all platforms now and ZFS on Linux is pretty much the same with the added bonus of wider driver and technology support.

>Paying an extra $10 for your RAM makes you sanctimonious
And this is why I call you all poorfags

Attached: Why I call people retards and poorfags.png (2026x1916, 558K)

Data integrity is very important when you start going 50+TiB

>I don't understand what UREs are
>I don't understand why this matters for array rebuilds

So? One more reason to use ZFS for its checksumming and self-healing capabilities.
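For reference, exercising that self-healing is a one-liner; a minimal sketch, assuming a pool named `tank` (the name is hypothetical):

```shell
# Start a scrub: every block is read, its checksum verified, and any
# block that fails is rewritten from a redundant copy (mirror/raidz).
zpool scrub tank

# Progress, errors found, and repairs made show up here.
zpool status -v tank
```

Run it from cron monthly or so and ZFS will catch bit rot before you ever notice it.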

Wtf. A girl who bundles up is cuter.

If she's still wearing a skirt instead of actual pants, it's hard to say she's really prioritizing warmth.

What's the meme about it?
Works fine here.

>being this mad

You can run FreeNAS fine without server grade hardware.
ECC is always nice to have

I want to unbundle right until we're both naked under the kotatsu.

but she's cuter all bundled up like that

The tights and boots would keep her warm.

Has anyone used Plinth? I installed and ran it in Debian 9, and it says it's listening on localhost:port, but when I open my browser to that it's just a blank page.

Sup Forumsentlemen, I need something to use as an archive and 24/7 torrent box, but I'm a fucking noob. What should I buy for 300 euros (excluding HDDs)? At least 4 bays...

>not staying cold to burn more calories
No wonder white chicks are so fat.

But you are better off using EXT4 if you have a big number of files spread across a big number of directories.

Journaling has been there since ext3.

>using zfs in a system without files of more than 50 gigs
Why is Sup Forums so plebeian?

Because many other filesystems don't have snapshot support.
I can pretty much add as many disks as I like without having to deal with all kinds of fuckery.

it gets worse

>using zfs when you can't even afford ecc ram

>tfw fell for the ZFS meme and now want to switch from SoyBSD to DragonflyBSD but have to figure out how to move 10TB of data before doing so.

Attached: zfs.png (430x258, 13K)

Why use DragonflyBSD?
HAMMER2 will probably be ready for production in a few years.

So there's more RAID56 fixes going into btrfs in 4.16. When's it finally gonna be honest-to-god 100% fixed?

>give up on that shit user, it'll never be ready
It's still more likely than ever seeing block pointer rewrite in ZFS. My pool's getting full and I want off this immutable-vdev ride.

> tfw fell for the ZFS meme
Well, it's about what you get on BSD.
> have to figure out how to move 10TB of data before doing so
Grab 10TB drive, rsync -ac to it.

> When's it finally gonna be honest-to-god 100% fixed?
How would anyone know? If they knew, it'd probably get fixed.

OTOH few are holding their breath for in-filesystem RAID. mdadm's RAID5/6 is stable and performs well, it's what you'd generally use.
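What "generally use" looks like in practice, as a sketch; the device names are made up and --create destroys whatever is on them:

```shell
# Build a RAID6 array from four disks (survives two disk failures).
# WARNING: wipes existing data on the listed devices.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Put an ordinary filesystem on top and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage

# Persist the array definition so it assembles at boot (Debian path).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```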

> My pool's getting full and I want off this immutable-vdev ride.
mdadm. Or if you can take some instability & complexity: Lizardfs/moosefs, ceph, glusterfs, ... there are quite many that can do rather flexible combinations of erasure coding & object / file based replication. And if you already had a machine suitable for zfs, it's probably not like the increased hardware requirements over mdadm + filesystem unexpectedly hit you.
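For the pool-is-full problem specifically: growing an md array by one drive, the thing ZFS's immutable vdevs won't let you do, is routine. A sketch with hypothetical device names:

```shell
# Add a new disk as a spare, then reshape the array to use it.
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=5

# The reshape runs in the background; watch it here.
cat /proc/mdstat

# Once done, enlarge the filesystem to use the new space (ext4 can
# do this online).
resize2fs /dev/md0
```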

but muh checksums

> What to buy for 300 euros (excluding hdd)?
Onboard Intel, like some J1800, J3455. Cheap, & asrock for example has some boards.

You can power these off a Chinese PicoPSU clone off Aliexpress / eBay with good power efficiency. Or you can get a decent PSU for low wattage use.

Should still leave budget for a case with HDD trays or a few SATA hotswap bays (stock on the case or added into 5.25" slots) or whatever.

Alternatively, build two Odroid XU4 cloudshell setups for two drives each, or stack four ODroid HC2 [might be better in case you're intending to use a distributed filesystem and these are the storage nodes, you can get 4 GBE throughput and independent failures that way].

As for Lizardfs/moosefs, ceph, glusterfs: they usually use hashes to address and verify files.

mdadm RAID6 can also serve a checksumming function at the block level. Scrub monthly or so, and anything that picked up a bit error from radiation or such should be repaired.
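The monthly scrub mentioned above can be triggered by hand like this (md0 being an example array name):

```shell
# Start a full-array check: every stripe is read and parity verified.
echo check > /sys/block/md0/md/sync_action

# Progress shows up here while the check runs.
cat /proc/mdstat

# After it finishes, this counter should be 0; nonzero means
# mismatches were found (run "repair" instead of "check" to fix them).
cat /sys/block/md0/md/mismatch_cnt
```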

Nothing really stops you from using checksums or erasure codes in any filesystem sitting on top of a mdadm RAID5/6 either. Or par2 files. Or whatever.
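The simplest do-it-yourself variant of that is a plain checksum manifest; a sketch using coreutils' sha256sum, with /data as an example path:

```shell
# Build a checksum manifest for everything under the data directory.
cd /data
find . -type f ! -name MANIFEST -exec sha256sum {} + > MANIFEST

# Later: verify everything against the manifest; silently corrupted
# files are reported as FAILED.
sha256sum --quiet -c MANIFEST
```

It doesn't self-heal like ZFS, but it does tell you exactly which files to restore from backup.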

RAID5/6 is the last thing you want to use for production

Yea, but now you're using all kinds of stuff that ZFS can do without any hassle.

Only if you put a distributed filesystem on top.

Otherwise SW RAID5/6 is basically the best option at this point. The CPU time to calculate parity is nearly irrelevant & it scales well to a bunch of dozen drives and just works with little admin effort.

Some refurb workstation, they go on ebay all the time.

I fell for the corral meme months ago, and now I want to move back to mainline freenas, but I've got an encrypted pool and I'm nervous as shit about it.

ZFS may have everything in one command line, but it doesn't even allow growing its RAID5/6 arrays by one drive.

It uses more hardware and still performs like shit. Do RAIDZ3 on 20 drives and the array will perform like 3-4 drives on their own or something. Never mind the performance fluctuates randomly, too, and even doing luxury shit like using a really powerful machine and SSD cache/buffer drives and such won't eliminate that.

And features like deduplication require a downright ludicrous amount of RAM.

It's basically only MAYBE easier on small arrays that stay as they are, but I found it very unconvincing otherwise.

mdadm, on the other hand, works well up to a scale where you're pretty sure you want a distributed cluster filesystem anyway

I fuckin laughed

Haven't had problems with ZFS on huge or small arrays.
You don't need to turn on deduplication; I've used 10TB with 3GB of RAM without any problems, and that build is still running.
And the "powerful machine" claim is retarded; even my shitty 8-year-old dankpad keeps up nicely with 2x 500GB SSDs.

I have learnt a lot from this thread. Thanks a lot OP you flaming black faggot - I never learnt anything about servers, all I learnt was that you fuckin suck but that was more than enough for me

>So
It proves you have no idea what you're talking about, and are the perfect example of the Dunning-Kruger effect.

> Haven't had problems with ZFS on huge and small arrays.
You probably just didn't care about it performing well, i.e. at ~1/2-3/4 of the individual drive speed you'd get if you just put a boring old normal filesystem on the drives and wrote to them at the same time.

In my tests and internet research, ZFS is always bad, on both BSD and Linux.

> You don't need to turn on deduplication
Sure. The problem is that just about all fancy ZFS features use too much hardware / are just too slow, and the baseline performance is pretty shit and you can't manage the array as well as on mdadm [never mind the more complex distributed filesystems that are actually very flexible].

So I ended up with "why bother"?

> i've used 10TB with 3GB of ram without any problems
Probably not even with deduplication.

And yea, having anything from a decent desktop to a custom-built server machine serve 10TB of storage on 3GB of RAM is pretty silly.

Even the super bloated Ceph now usually does ~1GB RAM per 10TB drive or so, mdadm will do well at like 128MB per 10-16TB drive.

And yet, the ZFS array on some desktop tier machine with 3GB RAM per 10TB will generally still perform worse than a low power Celeron thing, given they both are attached to storage controllers that could run more or less all drives for the same drive count at full speed.

Tests ending up with surprisingly terrible results like that led to me testing multiple setups and researching the topic online, but yea, it's basically just ZFS.

People just throw tons of RAM, fast computers and SSD at it in an attempt to make it work better and I guess many succeed in making it run well enough for their usage, but it's not really sane with relation to what hardware you otherwise need.

My tests are fine; dedup is disabled by default with less than 4GB of RAM.
For home use 3GB is fine, since I don't use it constantly.

At work we have multiple 50TB and even 150TB NASes with ZFS. Yes, RAM is important there since they're in use 24/7, but that's fine for big servers.

What's so silly about it?
Even my laptops use ZFS, without dedup of course, but I haven't seen a performance decrease in the tests I've run vs. regular SW RAID.

So, Sup Forums, I want to start with a basic setup. What's the most economic setup I can get in terms of power consumption? I have a Raspberry Pi 3, but the architecture is too limiting.

What are you even trying to do?

See

>just hating females as usual.
>a member of sex A prioritizing features that make them more attractive over their personal comfort or health
By your logic, when I post a random dude bro doing similarly retarded shit, I'd be hating on men.
Fucking *ism-baiters, humankind needs to get rid of you somehow.

New to servers in general here. I want to use it as a media streaming center for 3+ streams at once, mostly Plex but to also back up all my stuff.

I have an NVidia Shield, but NAS to it feels... Lazy. I have 12 TB and counting of media and want to start learning things and be a cool guy.

Do I jump off the deep end and go for a $1k build? Proprietary shit? It'll be handling mostly 1080p stuff on three streams and minor access otherwise.

I agree, we need antiismism!

>WOT IF BEES RAN ON BATT'REES AND LIKE COULD KILL-A-LAD
I'm smelling a sort of a final solution here

I have a VPN, HTTPS and Torrent Client running. My VPN is taking all the processor and memory from it.

nice trips

Attached: nazi-rule-of-new-york.jpg (634x481, 82K)

Is the VPN bandwidth okay? If so, you might as well just get another Pi, or maybe a cheap router you can flash dd-wrt onto if you want a dedicated box for the VPN.

What’s with the retarded hate against ZFS in this thread? If you don’t like it do not use it.

Journaling is different from checksumming. With checksums you can know if a block is bad, and in the case of ZFS it automatically gets repaired.
Another roleplayer with zero arguments.

> I want to use it as a media streaming center for 3+ streams at once
> Do I jump off the deep end and go for a $1k build?
You can do this with a single ODroid HC1/HC2 or such, far from $1k.

> Proprietary shit?
Why would you bother with that type of crap? An easy samba share or upnp dlna streaming thing will do.
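For what it's worth, the "easy samba share" really is just a few lines; a minimal sketch, with the share name and path as example values:

```shell
# Append a minimal read-only media share to Samba's config
# (share name and path are examples), then restart the daemon.
cat >> /etc/samba/smb.conf <<'EOF'
[media]
    path = /srv/media
    read only = yes
    guest ok = yes
EOF
systemctl restart smbd
```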

EXT4 has checksumming too, using ZFS is retarded.

Well, couldn't you use ZFS as a sort of raid substitute? Ext* can't do that.

Attached: 1298229290281.jpg (640x624, 81K)

Are you the same guy? Dude, if you want to use ZFS just 'cause, then by all means, but defending it when even stuff like XFS exists is silly. I personally use JFS on my laptop, so I'm okay with exotic file systems.

Not really, just came into the thread.
I use a horrible mdadm > luks > ext4 stack for media on my home server and I've been thinking of changing it. But I'd need it to have encryption.

Does XFS or JFS do encryption? Or do I have to do it separately with something like luks?

Don't know about encryption, never done it. Maybe I should jump on that wagon.

can't be helped. dress code is school regulation

> Does XFS or JFS do encryption?
Not AFAIK, and I don't think they should.

Use LUKS, yes.

Why not skirt over pants?

>I use a horrible mdadm > luks > ext4 stack
BTW, this isn't horrible. It's really rather good, unless your CPU has trouble with LUKS somehow.

It used to, but I switched CPUs and now it's fine.
I've been using it for many, many years and just assumed there must be a better way of doing things.

But then again I really don't need anything more fancy than that for my movies/series/anime/music/whatever.

Better how? Everything involved is pretty efficient, well-tested and stable. Luks has a good bunch of options if you want different cryptography, and if you wanted a few more tricks, you could put LVM in between.
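With LVM slotted in, the full stack looks roughly like this bottom to top; device and volume names are made up, and luksFormat wipes the target:

```shell
# mdadm array -> LUKS -> LVM -> ext4
cryptsetup luksFormat /dev/md0           # WARNING: wipes /dev/md0
cryptsetup open /dev/md0 cryptstorage    # appears at /dev/mapper/cryptstorage

# Carve the encrypted device into logical volumes.
pvcreate /dev/mapper/cryptstorage
vgcreate vg0 /dev/mapper/cryptstorage
lvcreate -l 100%FREE -n media vg0

mkfs.ext4 /dev/vg0/media
mount /dev/vg0/media /mnt/media
```

The win over the plain stack is that you can later shrink/split volumes or add another PV without touching the filesystem layout.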

Hiii server magicians, I recently installed Ubuntu Server and that Nextcloud thing on my old notebook. This was the first Linux I ever installed, and it even works! ...somehow. Very excited about this; I can now share all things across all devices.
But I don't know anything about security, and this is worrying, so I have a question: do I need HTTPS to connect to this local server if it should not be accessible from the outside internet full of hackermans and botnets? (One helpful user said it won't be accessible if I don't forward ports in the router; I don't know what port forwarding is, so I guess they aren't.) Right now I access the cloud through the direct internal IP.
Also, if it is indeed open to the dark forces of the internet, is there any way to know this?
Have a cute cat in advance, thanks

Attached: cute cat.jpg (922x1280, 97K)

>do I need https?
If you intend to access it from the internet - for example from an internet cafe (like it's 2001 again) or a mobile device - then yes. You'll need your own domain and cert, which isn't that difficult nowadays with Let's Encrypt.
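With certbot that boils down to roughly this; the domain is a made-up example, and it assumes the name resolves to your public IP with port 80 forwarded to the box:

```shell
# Standalone mode: certbot briefly runs its own web server on port 80
# to answer the Let's Encrypt challenge.
sudo certbot certonly --standalone -d cloud.example.com

# Certs land under /etc/letsencrypt/live/cloud.example.com/;
# point your web server's ssl_certificate directives there.
# Renewal happens automatically via the timer certbot installs.
```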

>I don't know what port forwarding is so I guess they aren't?
Good guess. Port forwarding is necessary because the network in your house is a local one. Meaning, the IP addresses of devices in your local network are not public, or publicly accessible. The only device in your home that has a public IP address is your modem/router, and that is the only thing anyone can try accessing from outside.

If you for example wanted to reach port 443 of your nextcloud device from the internet, then you'd have to tell your modem/router that any connections made to its port 443 should be FORWARDED to port 443 of the host that has nextcloud installed.
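To see what the nextcloud box is actually listening on (and therefore what could even be forwarded), iproute2's ss works:

```shell
# List TCP ports in LISTEN state, numeric, without resolving names.
ss -tln

# Only ports the router explicitly forwards are reachable from the
# internet; everything else stops at the router's public IP.
```

A practical outside-in test: connect to your public IP from a phone on mobile data and see if anything answers.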

Thanks
>or a mobile device
Wait hold up does that mean I can't access it from phone or tablet if I don't use https even if the thing is in the same local network? I haven't tried yet though

Nah, it should be fine, unless whatever software you are using on the mobile device has some kind of hard-coded requirement for https, but that would be silly.

https protects you if you're on an open wireless network or over the internet. if you are only using it locally then nobody cares.

Rather than getting SSL certs, set up some self-signed OpenVPN server/client infrastructure, and then everything, even if it doesn't have SSL or TLS built in, is secure.

ITT: unemployed roleplayers criticizing a rock-solid stable file system stack used in high-grade production superclusters and ZFS appliances.

I currently have a HP DL380 G6 as my home server, considering switching to a custom Threadripper/Epyc what do you Sup Forumsuys think?

Oh thanks guys

overkill
but if it were me I would use Intel, because I don't know if AMD boards have good IOMMU groups yet

I've had quite a few files that looked fine, proper size etc., but reading them returned zeroes or garbage.
Never again.

zfs on linux or bsd?
I've had no problems with it on freeBSD
I would never use it on linux

I'll take the girl that tries hard to be cute for me

Sorry, it's early. I used to use ext3/4 on my server before and had the aforementioned issues.
With ZFS on Linux (Debian Jessie, now upgraded to Stretch), I've never had issues, and it's been running for almost 3 years now.
Using a FS without checksums on a server? Never again.

> Everybody is switching away from freebsd, nobody knows why

Suuuuree... nobody knows why :)

bump

If you're not using Proxmox, your server doesn't matter.

Attached: 1520379689325.jpg (600x532, 42K)

You could set up some VPN with OpenVPN or such. Or Wireguard.

Granted, that will still have to be routed to "the outside" internet reachable ports, but like ssh they're pretty security-centered and will not THAT easily allow anyone in except in cases of gross misconfiguration.
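A WireGuard setup of that kind is only a handful of lines; a sketch with made-up addresses and placeholder keys:

```shell
# Generate the server keypair.
wg genkey | tee server.key | wg pubkey > server.pub

# Minimal server config; tunnel addresses, port, and peer key
# are example values.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
PublicKey = <client public key>
AllowedIPs = 10.0.0.2/32
EOF

wg-quick up wg0   # forward UDP 51820 on the router to this host
```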

>ECC is always nice to have

A ZFS scrub with faulty non-ECC memory can fuck up a pool. If you're going to use it, use proper hardware.

I finally got tinyboard working. Now I can run my own image board.

>tfw no qt gf to cuddle for warmth

So why aren't you using a heavily upgraded 20 year old UltraSPARC machine as a homeserver?
>It doesn't have OoOE, so no Spectre/Meltdown
>It has PCI slots(which you can convert to PCIe), so it's expandable and can fit modern storage (I have a couple 1TB drives in it, connected to a SATA II controller)
>It has an open ISA
>It has OpenBoot, an open BIOS
>Runs Gahnoo/Loonix
>It automagically restores your virginity after (an easily achievable) sixty-day uptime

Attached: IMG_20171106_024757.jpg (2560x1920, 1.84M)

>sixty-day uptime

Anything less than triple digits is shit tier.

>So why aren't you using a heavily upgraded 20 year old UltraSPARC machine as a homeserver?
Not OP, but abysmal power efficiency, bad performance ('specially if you want to do more than just "some" slow storage), and it's not exactly easy to use with Gentoo/Linux either.

keep trying user, I'm sure you'll find someone