NAS BSD thread

NVMe drives are starting to pop up like roaches, and in a few years they'll be relatively cheap for mass storage or caches. 10GbE isn't gonna cut it when drives can do 2200MB+ transfers; SFP+ will be the minimum requirement, while SFP28 or InfiniBand would be ideal but absurdly expensive.

Naturally this will require some ungodly expensive switches too.

Why is networking gear so fucking expensive? How does it make sense that a midrange 10GbE NIC costs more than a 10-billion-transistor GPU?


Because no sane person needs more than regular Gbit ethernet outside of a corporate setting. In said corporate setting the prices are irrelevant.

Fucking normalfags

>BWAAAAA i can't have non-normie stuff on a budget BWAAAAA!!!

>10GbE isn't gonna cut it when drives can do 2200MB+ transfers
Yes it is, just bond 2 interfaces, jeez.

Bonding is bugged as fuck.

>using InfiniBand to connect to your NAS instead of just connecting with an external PCIe cable

Enjoy your latency

searchsolidstatestorage.techtarget.com/definition/NVMe-over-Fabrics-Nonvolatile-Memory-Express-over-Fabrics

NVMe over Fabrics is literally NVMe over Infiniband, ya doofus.

>inb4 RoCE or iWARP
If you can afford 40 GbE, I don't see why you would make this thread anyway.
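For reference, this is roughly what attaching an NVMe-oF namespace over RDMA looks like with nvme-cli; the address and NQN below are made-up placeholders, not anything from this thread:

# discover targets exposed by the box, then connect over RDMA (RoCE or InfiniBand)
nvme discover -t rdma -a 10.0.0.10 -s 4420
nvme connect -t rdma -a 10.0.0.10 -s 4420 -n nqn.2014-08.org.example:nvme:nas1
# the remote namespace then shows up as a local block device, e.g. /dev/nvme1n1

On reasonably new kernels the same thing works with -t tcp over plain ethernet if your NICs can't do RDMA.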

NVMe over RoCE, fekkit.
RoCE won out, did it not? It's what Microshaft and plenty of others use: NVMe over RoCE.

You can get a 10GbE SFP+ Mellanox NIC for like $30 w/ a cable. Wtf are you on about? Go out and buy one off ebay.

NVMe-oF works over ethernet and SFP+ too, why wouldn't it when InfiniBand has limited cable length compared to ethernet

en.wikipedia.org/wiki/Terabit_Ethernet

Anything else might as well go kill itself.

>ethernet
It's only meaningful if you have a 40 GbE ethernet card.

>why wouldn't it when InfiniBand has limited cable length compared to ethernet
Ethernet is ~150 meters, and then you have to add switches in your topology.

You can get PCIe fiber cables that go 100 meters.

>You can get PCIe fiber cables that go 100 meters.

>no infrastructure
>expensive
>less length anyway

>expensive
Nope. A transparent fiber bridge to an expansion box will cost you a fraction of what a 40 GbE adapter will.

> It's only meaningful if you have a 40 GbE ethernet card.
Dumbass

> Ethernet is ~150 meters, and then you have to add switches in your topology.
People use RoCE in the enterprise, i.e. InfiniBand-style RDMA over standard ethernet. Native InfiniBand is mostly an HPC thing and is a bitch to deal w/. There's no reason to be running that shit at home.

> You can get PCIe fiber cables that go 100 meters.
Just use copper DAC cables and be done w/ it.

> Nope. A transparent fiber bridge to an expansion box will cost you a fraction of what a 40 GbE adapter will.
You can get a 40 GbE card for $100 on fleebay

Reminder that at those speeds, SFP+ is cheaper than xBASE-T copper

networking gear falls into two categories: cheap shit and uber overpriced shit. like $10 for an 8 port unmanaged switch or $100 for an 8 port (4x PoE). there doesn't seem to be a middle ground. anything rack mounted is overpriced simply for being rack mountable

you are a fucking retard, LACP works just fine.
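To be fair to both sides, an 802.3ad (LACP) bond on Linux is only a handful of iproute2 commands; the interface names and address here are just examples:

# build an LACP bond from two ports (the switch end needs a matching LAG)
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set enp3s0f0 down
ip link set enp3s0f0 master bond0
ip link set enp3s0f1 down
ip link set enp3s0f1 master bond0
ip link set bond0 up
ip addr add 10.0.0.2/24 dev bond0

The catch is that LACP hashes per flow, so one big SMB/NFS copy still tops out at a single link's speed; bonding aggregates throughput across multiple clients, it doesn't double one transfer.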

why would i want to use copper when i can buy fiber? it's future proof and costs less than twinax

that is not infiniband, that's twinax.

where is fibre cheaper than cat6?

ROI, kid.

a new cisco 3m SFP+ 10G DAC costs around $70 while a 30m LC-LC fibre costs $67.

because the copper meme isn't designed for those frequencies, yet faggots still want to shoehorn a material not fit for the job where it doesn't fucking belong.

> Can buy an SFP+ 10GbE card and copper DACs for $30 on fleebay.

The idiotic comments in this thread are gems.
You don't need to run fucking fiber for 10GbE unless you need a very long run. Otherwise, copper works just fine and is what the majority of people use in a home setting ...

5M SFP+ Copper DAC :
ebay.com/itm/New-Dell-Force10-10GBe-SFP-5M-Copper-Cable-Assy-CBL-10GSFP-DAC-5M-5CN56-/282598866289
$15 + free shipping.

Wtf are you clownasses ranting about? Go buy a fucking $20/$30 Mellanox SFP+ NIC and a $15 fucking cable and be done w/ it. Stop all this enterprise larping bullshit about infiniband/40GbE and fucking fiber optic cables.

Some of the worst tech advice comes from the fucking comments on /g/.
Go over to servethehome.com/ or homelab and ask what you need for what you're trying to achieve

have fun returning your sfp+ garbage after you figure out that your switch only accepts specific brands.

>You can get a 10GbE SFP+ Mellanox NIC for like $30 w/ a cable. Wtf are you on about? Go out and buy one off ebay.
OP along with everyone else in this thread is a retard

$100 isn't uber overpriced shit

forgot my pic

>virtualizing routers
>virtualizing wifi controller
>active directory
>ms CA
>ms DNS

shit's so cringe. you aren't impressing anyone with your cracked esxi enterprise plus garbage setup.

>i wish i could have nice things
>the post

i already have an enterprise plus environment at work, no need to be edgy and run that shit at home too to impress 12 year old dweebs on Sup Forums.

nice damage control though

>i wish i could have nice things instead of the owner of my company having nice things and some autist on Sup Forums
>the post

much desperate to justify your garbage on the internet. more damage control please, you are just proving my point

>$20/$30 Mellanox SFP+
>buying 2009 hardware

>being in a server thread
>have no server to post pics of
>just some gayming shitbox full of leds
i'm sorry user your fragile ego was damaged today

How do I start setting up an internal DNS server that can replace Google/OpenDNS or my ISP's DNS?

>Install Windows DNS Server role
>point your clients at it
that's really all you need to do to query the root servers

>nice things
>esx
haha

B-b-but my Linux and BSD...

Query root servers? Can't I just download or cache them?

>B-b-but my Linux and BSD...
Then go figure it out and spend half a day setting it up.

>Query root servers? Can't I just download or cache them?
You don't understand how DNS works, do you? You query the root servers, which point you at the TLD servers, which point you at the domain's name servers, which you then query directly. You can't just download all the DNS information for a domain, since just about no one allows zone transfers.

And every DNS server will cache the responses for a configurable amount of time.
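If you go the Linux/BSD route instead of the Windows role, a minimal recursive unbound config does the same job; the listen address and LAN range below are examples, adjust to your network:

# /etc/unbound/unbound.conf -- caching, recursive resolver
server:
    interface: 0.0.0.0
    access-control: 192.168.1.0/24 allow
    # no forward-zone defined, so queries recurse from the root servers directly
    cache-min-ttl: 60

Point your clients (or the DNS option in your DHCP server) at the box running it and you're done.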

Fuck this shit I'll just use IPs directly.

lmao what do you even need this many VMs for?

there aren't any VMs in that pic, but yeah there are about 90 of them across 2 hosts

Where do I get a 10GbE switch then, faggot? That's the expensive part

ebay. catalyst 3750E/3560E for 2x 10GbE ports + GbE. HP 5800 for 4/8x 10GbE ports + GbE. both are layer 3

>expensive
Under $300

Ubiquiti and Microtic have pretty cheap ones.

I bought a 24 port rackmount gigabit switch for $30
Fuck this shit.

i don't feel the need to impress dweebs on Sup Forums with my setup, and making screenshots of a cli is kind of retarded anyway.

enjoy your point and click adventure kid, i'm sure it impresses the 90% tech illiterate fa/g/gots.

quanta lb2 and lb4 usually go for very cheap
these are good, but are server room tier fan noise

>cli
i bet your linux distributions are so common they even have a faggy icon in screenfetch

>gigabit switch
lmao

Install pfSense and enable the "DNS Server" bit.

What the fuck are you retards talking about? Why the fuck do you need more than 1GbE, are you fucking retarded, dumb shitting neckbeard feces-eaters

Isn't that just a caching server like dnsmasq?

who are you trying to impress anyway? yourself, to prove you are worth something, and/or a community where the majority can't even google basic problems?

ITT:
>hurr durr none of us understand the difference between authoritative, caching, and recursive dns

it triggers people when others have nice things on a board full of phone posters and gaymers

...

here are the things that trigger me when i see your screenshots:
>virtualized router for the "extra security"
>microsoft dhcp
>microsoft dns
>microsoft ntp
>microsoft ca
>microsoft exchange
>microsoft sharepoint
>no proper names for vms or dvs
>meming with lfs while using windtoddler for everything else

it's one thing to have enterprise software, it's another to set it up in a way that doesn't look like a hyperactive 12 year old downloaded a bunch of shit off his favorite warez board to impress his other 12 year old school friends with how much of a hacker he is.

show at least something interesting, like the sssd.conf for your linux machines (which you most likely don't have because you are a winblows fag), the kerberos setup, your AD tree, network diagrams, something that doesn't scream "look guys i'm running esxi for all these vms and i can do vlan too"

>"extra security"
when did i ever say that? i have a pair of FTDs because I developed a crack for it you retard. oh noes, let me just spend $15k on a pair of entry level Firepower 2110s and double my electric bill at the same time.

>>microsoft dhcp
>microsoft dns
>>microsoft ca
>microsoft exchange
>microsoft sharepoint
so?

>>microsoft ntp
it's ntpd you tard

>>no proper names for vms or dvs
they have proper names you tard

>>meming with lfs while using windtoddler for everything else
wut

>show at least something interesting, like the sssd.conf for your linux machines
how the fuck is this interesting

>which you most likely don't have because you are a winblows fag
see you pleb

>vlan
>not pvlan
I see you learned baby's first network term

How does a NAS work? Do you just plug it and the PC into the router and that's it?

>Firepower 2110s
>can't even setup a router on openbsd
classic pleb

>it's ntpd you tard
i'm surprised the wintoddler can configure an ntpd, i would have expected it to be too difficult.

>see you pleb
grand, you have posted a screenfetch. how about you post it with cat /etc/sssd/sssd.conf or klist

>vlan
>not pvlan
>I see you learned baby's first network term
what an edge lord you are, holy shit

How often are you transferring fuck huge files locally that you need to spend boat loads of cash on networking equipment to support it?

stay butthurt user that some people have nice things and that my FTDv cluster requires more resources than your shitbox even has

Sysadmin with a real stack of shit here, and I don't even have that much crap to manage with 80 employees or so. What the fuck are you doing in your home that you need this?

>sysadmin
>at a small business with 80 employees
user please, you're a glorified helpdesk guy who sets up printers and fixes people's email problems. i have more VMs running at any given time than the company you work for has people.

i don't need that many resources to run pf, postfix, ntpd, bind, a CA. shit runs just fine on the pi and is performing well enough in my lan, and i can make it HA while being below 10w.

stay edgy, pseudo

>look at how huge my e-penis is
jesus kiddo, you are pathetic

>being below 10w.
enjoy freezing to death now that it is getting cold out

not nearly as pathetic as a helpdesk guy at a small business.

it's mikrotik you dumb american fatass
nonetheless mikrotik switches are fucking shit

I love Sup Forums, this is great.

wew kiddo, your damage control comes off as really desperate.

i'm sure your power consumption graphs will bring all the girls to the yard.

>not nearly as pathetic as a helpdesk guy at a small business.
exactly as pathetic. the way you set up your dvs is an indicator that you have no idea how to do sane networking. the way you name your virtual machines inside your esx environment shows you have no idea what a clean and sane virtual environment should look like.

if i delivered this shit at work like you do, i'd get fired immediately.

>autistic screeching
stay mad user that my homelab cost more than your car

jokes on you, i don't even have a car.

an actual pro tip: best practice is a dedicated vmotion lan per cluster and not globally.

not that user, but i work as a network engineer for an ISP: ~60K domestic users, ~500 IPTV resellers, ~300K set-top boxes, all on our own technology.

company employs 40 people. you sir are an idiot.

>Hey mum, wifi is great isn't it?

Freenas 11
6x 4tb raidz1
2x 240gb SSD mirror
72gb ecc ram
2x xeon x5650
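For anyone wondering what that layout boils down to, the zpool commands would be roughly the following (device names are placeholders; FreeNAS builds the same vdevs through its GUI):

# six 4 TB disks as a single raidz1 vdev for bulk storage
zpool create tank raidz1 da0 da1 da2 da3 da4 da5
# two SSDs mirrored as a separate pool for jails
zpool create jails mirror ada0 ada1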

for what do you need mirrored ssds? what do you possibly run that requires that IO? also, what drives?

also why raidz1, greedy on the space?

Large scale Plex server. Mirrored SSDs for jails (burn that cpu transcoding all that sweet sweet anime)

Z1 cause 4tb more of stuff is worth more to me than complete data integrity when a drive dies

4tb wd red drives, 3 different batches of 2 drives

a bit overkill but i guess that makes sense. i assume you run freenas?

Freenas. running 3-6 simultaneous 1080p streams on average can take its toll

that's why i don't transcode, i used to do that with ps3mediaplayer before i got a nuc and god it was fucking awful. i can enjoy some hi10p stuff on an n3150 without going above 20% on all cores

didn't these things use as much power as a desktop pc running at full load? also enterprise routers are huge and may have loud fans too.

what would a normal user do with these speeds? it's very rare that i even use the 100mbps that my obsolete chink router can do.

any ssd-to-ssd file transfer, or a transfer to a 5-6 drive nas, is pretty much bottlenecked by gigabit ethernet already
many people have internet connections way faster than 100mbit/s
we are on the verge of gigabit ethernet being useless shit for a lot of users
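Rough arithmetic behind that claim (drive figures are ballpark, not measurements from anyone here):

1 GbE    ~125 MB/s raw, more like 110-118 MB/s after protocol overhead
SATA SSD ~500-550 MB/s sequential
NVMe SSD ~2200+ MB/s sequential (the figure from the OP)
10 GbE   ~1250 MB/s raw, still short of a single fast NVMe drive

So one SATA SSD already saturates gigabit several times over, never mind NVMe.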

Hence the 'cheap 10GbE switch'