How do I create/build a cheap 10GbE switch?

I'd like to build my own switch. Possibly using COTS solutions.
> 4-10x 2.5GbE or 10GbE ports
> 2-4 10GbE ports

Is there any other way to do that than buying the switch?

amazon.com/Linksys-SE2800-8-Port-Gigabit-Ethernet/dp/B004TLIVBG

Why isn't one gigabit enough?

Why not get MikroTik or Ubiquiti?

>Is there any other way to do that than buying the switch?
Linux box + lots of network interfaces.
You can then set up bridging and what have you.

Sounds like an expensive way to solve the problem though.
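
If anyone actually wants to go the Linux-box route, the bridge setup itself is simple. A minimal sketch, assuming a Linux box with iproute2 and root; the interface names and address are placeholders:

```python
# Turn a multi-NIC Linux box into a (software) switch by bridging the ports.
# Run as root. Interface names are placeholders for whatever your NICs are called.
import subprocess

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

NICS = ["enp1s0f0", "enp1s0f1", "enp2s0f0", "enp2s0f1"]  # your 10GbE/2.5GbE ports

run("ip link add name br0 type bridge")
for nic in NICS:
    run(f"ip link set {nic} up")
    run(f"ip link set {nic} master br0")  # enslave each port to the bridge
run("ip link set br0 up")
run("ip addr add 192.168.10.1/24 dev br0")  # optional: give the box itself an address
```

Keep in mind every frame gets forwarded by the CPU through the kernel's software bridge, which is exactly why this doesn't compete with a switch ASIC at 10Gb rates.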

>1Gbit per second = 0.125 gigabytes per second
>SATA capable of 6 gigabytes per second
>Why would anyone want 1.25 gigabytes per second instead of 0.125?

Literally what?

>SATA capable of 6 gigabytes per second
There simply isn't any drive that can reach those speeds.
And if there were, such drives would be enterprise class, in which case there's no need to cheap out on a 10Gb switch anyways.

OP here.
Enterprise tries to be cheap whenever possible. Plus, remember that we are talking about multiple HDDs per server.
So yeah, 10GbE or at least 2.5GbE is a must.

LACP/NIC-teaming

Are you for real?
Why do you think nvme pci express ssd futures are a thing? Now compare that to ssd sata drives speeds, there's a bottleneck right there

>>SATA capable of 6 gigabytes per second
6 gigaBIT/s, not gigabytes.
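
Since the bits-vs-bytes mixup keeps coming up, here are the rough numbers:

```python
# Interface speeds are quoted in gigaBITS per second; divide by 8 for bytes.
def gbit_to_mb(gbit_per_s: float) -> float:
    """Gigabits per second -> megabytes per second (decimal MB)."""
    return gbit_per_s * 1000 / 8

print(gbit_to_mb(1))   # 1 GbE:    125.0 MB/s theoretical, ~115 MB/s after protocol overhead
print(gbit_to_mb(6))   # SATA III: 750.0 MB/s line rate, ~600 MB/s usable after 8b/10b encoding
print(gbit_to_mb(10))  # 10 GbE:   1250.0 MB/s
```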

You don't need 10g so just stop

Can't you just do direct 10GSFP+Cu for the connections where it matters?

Just buy a 10Gbit switch and stop being cheap. Seriously, you can't cheese some hardware into a 10G switch.

Why can't you just buy a normal 10GbE switch from Amazon? Software bridging isn't going to be competitive with ASIC switches in performance or price.

Unless you actually want a 10GbE router?

I think the dude wants 10gbe network but can't afford/doesn't want to pay that much for a 10gbe switch.

OP, yes, a switch is the cheapest way to do this; yeah, they're expensive, and no, you can't cheap out on it.

Do you really need a whole bunch of 10GbE ports, though? You can do point-to-point 10GbE between the machines that have high bandwidth requirements without a switch; it's a bit more fucking around, but if your budget is low, them's the costs.
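
For anyone wondering, the point-to-point setup is nothing exotic: two NICs, a DAC cable, and a static address on each end. A rough sketch of one end, assuming Linux with iproute2; the interface name and addresses are made up:

```python
# No-switch option: desktop and server connected directly with a DAC cable,
# on their own tiny subnet. Run as root; "enp3s0" and 10.10.10.x are placeholders.
import subprocess

def run(cmd: str) -> None:
    subprocess.run(cmd.split(), check=True)

# Desktop end:
run("ip addr add 10.10.10.1/30 dev enp3s0")
run("ip link set enp3s0 up")

# Server end is the same idea with the other address:
#   ip addr add 10.10.10.2/30 dev enp3s0
#   ip link set enp3s0 up

# Then point your SMB/NFS/iSCSI mounts at 10.10.10.2 and let everything
# else keep using the regular gigabit network.
```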

The first question is: do you really need that much bandwidth? It's REALLY hard to use up 10 gigs of bandwidth. If you really did have an application that required that much bandwidth, I suspect budget would not be an issue.

See this guy's post. 10Gig switches need specialized hardware to be able to use all the bandwidth efficiently.

If you slapped a 4-port 10Gig card into a desktop PC, you'd really have to know what you're doing to tune your network stack into giving full bandwidth out of all ports at once.

If you could figure it out on your own, the value of time spent and hardware would exceed the cost of just buying a 10Gig switch.

What is it you're expecting to do?
What is link-aggregation?
10Gb hardware is pretty cost prohibitive unless you have a good reason to need it

Just buy one from Goodwill.

Can you post a link to a good 10gbe switch on amazon? I'm not sure what to look for with all the different sfp, sfp+, 10gigabit cat6 ethernet, etc etc. Help pls

...

>not using 40gbps thunderbolt for your networking
plebs

Somehow OP has the SSD RAIDs and NICs for 10Gbit networking, but can't afford a 10Gbit switch?

You will it into existence.

itt: tards

>You don't need 10g so just stop
>>>>when a single modern hard drive can max out a gbe connection

>It's REALLY hard to use up 10 gigs of bandwidth
Yes, but it's also extremely easy to use up 1gig of bandwidth, hence the need for more bandwidth.
It's also really not even hard to use up 10gbe if you have even just a couple SATA SSDs in RAID 0 or a RAM cache.

This is what I did. Cheap 10gbe nic between desktop and server.

>link aggregation
It's trash. Most implementations hash packets so that a single TCP/UDP connection stays within one NIC, meaning a single connection (such as an iSCSI or SMB connection) will never get more than one NIC's worth of bandwidth. Even the ones that let you override that (like the bonding driver built into Linux) don't guarantee keeping packets in order, so the reordering overhead kills your performance.
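
A toy illustration of that hashing behavior (not any real bond's actual hash algorithm, just the idea):

```python
# Why LACP doesn't speed up one transfer: the bond picks an egress link by
# hashing the flow tuple, so every packet of a single TCP connection lands
# on the same link. Real bonds use layer2 or layer3+4 hashes; crc32 here is
# just a stand-in.
import zlib

NUM_LINKS = 2

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % NUM_LINKS

# One big SMB transfer = one flow = one link, every time:
print(pick_link("192.168.1.10", "192.168.1.20", 50123, 445))

# Many different clients/flows do spread across links, which is what LACP is for:
for port in range(50000, 50008):
    print(pick_link("192.168.1.10", "192.168.1.20", port, 445))
```

Layer3+4 hash policies spread different flows more evenly, but a single flow still maps to a single link; only round-robin style modes stripe one flow across links, and those are the ones that reorder packets.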

500GB SSD: $150
10gbe NIC: $20
10gbit switches: a lot more

>500GB SSD: $150
>10gbe NIC: $20
>10gbit switches: a lot more

Retarded. If all OP has is a desktop and a server with one SSD, then he can just put the damn SSD in his desktop. Or just mirror network drives during downtime if he absolutely needs an SSD in a server.

Otherwise, if he needs a switch, then he has multiple desktops all sporting SSDs because he is somehow video editing on all of them. And if so, he has spent way more than $170 in hardware.

Yes, because there are no other reasons why someone might want to have their storage in a dedicated server, like better filesystems, or being able to back up at off hours without needing to leave the desktop on.

When windows has anything as good as ZFS then that might be a viable option.

>windows desktop

Found your problem.

find a switch with a bunch of SFP+ ports and throw these in as necessary

mikrotik.com/product/s_rj10

>$65 per port
Way easier to just do SFP+ direct connect unless you really need more distance.
You also have to actually find said switch to begin with.

cheap
fast
reliable

>pick 2

Direct connect is both cheaper and more reliable than using a module. What's your point?

Do you really need 10GbE? Shit's fucking expensive.
Just get GbE and some cat6 cables my dog, 125MB/s saturates most hard drives.

>Do you really need 10GbE? Shit's fucking expensive.
It's really not. $40 on ebay gets you 2 NICs and a cable.
>125MB/s saturates most hard drives
Yes, a single hard drive.
Not a RAID.
Not an SSD.
Not a file that's cached in RAM on the server.

gigabit switch with etherchannel

Have you even done any testing to determine that a gigabit NIC is the bottleneck, thus justifying a 10Gig NIC?

You can do all the theoretical math to figure out what bandwidth you would need under ideal situations. However, once you actually implement your setup you may find the real results differ significantly and you invested resources in something of minimal benefit. You really should do testing before purchasing more hardware.

This is coming from a person who has to make/modify/write 10Gig NIC drivers as part of his job. You need to do more testing before you go down this route.
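
If you want an actual number before buying anything, iperf3 between the two boxes is the usual test. A sketch, assuming iperf3 is installed on both ends and `iperf3 -s` is already running on the server; the hostname is a placeholder:

```python
# Measure what the existing link actually delivers before spending money.
# Server side: run "iperf3 -s". Client side:
import subprocess

SERVER = "fileserver.local"  # placeholder for your server's address

# -P 4 opens four parallel streams, -t 30 runs the test for 30 seconds.
subprocess.run(["iperf3", "-c", SERVER, "-P", "4", "-t", "30"], check=True)
```

If that already sits around 940 Mbit/s while your disks can push more, gigabit really is the bottleneck; if it doesn't, fix the gigabit setup before buying 10Gig gear.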

>Have you even done any testing
Yes.
This is WITHOUT SSD caching turned on, and it still blows way past what a 1GbE connection would be able to handle (~115MB/s due to protocol overhead and what not).
Gigabit is garbage for performance-sensitive storage, plain and simple. Unless you're running on horrible hardware where the CPU might be the bottleneck, it's not remotely hard to have gbe be the bottleneck.

That's not a valid test setup. You need to get your actual file server running and do a transfer with the server.

That is the actual file server, and CDM (CrystalDiskMark) uses normal file writes. What's your point?

Well, then you might get a boost in performance under ideal read conditions. I guess if sequential reads are a common thing for you, then spend the money.

Also don't run your cables too far if they are cheap.