What do you think of 10Gbps ethernet...

What do you think of 10Gbps ethernet? And is it a viable option for a small business which serves video files over gigabit and saturates it to the max?

Also why does it need a fan?


>pushing 10 billion bits per second
>asking why does it need a fan

This thread is a testament to how retarded OP is

I am looking at deploying it for a small install at work. The wiring looks complex. The cable has a shield, and it looks like termination needs to be perfect. It may be copper, but it is not like 1GbE where you can slap Cat5e in and be golden.

A 16 port switch (netgear) is ~$1300 iirc.

If you need the bandwidth, you need it. It still looks simpler and cheaper than fiber.

Motherboards don't need fans either and they push a lot more bits.

Not over 100 meters.

I thought it works with regular CAT6 (or 5?)

run fibre instead

10Gbps ethernet is great over SFP direct connect and fiber optic. The RJ45 flavor is not catching on.

10GBASE-T has much higher latency than either 10GBASE-CR or even 1000BASE-T.

cat6 is only for shorter distances, and after the run is tested to 500MHz

"much" - meh. It can be higher than 1000BASE-T.

It may not matter. it depends on what is going over the link.

Calm your tits, neckbeards.

How short does it have to be?

Will 50ft work?

there is a "google"

category 6 may reach a distance of 55 metres (180 ft) depending on the quality of installation, determined only after re-testing to 500 MHz.

I'd just be paranoid about a questionable link. even a few dropped packets kill performance.

It averages about 300 msec on 10GBASE-T
So...

Just use IEEE 802.11ad newfag.

>Just use a shittier version of 802.11ac that won't even hit ~700Mbps if you're in another room

I think your number should be about 3us.

300ms? pfff. may as well be talking about a modem.

>implying OP doesn't work in a horror funhouse with mirrors on all walls to bounce the signal off
Faggot

What am I doing wrong?
It's a direct connection between two Intel X540-T2s I've bought off of ebay
They hit 890MB/s easily with a three-way SSD RAID0 on one machine and a MyDigital NVMe SSD on the other end, so I know they're 10Gig capable.

I have no idea. 300ms is your ping time?

Something has to be odd about your measurement technique, 890MB/sec is good.

Maybe you need a 6a cable? Maybe it's a power management thing putting the card to sleep unless it is being hammered?

Total guesses.

>Doesn't understand Wireless is shit and there's huge overhead and data fluctuation.

Have you ever done wired networking faggot? It's god-tier.

>Maybe you need a 6a cable?
I've used both a Cat6A cable I've tested before and a new patch Cat6A cable I purchased by accident. Same results.
Also
>2017
>not wiring your home with Cat6A instead of Cat5E

None of the ones I have at work have dedicated fans like that, but I can see why some manufacturers might do that.

I'd go for 10Gb SFP+, and not 10GBASE-T (which goes over violet Cat6) like you pictured. Latency is horrible with 10GBASE-T.

You're going to be paying more for your switches, but that's work, so should be no big deal.

If you're really that network constrained, keep your copper gig network for management, and the 10G for data.

Forgot to mention, sunny spend a whole lot more getting dual port NICs. Bonding won't help you that much.

>Also why does it need a fan?

10GBASE-T needs basically black magic to work over UTP, in the form of LDPC/Gallager codes, which are basically the patent-free alternative to Turbo Codes: very low error-correction overhead at the expense of very high decoder computational complexity.

Each 10GBT receiver chews down ~5W, which is a lot more than 10GbE optical or shielded twinax ("direct attached copper") connections use. You won't generally see fans on cards with SFP+ (10 GbE) or even QSFP (40GbE) ports.
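To make the decoder-complexity point concrete, here's a toy bit-flipping decoder over a tiny made-up parity-check matrix. It's a miniature sketch of the iterative decoding idea only; the real 802.3an LDPC code is a (2048,1723) code and the matrix and codeword below are invented for the demo.

```python
# Toy bit-flipping decoder for a tiny parity-check code.
# H and the codeword are made up for illustration; real 10GBASE-T
# LDPC decoding works on vastly larger matrices every frame,
# which is where the ~5W per receiver goes.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def decode(received, h, max_iters=10):
    bits = list(received)
    for _ in range(max_iters):
        # Syndrome: which parity checks are currently violated?
        unsatisfied = [sum(hij * b for hij, b in zip(row, bits)) % 2
                       for row in h]
        if not any(unsatisfied):
            return bits  # all checks pass: decoding done
        # Flip the bit that participates in the most violated checks
        votes = [sum(u for row, u in zip(h, unsatisfied) if row[i])
                 for i in range(len(bits))]
        bits[max(range(len(bits)), key=votes.__getitem__)] ^= 1
    return bits

codeword = [1, 1, 0, 0, 1, 1]   # satisfies H * c = 0 (mod 2)
noisy = [1, 1, 1, 0, 1, 1]      # bit 2 flipped in transit
print(decode(noisy, H))          # -> [1, 1, 0, 0, 1, 1]
```

Even this toy loops over the whole matrix per iteration; scale that to thousands of bits per frame at 10Gbps and the heat (and the fan) makes sense.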

you two are actually the retards here.

Ducking autocorrect. It always changes my donts to sunnnys for some reason.

Wow. I didn't know most of that. Thanks! I just knew 10GBaseT sucked by comparison.

>at the expense of very high decoder computational complexity
Hmm, does this explain what I was experiencing with my connection?

Even if you were running the connection over wet string, you shouldn't have been seeing anything like 300 ms pings.

Look into your driver and OS settings.

If you can look at port error stats and see like millions of corrupt packet receives, that could be something, but I wouldn't expect it.

just aggregate 2 cards... you can use longer runs and your rates are less impacted by emi.

Or just buy an STP

SFP* ?

No, Shielded Twisted Pair

I like it, but I got my switch from a government garbage sale and it's 8 years old.

It does do LACP for my server tho.

Aren't 10G cards and switches fucked expensive? Why not just aggregate a few gigabit cards for better performance? I don't understand..

Link aggregation does not increase transmission speed. You're stuck with a ~100MB/s connection no matter how many cables you pair. If either end or the Layer 3 device doesn't support aggregation, then you're stuck with 100MB/s speeds and a second connection that can only act as a fail-over.

That is not true, the Linux bonding driver supports balance-alb, which does tx and rx load balancing by manipulating the peers' ARP replies. It's less efficient than 802.3ad.

And what if the switch or router you have in the network does not support that specific load balancing? Or, more likely, won't support it despite the manual saying it does?

then it doesn't matter, because it's the client that will switch around the MAC table and distribute the connections over the trunk.

en.wikipedia.org/wiki/Link_aggregation#Linux_bonding_driver
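This is the crux of why bonding disappoints for single big transfers: LACP-style setups hash flow identifiers to pick a slave link, so one flow always lands on one physical link. A rough sketch (the `pick_slave` function is made up, mimicking a layer3+4 xmit hash policy in spirit only):

```python
# Sketch of why one TCP flow can't exceed one link's speed under
# 802.3ad-style aggregation: the flow's identifiers hash to a single
# slave port, and every packet of that flow takes the same link.
# pick_slave is an invented stand-in for the real xmit hash.

def pick_slave(src_ip, dst_ip, src_port, dst_port, n_links):
    flow_key = hash((src_ip, dst_ip, src_port, dst_port))
    return flow_key % n_links

# One big file transfer = one flow = one link, no matter how many links:
link_a = pick_slave("10.0.0.2", "10.0.0.7", 49152, 445, n_links=4)
link_b = pick_slave("10.0.0.2", "10.0.0.7", 49152, 445, n_links=4)
assert link_a == link_b  # same flow always hashes to the same link

# Many clients hit different links, so aggregate throughput scales:
links = {pick_slave(f"10.0.0.{i}", "10.0.0.7", 50000 + i, 445, 4)
         for i in range(2, 30)}
print(sorted(links))  # usually spread across several of the 4 links
```

So a file server with many clients benefits from bonding; one workstation pulling one huge file does not.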

Huh, that's pretty neat. In either case, I'm still iffy on aggregating ports since you can still run into issues on either end of the connection. Plus, you'll need to look after two to four cables rather than just one, which presents all sorts of challenges if you're wiring a house or office building. 10Gb is going to be the mainstay for most SOHO users within the next five years, even if 802.3bz catches on, since it's essentially based on the 10GBASE-T protocol.

Over 9000Mbps Ethernet!!!!!!

In an office environment, if you need 10G, you get 10G cards. And the appropriately shielded cable should be run. For home use, aggregation is a cheaper option.

I think I might be off-base asking this, but wouldn't all this be bottlenecked by the HDD speeds when you're doing any data transfer? Is there any real benefit to it when you're already reaching capped HDD speed? Wouldn't your network need to be running off of mainly high-end SSDs, etc. to get any benefit?

I'm ignorant about the benefits, if anyone could get into whether or not that's the case I'd appreciate it.

Depends. If you have a 4-disk 1+0 server, you'll reach 250 MB/s easily. Not to mention you can run two servers, both in RAID0, with GlusterFS on them; that'll yield you 500 MB/s, or 4 Gbps. In that case, you may consider going from 1 Gbps cards to 10 Gbps, but you'll need to upgrade your entire infrastructure: NICs in the PCs, a 10 Gbps-capable router, cables, etc.
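The arithmetic above is just MB/s to Gbps conversion; a quick sanity check (per-disk figure of 125 MB/s is an assumed round number for a spinning disk):

```python
# Back-of-envelope: does the storage back end saturate the NIC?
# MB/s * 8 / 1000 = Gbps. The per-disk speed here is an assumption.

def storage_gbps(per_disk_mb_s, stripes):
    return per_disk_mb_s * stripes * 8 / 1000.0

# 4-disk RAID 1+0 = 2 stripes at ~125 MB/s each
print(storage_gbps(125, 2))   # -> 2.0 Gbps: already past gigabit
# Two RAID0 servers behind GlusterFS = 4 stripes
print(storage_gbps(125, 4))   # -> 4.0 Gbps: 1GbE is the bottleneck
```

So even plain spinning disks outrun gigabit once you stripe; SSDs aren't required for 10G to pay off on the network side.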

Why not run CAT7 instead?

youtube.com/watch?v=1aVHLL5egRY

Intel death throes

Using it at work. The bandwidth is pure sex.

We have a bunch of workstations with centralized storage and one can't tell network shares from internal storage.

The new 5Gbps Ethernet standard (802.3bz) is going to go after the SMB market.

It is a nice bump over 1Gbps Ethernet but can still run on UTP (Cat5e/Cat6) without an issue. You don't need to retrofit everything with more cumbersome and expensive SFP/optical cabling.

We use it for SAN. It's glorious.

>How short does it have to be?
>
>Will 50ft work?
Think more in terms of in the same server room. Use fiber for longer links.

the rest of your hardware doesn't deliver fast enough.

>Aren't 10G cards and switches fucked expensive?
Netgear produces some relatively cheap devices.
I bought two Prosafe XS708Ev2 for about 550 euros each.

I just read a magazine article about 10GbE.
Optical is cheaper and more efficient than copper.
It's in French, but do you want a scan?

Yes, but keep in mind that it consumes more CPU time than 1 Gbit/s.