What feelings does looking at this picture inspire in you?

Boner. But what's this?

servethehome.com/second-amd-naples-server-pictures-platform-block-diagram-112-pcie-lanes/

>tfw no 2tb ram


why live

Man that's a fuckton of RAM and I/O.

Indifference?

I/O is clearly overkill outside of niche applications, not that it hurts to have more, but holy shit, 16 DIMMs per socket and that much memory bandwidth is crazy.

RAGE.
WHERE THE FUCK ARE MY ARM CHIPS

>I/O is clearly overkill
ML market says hello.

Well, I guess there are those that stick 6 GPUs on a 2P board, guess that will be nice.

It's fucking noisy.

>Well, I guess there are those that stick 6 GPUs on a 2P board, guess that will be nice.
And the ML market is growing, and AMD also offers Radeon Instinct cards for GPU acceleration.

lust

comfiness

Fear

depression

Nice to see you too, Brian.

How does cooling work on these kinds of server boards? I can only see passive heatsinks in the pic. Do they have fans in the case, or how does it work?

*notices ur unpopulated riser slots*

OWO what's this???

there's usually a bunch of fans up front (or behind the hdd bays i guess)

everything beyond them can get away with passive heatsinks because air is already being forced through

Why is this setup not more common in hobbyist PC builds? Shouldn't be that hard to sell a case with a bunch of case fans up front.

It's extremely loud.

Envy. I have a laptop with 50% of the keys broken, 2GB RAM, a 1.5GHz processor, no graphics card. Shitttttt.

for you

this

also this
servers have to be space-efficient, and the usual 1U and 2U rack servers have small fans (compared to the usual desktop ones) running at high RPM, because they're intended for datacenters where no one is around 99% of the time

I guess you could build a desktop with nothing but three 14cm fans in the front and passive heatsinks, but I think you'd need much higher air pressure for it to be effective, so it would be louder.

1U racks are so sexy.
There's something about ultra dense hardware that just brings out the little girl in me.

yeah, but that's a 2U

1U is pic related
look at those little fans. designed to kill your hearing for a week

How are GPUs horizontally mounted?

Risers

...

Is there a latency hit to PCIe when using riser cards?

The urge to set up Einstein@Home BOINC workers on this machine.

None, they're passive and usually have all 16 lanes electrically connected.

Some ribbon risers, though, have fewer lanes available.
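If anyone wants to check what their risers actually negotiate, here's a rough sketch (my own, not from the thread; it assumes a Linux box and just reads the standard sysfs PCI attributes) that prints the link width per GPU, so a ribbon riser quietly dropping you to x8 or x4 shows up right away:

```python
#!/usr/bin/env python3
# Rough sketch: print the negotiated PCIe link width/speed of each GPU from
# Linux sysfs, to sanity-check that a riser isn't silently dropping lanes.
import glob
import os

def read_attr(dev, attr):
    """Return a sysfs attribute as a string, or 'n/a' if it isn't exposed."""
    try:
        with open(os.path.join(dev, attr)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    # PCI class 0x03xxxx = display controller, i.e. a GPU
    if not read_attr(dev, "class").startswith("0x03"):
        continue
    print(f"{os.path.basename(dev)}: "
          f"x{read_attr(dev, 'current_link_width')} of "
          f"x{read_attr(dev, 'max_link_width')}, "
          f"{read_attr(dev, 'current_link_speed')}")
```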

what's the appeal of free work? You can mow my lawn if you want, for free.

If you have a top boinc server you can gloat at how rich you are.

Mowing your lawn makes you look like a poor nigger.

These two are not the same thing.

Could someone please explain how GPUs are used outside of gaming? If it's pure number crunching, I could understand why scientists and engineers would use them on their workstations. But why would something like this require two GPUs?

Like Krzanich on suicide watch.

number crunching

GPUs can do nothing but number crunching, rendering, or hardware virtualization.
It's one of those three.

number crunching in the cloud billed by the hour
or render farms for movie studios and shit

noise aside, that kind of cooling relies on tight clearances to work well

only way you'd get that in some normie case is for the case to come with an appropriately-sized heatsink

Sometimes you want to crunch a lot of numbers, beyond what a gaymen rig or workstation could do in a reasonable time

You can do something similar, if you pick components carefully. They sell 4U rack cases that take standard components that work like this, though they usually have an exhaust fan because there's space for it.

If the case provides (or you're willing to make) ductwork that fits with your components it makes this work a lot better.

FAP FAP FAP ....

>4U racks

Oh come on those are fucking huge.

might as well buy an enclosure and blades

Disgust at endless shilling.

This invokes the feelings of muh dick when I think in 2 years I'll be able to get a 100GB system with 32 cores for under $1500

What's the difference between blades and 1U racks?

Shared hardware leading to higher density.
Each blade has its own CPU, memory and disks but power supply, cooling and networking are shared for the whole enclosure.
Management of the whole thing is also a bit easier.

GPUs are insanely good at simple tasks that can be heavily parallelized. If you have a task that fits these criteria, then graphics cards are far more time- and cost-effective than processors.
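To make "simple but heavily parallelized" concrete, here's a rough sketch (my own example, assuming a CUDA-capable card with CuPy installed, not anything from the thread) of the same elementwise math done once on the CPU with NumPy and once on the GPU:

```python
# Rough sketch of "simple but massively parallel" work: identical elementwise
# math with NumPy on the CPU and CuPy on the GPU.
# Assumes a CUDA-capable GPU and CuPy installed (e.g. pip install cupy-cuda12x).
import time
import numpy as np
import cupy as cp

n = 50_000_000
x_cpu = np.random.rand(n).astype(np.float32)

t0 = time.perf_counter()
y_cpu = np.sqrt(x_cpu) * np.sin(x_cpu) + x_cpu ** 2
t_cpu = time.perf_counter() - t0

x_gpu = cp.asarray(x_cpu)               # copy the data onto the card
t0 = time.perf_counter()
y_gpu = cp.sqrt(x_gpu) * cp.sin(x_gpu) + x_gpu ** 2
cp.cuda.Stream.null.synchronize()       # wait for the GPU kernels to finish
t_gpu = time.perf_counter() - t0

print(f"CPU: {t_cpu:.3f}s  GPU: {t_gpu:.3f}s")
print("results match:", np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5))
```

Every one of those 50 million elements is independent, which is exactly the kind of work a GPU chews through in one go; the catch is the copy across PCIe, which is why the lane count people are arguing about above actually matters.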

IIRC blades have slots that are fixed in size while racks have rails that can fit various sizes. Also, they tend to all be stand-alone while racks have components that depend on each other.

>AMD

Into the trash

the last thing I want is a fire in my server oven

Blades don't have their own PSU

>AMD

Literal garbage

gb2/v/

...

The rear of the thing.

>blades have slots that are fixed in size
Not true. They are reconfigurable to a degree, and you can have different-sized blades all in one enclosure.
Some of the Itanium blades are the size of four normal (i.e. half-height) blades.

deep desire to own a massively powerful computation server despite not having very much use for it other than boasting about gentoo compilation times on Sup Forums

Blades are the latest hip thing for microservers.

>1TiB RAM
>64/128 core/threads
>10TiB NVMe SSD
>compile kernel and install gentoo in seconds
>compile all qt shit, softwares and bla bla bla
>all in a matter of minutes

fear.

Fucking boss went with AMD in the company blade system years ago, and now we're doomed to stick with shit processors because you can't vMotion between Intel and AMD
And of fucking course now I'm the lone administrator who has to deal with this shit because everyone else quit or got laid off
RHEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Go cry to your boss to buy Intel INTEL INTEL

that's a "problem" with vmware, not hardware

I put the quotes there because I'm not sure if there is any industry-standard hypervisor that can do live migration between different CPUs

>when your precious shittel gets demolished by AMD's first new server architecture in 6 years even when limiting themselves to Intel's paltry amount of cores and memory speeds

M-M-MUH RING BUUS!

People like you are the reason I'll never buy AMD. Performance be fucked.

Delicious.