Naples

LOOK

AT

ALL

DOZE

DIMMS


ITS OVER INTEL IS FINISHED AMD WON THE DATABOWL

That's a fucking LOT of memory.

inb4 180W lelelel typical AMD xD

8 DDR4 channels per socket is nice, I just wish there was a fucking workstation platform AMD was selling now that supported >64GB total capacity.

16c MCMs or just buffered ECC support, I don't even give a fuck at this point.

16 per socket, the new Skylake-EX have 12 per socket.

>all that unused pcb space
disgusting

Wat. It's 180W for 32/64 while Intel has 28/56 with 200W.
>he
>doesn't
>know

...

Is it still 4 channels with 3DPC, or is it 6 channels * 2DPC?

>m-muh optane

>platform AMD was selling now
>selling now
>now

please enlighten me.

The latter.

>has IPMI
>has Infiniband
>has ethernet
>has serial or VGA

Nothing else needed, enjoy your needless board power from your 15W chipset

if so, could they squeeze in 6ch*3DPC for 36 slot configs then?

Just wait™.

On 2 socket? Doubt it.
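
Napkin math on the slot counts being thrown around, assuming slots = channels * DIMMs-per-channel * sockets (the per-platform channel counts are just what's claimed in this thread, not confirmed specs):

[code]
# Slot count = memory channels * DIMMs per channel (DPC) * sockets.
# Channel/DPC figures below are as discussed in the thread, not confirmed specs.
def dimm_slots(channels: int, dpc: int, sockets: int = 2) -> int:
    return channels * dpc * sockets

print("Naples 8ch x 2DPC:", dimm_slots(8, 2))        # 32 slots, matches the pictured board
print("Skylake 6ch x 2DPC:", dimm_slots(6, 2))       # 24 slots
print("Hypothetical 6ch x 3DPC:", dimm_slots(6, 3))  # 36 slots, the config asked about above
[/code]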

How much memory does it support?

512gb or 1tb?

Would there be any gains to filling all the slots vs just 1 or 2 with the same total memory? Assuming the same specs on each DIMM and all that.

4tb for 2P system AFAIK.

what the fuck are you talking about, idiot.
I'm talking about UNUSED PCB SPACE. For what fucking purpose did they make this PCB so big if 1/3 of it is unused?

Are you retarded?

Max memory support is 5TB IIRC.

Not that you'll ever get that even with 128GB DIMMs.
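
Just going by the physical slots, here's what different stick sizes cap out at (nothing official, pure slot * DIMM arithmetic):

[code]
# Slot-limited capacity = slots * DIMM size. Illustrative only; platform limits
# (per-socket addressing caps, supported DIMM types) can be lower than this.
SLOTS_2P = 32  # 16 per socket * 2 sockets, as counted on the pictured board

for dimm_gb in (16, 32, 64, 128):
    total_tb = SLOTS_2P * dimm_gb / 1024
    print(f"{dimm_gb:>3} GB DIMMs -> {total_tb:.1f} TB across {SLOTS_2P} slots")
# 128 GB sticks top 32 slots out at exactly 4 TB.
[/code]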

Have you considered that PCIe cards extend beyond their slot length, and computers need air flow to cool them?

For starters it's an engineering board, secondly the entire width of the PCB is taken by sockets and DIMMs.

Yeah, each Ryzen die has 2 DDR4 channels, so you get 8 channels on a 4-die Naples chip.

You won't get full bandwidth without at least half the slots populated.

I counted 32 DIMMs but I could be wrong. Fuck, that would hold 512 GB of RAM using 16 GB sticks.
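
Sanity check on that count, using the 4-die / 2-channels-per-die layout described above (the DDR4 speed is an assumption on my part):

[code]
# Channels, slots and rough peak bandwidth for a 2P Naples board.
DIES_PER_SOCKET = 4
CHANNELS_PER_DIE = 2
DPC = 2            # assumed from the 16-slots-per-socket count
SOCKETS = 2
DIMM_GB = 16

channels = DIES_PER_SOCKET * CHANNELS_PER_DIE      # 8 per socket
slots = channels * DPC * SOCKETS                   # 32 total
print(slots, "slots ->", slots * DIMM_GB, "GB with 16 GB sticks")

# Peak bandwidth estimate: 8 bytes wide * assumed DDR4-2666 transfer rate, per channel.
gb_per_s_per_channel = 8 * 2666 / 1000             # ~21.3 GB/s
print(f"~{channels * gb_per_s_per_channel:.0f} GB/s peak per socket "
      f"(needs at least one DIMM in every channel, i.e. half the slots filled)")
[/code]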

GAME OVER
A
M
E
O
V
E
R

Oy housefires.

"I've never seen a server and am also retarded" the post

Like the good old times.

>these are turbo frequencies

Might as well use 128gb sticks for 4TB of RAM

Literally nobody buys 128GB DIMMs, at least not anybody outside of Amazon/Google/MS

Most enterprise customers are satisfied with 32GB DIMMs

>Single Socket Holocaust
LMAO
Hyper-pipeline technology

Imagine having an entire large production database loaded in RAM.
The future is now!

...

>that AMD SMT performance on 32 cores

Holy moly

AMD needs to fix its enterprise-level support and warranty department first before they grab hold of the server market again.

Those huge cloud providers like AWS, Google Cloud, Azure, etc. build out their own servers and buy components in bulk, because it's cheaper and they can also easily design them around their facilities' custom cooling systems. Dealing with AMD for RMA is a nightmare at the moment; Intel will literally just ship out another CPU a few hours after you report it.

Even conventional server builders like Dell, Supermicro, HP, etc. will probably be hesitant to sell AMD servers on a large scale because of the same RMA delays.

It's more than just performance. Intel's enterprise/datacenter-level account managers make stuff happen for these providers at massive scale.

That's the one!

It's not about raw capacity, it's about capacity/$ and bandwidth.

32GB ECC DIMMs ~= $300, 64GB ~= $900, 128GB = you can't afford it

So you can now get 1TB RAM in one box for ~$10k vs. ~$14k in a Broadwell Xeon, plus twice the bandwidth. New Xeons with 6 channels each will sit somewhere in between.
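
Rough math behind those figures, using the ballpark prices above (street prices, so treat as illustrative):

[code]
# 1 TB-per-box cost comparison using the ballpark DIMM prices quoted above.
PRICE_USD = {32: 300, 64: 900}   # per ECC DIMM, rough street prices

naples_slots = 8 * 2 * 2                   # 8ch * 2DPC * 2P = 32 slots
naples_cost = naples_slots * PRICE_USD[32] # 32 x 32 GB = 1 TB

broadwell_slots = 4 * 3 * 2                # 4ch * 3DPC * 2P = 24 slots
broadwell_cost = 16 * PRICE_USD[64]        # 16 x 64 GB = 1 TB, fits in those 24 slots

print(f"Naples, 1 TB as 32x32 GB:    ${naples_cost}")                            # $9,600
print(f"Broadwell, 1 TB as 16x64 GB in {broadwell_slots} slots: ${broadwell_cost}")  # $14,400
[/code]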

I think AMD is aware of that more than you.
If they're selling a competitive HPC chip, they won't sell it without good support. Hopefully someone as smart as Lisa Su understands that, since she did kinda work in that field at IBM.

You're not wrong, but I wouldn't expect a private citizen to purchase that much RAM. I was thinking enterprise.

From what I've been hearing, not all Skylake Xeons are 6-channel (I assume only the 18-28c ones are), while it's 100% guaranteed that every 4-die MCM Naples will be 8-channel.

Nice pasta Schlomo.
I've seen it several times already.

>180w
Mom basement right

128 threads, I could serve a decently running VM to 32 users with this.
I'd pay out of the ass for switches though.
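
Back-of-envelope on splitting that up evenly (per-user sizes are just my guess at "decent"):

[code]
# Crude even split of a 2P Naples box across 32 user VMs. Purely illustrative,
# not a real hypervisor config; ignores overcommit and host overhead.
THREADS = 128   # 2 sockets * 32c/64t
RAM_GB = 512    # 32 x 16 GB DIMMs, as discussed above
USERS = 32

print(THREADS // USERS, "threads and", RAM_GB // USERS, "GB RAM per user VM")  # 4 / 16
[/code]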

Why is high grade networking hardware so fucking expensive compared to 6 billion+ transistor ICs?

1. Because jews
2. Because networking hardware runs ultra low latency ASICs on real time OSes
3. Lower demand
4. See jews

because consumers don't need 10+ GbE so volumes won't be high, and enterprises both need it and can pay out the nose for it.

there does appear to be some sort of nigger-tier 2.5 and 5 GbE over UTP standard emerging though. presumably doesn't need the LDPC coding with the 3-5W decoders.

Sparse number of CPU VRM phases for 180W chips, even at stock.

HOW CAN RAMLETS EVEN COMPETE?

You're just looking at quantity; some power stages are rated at 20A and higher, which is more than enough for a 180W IC even in low numbers.

And from what I know this is a validation board, so there's not gonna be strict power circuitry on it
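
Quick current estimate to put numbers on that (core voltage is assumed, and real boards derate the stages):

[code]
import math

# Package current ~= TDP / core voltage. 1.2 V Vcore is an assumption for the estimate.
TDP_W = 180
VCORE_V = 1.2
PHASE_RATING_A = 20              # the "20A and higher" stages mentioned above

current_a = TDP_W / VCORE_V      # ~150 A
phases = math.ceil(current_a / PHASE_RATING_A)
print(f"~{current_a:.0f} A at the package -> ~{phases} phases of {PHASE_RATING_A} A each")
[/code]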

>stuck one of these in a G40
>runs for 5 minutes before the power draw and heat forces a shutdown
GOD TIER

>For starters it's an engineering board
only valid point.
kys

32GB DIMMs are the current size/$ sweet spot for anybody above baseline consumer level.

I'm looking forward to a 16c32t workstation Ryzen I can throw 4*32GB into and leave room for an upgrade to 256 GB total in the future if I want.

While correct, AMD doesn't need to take over the market, they just need to crack it open to start becoming profitable again.

When they start becoming profitable, they can hire back all sorts of the people that were culled, just for the enterprise service side alone.

If AMD takes over just 5% of the data center market, that's several billion dollars for them to reinvest and keep going forward.

buy an LB6M bro.
buy Mellanox ConnectX-2's off ebay for 15 a pop, and 5 meter SFP+ cables for 15 as well.
enjoy 10gig networking for up to 26 appliances for 200 + 30 bucks per appliance.

its amazing.
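
The math on that, with those ebay prices (they drift, so ballpark only):

[code]
# Used-10GbE cost per the approximate ebay prices quoted above.
SWITCH_USD = 200     # used LB6M, as suggested above
NIC_USD = 15         # Mellanox ConnectX-2 per host
CABLE_USD = 15       # 5 m SFP+ DAC per host

def total_cost(hosts: int) -> int:
    return SWITCH_USD + hosts * (NIC_USD + CABLE_USD)

print(total_cost(26), "USD for 26 hosts ->", round(total_cost(26) / 26, 2), "USD per host all-in")
# -> 980 USD total, ~37.69 per host
[/code]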

God bless ebay prices on used server hardware

Can you connect an infiniband port to a 10gbit sfp port?

A lot of Mellanox equipment (most of their cards, not sure how many of the switches) can talk either ethernet or IB.

In general, no, though.
Also not sure how many Mellanox cards are SFP+ (not SFP) vs. QSFP. Infiniband has been at least "40Gb" (actually 32 you filthy desert kikes, you're not allowed to count coding 8b10b overhead) for quite a while now. Older 10Gb (8Gb) IB might be over the old CX connectors only.
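
For anyone wondering where 32 comes from, it's the 8b/10b coding overhead (lane count and signaling rate below are the usual QDR-era figures, stated from memory):

[code]
# "40Gb" InfiniBand: 4 lanes * 10 Gb/s line rate, but 8b/10b means only 8 of
# every 10 line bits carry data.
LANES = 4
LINE_RATE_GBPS = 10
DATA_RATIO = 8 / 10

print(LANES * LINE_RATE_GBPS * DATA_RATIO, "Gb/s of actual data")   # 32.0
[/code]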

Can I play gayms on It?

...

>I think AMD is aware of that more than you.

Clearly not, considering their market share.