I/O table leak of the X390 and X399 Platform, used in AMD's upcoming 16 to apparently 12 core chips for productivity

Apparently it's multi-socket as well.
Shitload of PCIe lanes, some 46.
Quad channel memory.
2x 10GbE!!!! DIRECTLY ON THE CPU, NOT CHIPSET!

2P table

Looks like an EVGA board. Pretty sweet if that's the case.

IPMI on consumer stuff seems pretty weird though.

Why did the designers smarten up enough to use right-angle SATA connectors but not right-angle power connectors?

Because it's pointless.

Okay, now that I got a better look at this.

X399 is the high-end chipset with dual CPU support; X390 is the single-socket version.

X399 gets dual 10GbE directly from the CPUs and 2x 1GbE from the chipset, has 96 PCIe lanes from the CPUs, and 10 PCIe 3.0 lanes from the chipset.

X390 is mostly the same but with one socket, 2x 1GbE, one 10GbE, and half the lanes.
Also an external clock generator for the memory.

Also, from what I know, 32 PCIe lanes are consumed for cross-socket communication.
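
If that 32-lane figure is right, here's a quick sanity check (my own arithmetic, assuming standard PCIe 3.0 signaling; nothing beyond the lane count comes from the leak) on what the cross-socket link would be worth:

```python
# Back-of-envelope for a 32-lane cross-socket link on PCIe 3.0 signaling.
GT_PER_S = 8e9        # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130  # 128b/130b line encoding
LANES = 32            # lanes reportedly consumed by the interconnect

bytes_per_lane = GT_PER_S * ENCODING / 8   # ~0.985 GB/s per direction
link_gb_s = bytes_per_lane * LANES / 1e9

print(f"per lane: {bytes_per_lane / 1e9:.3f} GB/s each way")
print(f"x32 link: {link_gb_s:.1f} GB/s each way")  # ~31.5 GB/s
```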

meaning?
>I'm not completely hardware savvy

So, could this be a 'cheap' processing server
>or cheap db held in ram type server

It means if you're asking this isn't the hardware for you.

No offense, but I won't recommend a Class 8 truck to someone looking for a sedan.

When are the Zen Opterons coming out? I was waiting for them all along anyway. No real reason to upgrade from 8-core Sandy Bridge to 8-core Ryzen.

Q2

>if you don't keep up with every detail and know the comparisons relative to xeon. Then fuck off

Thanks. Great help. Do you work for Amazon?
>I should not buy hardware
>I should give you money instead

I'm not here to help your illiterate ass, consult Google.

Fucking faggots can post all day about phones, chinkpads, gaymen and ricing fagnix but when any interesting topic comes up you don't say shit. Why do I waste my fucking time here

>AMD
>interesting
Minimum lels m8

It's not. It would ease removal/installation, because molex connectors are fucking tight when new, and you wouldn't need to support the board below the connector so it doesn't break/bend.

>Socket AM44

Is that a typo?

The 10GbE is not going through the chipset, but they're still discrete MACs hanging off PCIe links and not on-die integrated like Xeon-Ds.

Why are you acting like this is a big deal?

>single socket LGA 16c/32t platform
>dual socket LGA boards as well

If AMD is pricing Naples around $4000-$5000 then these chips are going to be relatively affordable. Looks like Xeons are going to be replaced in a lot of markets.

AMD hasn't announced any other socket; it might not be a typo.

LGA 2066 is DOA

I hope it's a placeholder name, because it sounds idiotic

>inb4 AMD can't into marketing

They can't, it's true.

First of all, the ATX 24-pin isn't a Molex; secondly, all power connectors come with clips which make removal piss-easy.

(not him)
Smart tho, would be easier to reinforce.
I hate molex connectors

What? Xeon-D's 10GbE goes through the PCIe lanes on the CPU as well, what else is it supposed to go through?

Uh, yes it is.

en.wikipedia.org/wiki/ATX#Power_supply

Oh, oki.

But unlike the old 4-pin Molex, they're piss-easy to remove without breaking the PCB and everything attached to it.

Then I must be doing something wrong, because connecting/disconnecting the 24-pin cable is a fucking ordeal. Thankfully, I never broke or bent a board.

You really gotta make sure the clip is unclipped, then you rock it back and forth. Not too difficult, but the sharp edges aren't exactly comfy.

I dunno, I've dealt with a lot of PSUs and mobos and have had little to no issue removing the 24pin power connector, just press the clip, nudge to the left and right and it practically pops out.

>press the clip
>nudge left and right
>still stuck

Every time

where are you seeing that?
that diagram shows the 10GBASE-KR links hanging directly off the CPUs.

You'll get the feel for it in time, can't really help you there.

>16 TO 12
what?

so 12 cores to 16 cores?

Weren't they making a 32-core Naples, or are X390 and X399 not their server set?

10GbE is going to be fucking amazing

There's two platforms, the 32-core one and the 16-core one, plus however many core-count variants of each they can get.

This is the 16 core platform

oh ok

and the other diagram shows Intel having 10GbE too? I didn't know that. I guess I'm behind, since I only have two 2670s, but w/e

was thinking of going for Ryzen since the 1700X I have is doing great in my workload. Might sell off my old Intel one in a few years, we'll see

getting to the day & age where I don't need a server to render shit anymore though, so might not even do it

Most $500 Xeons support 10GbE as well.

ALL processors support 10GbE as long as you have the PCIe lanes for the external card.
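
For scale, a quick sketch (standard line rates, my own arithmetic; no specific NIC assumed): a 10GbE port fits in a PCIe 3.0 x2 link, so even dual 10GbE is a rounding error against a lane budget this size.

```python
# Rough lane cost of a 10GbE port on PCIe 3.0.
import math

PCIE3_LANE_GB_S = 8e9 * (128 / 130) / 8 / 1e9  # ~0.985 GB/s per lane
PORT_GB_S = 10e9 / 8 / 1e9                     # 10GbE line rate = 1.25 GB/s

lanes = math.ceil(PORT_GB_S / PCIE3_LANE_GB_S)
print(f"one 10GbE port needs x{lanes}")        # x2
print(f"dual 10GbE needs x{2 * lanes}")        # x4
```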

Yeah, but people prefer the NICs hanging off the mobo which don't cost $500 a pop

Those are motherboards using a third-party controller, which is the same as an expansion card; they are not from the PCH itself.

Zen doesn't appear to have on-die anything for Ethernet, so having discrete on-board 10GbE standard is nice but nothing too special.

The only Xeons that do on-die 10GbE (MACs but not PHYs) are Xeon-Ds IIRC, which are single socket only up to 16c.

The fundamental challenge with on-die Ethernet for Zeppelin is that 40GbE is where datacenter stuff is going, but you really couldn't reasonably split a 40GbE link across multiple dies, and having 4 or 8 40GbE links in Naples would be obscene overkill in an ecosystem where almost everything is a 1U server talking to a top-of-rack switch over a single network link.
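
To put rough numbers on that (my own arithmetic from standard line rates; AMD hasn't announced anything like this): eight on-die 40GbE links would be about 40 GB/s of network bandwidth, i.e. roughly 40 PCIe 3.0 lanes' worth, in a box that usually uses one uplink.

```python
# Bandwidth cost of the hypothetical on-die 40GbE scenario above.
PCIE3_LANE_GB_S = 8e9 * (128 / 130) / 8 / 1e9  # ~0.985 GB/s per lane
LINK_40GBE_GB_S = 40e9 / 8 / 1e9               # 5 GB/s per 40GbE link

for links in (1, 4, 8):
    total = links * LINK_40GBE_GB_S
    print(f"{links} x 40GbE = {total:.0f} GB/s "
          f"(~PCIe 3.0 x{total / PCIE3_LANE_GB_S:.0f} worth)")
```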

>IPMI still uses i2c

Isn't there some bus that's not running at 400 kbit/s speeds?

Why do you need 10GbE or 40 when InfiniBand and Omni-Path exist?

IPMI/BMC has only one requirement: work at all times.

i2c is perfect for that, albeit slow.
For fuck's sake, the shit still works over telnet.
It could probably work over fucking gopher if someone bothered to.
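
For a sense of how slow "slow" is (back-of-envelope; the payload size and overhead factor are my assumptions, not IPMI spec values), even Fast-mode I2C handles a sensor poll in a couple of milliseconds:

```python
# Sketch of I2C Fast-mode throughput for BMC-style sensor polls.
I2C_BITS_S = 400_000   # I2C Fast-mode line rate
OVERHEAD = 0.8         # rough allowance for addressing/ACK bits (assumed)

usable_bytes_s = I2C_BITS_S * OVERHEAD / 8     # ~40 KB/s
poll_bytes = 64                                # a batch of readings (assumed)

print(f"usable: ~{usable_bytes_s / 1e3:.0f} KB/s")
print(f"64-byte poll: ~{poll_bytes / usable_bytes_s * 1e3:.1f} ms")
```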

Will it support current AM4, or is it AM4+ or some other socket?

Question, what's the big difference?
It's still pulling bandwidth off the CPU pcie lanes, not the chipset PCIe lanes, nor from some PLX switch, how much slower can these 10gbe links be?

>will a CPU that's an MCM with twice the cores support X consumer socket

Lol, no.
First of all, wasn't this supposed to be LGA?

Naples/Snowy Owl/whatever will use LGA sockets and have four/eight memory channels, so no

You haven't seen ULTRA HYPER LOW MIRACLE LATENCY until you've strapped a NIC to the die, mate.

It's nothing folks! Just a rumor.

Go back to your work.

Fuck, do those use TSVs to connect them? Imagine the fucking latency, literally L3-tier, 20ns+ roundtrip, whoa it's like some Matrix shit dude, it can practically load the 700 trackers found on CNN in under 2 minutes!

Damn, now I want this chip.
Was considering the next HEDT from Intel, but if it comes in at $1000 or less with decent memory and frequency, I'm gonna get it.

Why do you want this? It's more expensive and runs games slower than Intel

Maybe he doesn't care about games? Just a guess.

Lol, let's be real dude

I don't think so. Might be an early mockup

videocardz.com/67594/rumor-amd-x390-and-x399-chipsets-diagrams-leaked

>that spic troll

This fucking retard is everywhere. He's ruining RWT's forums and is now doing the same to S|A and VideoCardz

>firewire
>TPM

Interedasting

Please use words I can understand.
Is Intel finished and bankrupt?
Are Intel fags on suicide watch?

They have been since Ryzen launched, nothing new in that department.

N-no, Intel is fine!

Yea, I have one PCIe card for them, but I like it built in, as he said. I never see any boards with it.

Do we have any info on the release date?

Is this really competing with Xeon-D? The Ds are clocked very low.

Ryzen has lower IPC, so it makes sense it's clocked higher to compete. The saving graces of Ryzen are SMT and the fact that it can actually maintain 6 instructions per clock, something Intel has trouble with (and theirs is 8-wide)

It's more about power than latency, though there is a tiny sliver of a benefit that some HFT folks could care about if only the processors weren't comparatively bargain-bin tier.

Also, I don't know where you're drawing the assumption that an SoC peripheral necessarily has to hang off an on-die PCIe interface.

That's a whole layer of buffering, flow control, etc. that's entirely unnecessary.

I've not seen an architectural block diagram for a Xeon-D on that level, but I'd be astonished if the MACs weren't more tightly integrated just to save die space and power.

This is usually a product of one of three things:
A) The retaining clip is too stiff/hook is too long
B) The socket hook is too long
C) A+B = I hope you have strong hands

Depending on the hardware, I've had it do everything from pop out with a gentle squeeze to needing sufficient force to produce diamonds.