For anyone still in doubt Zeppelin does indeed support 10G Base

There's also some neat stuff in the programmer reference manual for Zen.

support.amd.com/TechDocs/54945_PPR_Family_17h_Models_00h-0Fh.pdf

Warning: Can cause autistic screeching

It's 10GBASE-KR, which is only used in modular backplane datacenter systems.

I don't see XAUI or XGMII

Give it to me straight. Just how doomed is Intel?

Fuck off

what is this

Somewhat relevant


Zen HEDT CPUs are called Threadripper (good job, marketing..)
Each CPU will include 64 PCIe lanes!
It includes 4 CCXs.
Lower SKU (probably 12/24) 140W TDP, higher SKU (probably 16/32) 180W TDP.
Socket will be an SP3 LGA
Platform's name will probably be X399
Chips will be B2 revisions.
32MB L3 cache
ES's are 3.3 or 3.4 GHz base and 3.7 GHz boost
Retail SKUs are aimed at 3.6 GHz base / 4 GHz boost
ES's that are in the wild score ~2500 in CB R15.
Infinity Fabric can have a bandwidth up to 100GB/s

Mind telling us the difference?

Can I run my R5 on this X399 motherboard?

Unless your particular R5 has 3500 LGA pins, that's a no.

lame

>Infinity Fabric can have a bandwidth up to 100GB/s
Will actually end up faster in gaming due to the higher TDP (more than 2-core XFR) and the much faster data fabric; it probably scales very well with higher bandwidth.

Double clocked IF was inevitable for MCM parts, or else >2 mem channel parts would have been completely neutered.

I honestly wouldn't be surprised if Zen2/3 parts had even double again (~100/200 GB/s) bw.

>Chips will be B2 revisions.
This is probably the holdup for the release; it will allow them to keep the clocks the same but have the IF clocked higher.

IF is super low clocked (256b @ ~0.9-1.6 GHz) in customer parts. Running it at 1.8-3.2 GHz is not the holdup.

If anything AMD wants to boost perf/W through process tweaks so the HEDT/Naples parts look more competitive against SL-E/X.
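
Rough math on those fabric numbers, using the 256b width quoted above (a sketch; the per-SKU clocks are the thread's guesses, not confirmed figures):

```python
# Peak one-way fabric bandwidth = width (bits) * clock (GHz) / 8 bits per byte.
def fabric_bw_gbs(width_bits: int, clock_ghz: float) -> float:
    return width_bits * clock_ghz / 8

for clk in (0.9, 1.6, 1.8, 3.2):
    print(f"256b @ {clk} GHz -> {fabric_bw_gbs(256, clk):5.1f} GB/s per direction")
# 256b @ 1.6 GHz is ~51 GB/s, so "up to 100 GB/s" only lines up if you count both
# directions or roughly double the fabric clock, which is the point being made above.
```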

10G Base-KR describes a differential pair PHY that's designed for chip-to-chip communication on PCBs. For instance, it can be used to talk to different blades in a modular system. You can't connectorize it and hook up cables since it's only good for 1-meter distances, so you're not gonna have LAN parties with this.

If you want to hook up ethernet (RJ45 twisted pair, optical, etc), you'll need to connect to a separate PHY chip using the aforementioned SGMII, but you'll only get 1gig out of that.
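
The 1gig ceiling falls straight out of the coding: SGMII is a 1.25 GBd serial lane with 8b/10b, while the 10GBASE-R family (KR/SR/LR) runs 10.3125 GBd with 64b/66b. Quick sketch of the payload math (standard line rates, nothing Zeppelin-specific):

```python
# Payload rate = line rate * coding efficiency (data bits per line bits).
def payload_gbps(line_rate_gbd: float, data_bits: int, line_bits: int) -> float:
    return line_rate_gbd * data_bits / line_bits

print("SGMII (1.25 GBd, 8b/10b):", payload_gbps(1.25, 8, 10), "Gb/s")                # 1.0
print("10GBASE-KR (10.3125 GBd, 64b/66b):", payload_gbps(10.3125, 64, 66), "Gb/s")   # 10.0
```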

What does a 10GbE MAC usually use to talk to an SFP+ port?

probably says hello and asks how your day was

The 10GbE MAC signals are called XAUI. The SFP signals are called SFI.

TI makes a chip ( TLK10232 ) that converts the XAUI MAC to SFI in SFP+ ports, for example.
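
A bridge like that is mostly a (de)serializer, since both sides carry the same 10 Gb/s of payload; XAUI just spreads it over four slower 8b/10b lanes while SFI is a single 64b/66b lane. Sanity check from the standard line rates (not from the TLK10232 datasheet):

```python
xaui = 4 * 3.125 * 8 / 10     # 4 lanes @ 3.125 GBd, 8b/10b coded
sfi = 1 * 10.3125 * 64 / 66   # 1 lane @ 10.3125 GBd, 64b/66b coded
print(xaui, sfi)              # 10.0 10.0
```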

> bumping for interest

Who cares? 10 gig NICs are cheap.

>inb4 switches are expensive
You can buy 40 port Cisco Nexus 5020s on eBay for under $400. And the 2000 series fabric extenders for 1GbE ports are like $50 each.

Is that somewhat of an oversimplification?

I know the 10GbE standards are a mess, but AFAIK 10GBASE-KR's whole deal is that it uses 10GBASE-LR/ER/SR's physical layer coding, so I thought that was to make connections/conversions to XFP/SFP+ transceivers simpler.

Intel is fucked and finished!

sauce on any and all of this?

Holy Shit.
This really is a Xeon murdering machine, from the very top all the way down.

32 of the 34 10-12 Gb/s PHYs can be multiplexed between any combination of 24 PCIe lanes, 8 SATA channels, 2 inter-socket links of 8 lanes each, and 4 Ethernet links up to 10GBASE-KR.

Literally zero need for a chipset unless you want even more USB or SATA.

This kills the Xeon-D.
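
To make the "no chipset needed" point concrete, here's a toy budget check using the lane counts from this post (the 32-lane pool and per-function caps come from the post above, not verified against the PPR):

```python
# Hypothetical check that a board configuration fits Zeppelin's muxable SerDes budget.
MUXABLE_LANES = 32
LIMITS = {"pcie": 24, "sata": 8, "xgmi": 16, "ethernet": 4}

def fits(config: dict) -> bool:
    over_cap = any(config.get(k, 0) > cap for k, cap in LIMITS.items())
    return not over_cap and sum(config.values()) <= MUXABLE_LANES

# Chipset-less storage box: 24 PCIe + 4 SATA + 2x 10GBASE-KR = 30 lanes -> fits.
print(fits({"pcie": 24, "sata": 4, "ethernet": 2}))  # True
# Everything maxed at once: 24 + 8 + 4 = 36 lanes -> doesn't fit.
print(fits({"pcie": 24, "sata": 8, "ethernet": 4}))  # False
```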

>a 140 watt - 180 watt chip
>kills 20 - 45 watt chips
no shit

>SATA
no one cares, just about anyone using this will be using SAS disks unless it is something like a SATA DOM as a boot disk.

> 140-180W chip

You missed the point here.
A -single- Zen die is a lot more versatile as a SOC than current Xeon-D chips.
The only question is power draw when underclocked/undervolted into sub-2GHz territory.

Nice catch, OP.

Some observations:
- We can see what the "extra" 2 PHY lanes on the die (bottom left-ish center) are for. I'm not familiar with "WAFL" in this context though.
- 2x(4+4+4+2+2) PHY lane pattern in die shots matches actual design.
- 4 dedicated GMI (Global Memory Interconnect) links, each with split 2*128b PHYs? They should be at the die periphery and be drastically smaller per lane than the E12G PHYs, but it's hard to pin them down. Maybe bottom left-center, bottom right-center, and the top right-center, but where's the 4th? If each GMI link operates at full Data Fabric bw (up to ~)
- xGMI is apparently the nomenclature for "external"(?) GMI links over the generalized PHYs
- 2 Unified Memory Controllers connected independently to the Data Fabric seems wasteful of crossbar ports at first but probably helps keep mem write congestion in the controllers and out of the fabric. Also no 3DPC confirmed.
- No mention in the PDF about the AES-128 scrambling for the UMCs.
- Mention of 2 "heavy" and 2 "lite" Ethernet controllers without any further explanation, but apparently all 4 can be 10GBASE-KR; oddly, there's no support for ganging lanes as 40GBASE-KR4.
- Surprised there are only 8 SATA 3.0 ports supported on the 32 total PHY lanes, since there was an easy opportunity for on-die support of HDD arrays drastically bigger than Xeon-D's 6.

Overall, it's nice to see that the ~55% of Zeppelin that's not cores+L3 was put to decent use.

>If each GMI link operates at full Data Fabric bw (up to ~)

whoops. meant to say,

If each GMI link operates at full Data Fabric bw (up to ~100 GB/s = 256b/clock at full DDR4 controller rate), 2 links between a pair of dies is huge overkill, and the 4th link seems odd in particular for 4-die MCMs.

Could this be for a 4*Zeppelin + 1*Vega11 APU, or 3*Zeppelin + 1*Vega11 APU with 2x GMI links per Zeppelin?
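
Back-of-the-envelope on the "huge overkill" part, comparing one ~100 GB/s GMI link against what a die's two DDR4 channels can actually move (the DDR4 speeds are just common examples, not a spec):

```python
# Peak dual-channel DDR4 bandwidth: transfers/s * 8 bytes per channel * channels.
def ddr4_bw_gbs(mt_per_s: int, channels: int = 2, bytes_per_transfer: int = 8) -> float:
    return mt_per_s * bytes_per_transfer * channels / 1000

print("Dual-channel DDR4-2667:", ddr4_bw_gbs(2667), "GB/s")  # ~42.7
print("Dual-channel DDR4-3200:", ddr4_bw_gbs(3200), "GB/s")  # ~51.2
# A single ~100 GB/s link already covers everything two DDR4 channels can source,
# so a second link per die pair mostly buys redundancy and latency headroom.
```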

Can you get out Photoshop/GIMP and actually mark those PHYs and stuff you're talking about?

can you handle some hot mspaint action?

Know what the SRAM blocks above the CCXs are? Right below the "maybe GMI??" in green at the top, and left of the DDR4 PHYs.

Yeah, I haven't pinned that down yet.

The AES encryption on the UMCs needs buffers for pending reads (CTR-mode XOR masks generated on the fly) and writes (the plaintext while XOR masks are generated in the AES pipelines), but those seem awfully big.

I expect them to be specialized buffers for the controllers hanging off the IO Hub, since you'd want to keep tons of small reads and writes from clogging the DDR4 channels, which are optimized for sequential data bursts. Could reasonably be anything from IOMMU mappings to Ethernet frame buffers and socket mappings.
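
For anyone wondering why CTR mode needs those pending-read/write buffers at all: the XOR mask depends only on the key and a counter (which can be derived from the address), so it can be generated while the data is still in flight and applied with one XOR when it lands. A minimal sketch using the `cryptography` package; the mask-per-block scheme here is invented for illustration, not how the UMC actually derives its counters:

```python
# pip install cryptography
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes(16)  # stand-in 128-bit key; the real one is generated on-die

def ctr_mask(address: int, length: int) -> bytes:
    """Keystream for a memory block, with the counter seeded by its address (illustrative)."""
    nonce = address.to_bytes(16, "big")
    enc = Cipher(algorithms.AES(KEY), modes.CTR(nonce)).encryptor()
    return enc.update(bytes(length))  # AES-CTR over zeros == the raw keystream

def xor(data: bytes, mask: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, mask))

line = b"cache line payload goes here...!"  # 32-byte stand-in for a cache line
mask = ctr_mask(0xDEAD0000, len(line))      # can be computed before the data arrives
assert xor(xor(line, mask), mask) == line   # encrypt and decrypt are the same XOR
```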

Very interesting thread thank you anons.

You're probably not gonna know unless AMD tells you or you pay someone like Chipworks.

What's the sauce boss?

>Announcement: COMPUTEX in Taiwan, sales will start 2-3 weeks after COMPUTEX
Dohoho. Is this for real? That's when Intel is supposed to officially announce X299/Skylake-X. Availability even sooner than Skylake-X? That would be hilarious.

And for comparison, the similarly sized Polaris 10.

If you can read a spec sheet and know how to count, it should be pretty easy to identify the 4*9 CUs, 16 PCIe lanes, 256b GDDR5 lanes, 4*4 RBEs, and 6 display controllers. The center column and center side blocks aren't as clear to me, but it's potentially geometry on the sides and likely mostly GCP/ACEs/HWSs in the middle.

Zen looks way less dense to me, with all that blue in between that looks like it has nothing.
It's probably interconnect or something, I don't know.

probably various controllers for the IO just wedged in wherever

bump

A Zen CPU contains memory controllers, the northbridge, and SATA and USB controllers.

And a GPU contains its own, far wider memory controllers (128b on Zen vs 256-512b on GPUs), and its own I/O, usually HDMI with a bunch of DVI outs.
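
In bandwidth terms the width difference is stark; a rough comparison (the memory speeds here are typical examples, e.g. Polaris 10's 8 Gbps GDDR5 against dual-channel DDR4-2667 on a Zeppelin die):

```python
# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8.
def mem_bw_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

print("Zeppelin, 128b DDR4-2667:", mem_bw_gbs(128, 2.667), "GB/s")       # ~42.7
print("Polaris 10, 256b GDDR5 @ 8 Gbps:", mem_bw_gbs(256, 8.0), "GB/s")  # 256.0
print("Big GPU, 512b GDDR5 @ 8 Gbps:", mem_bw_gbs(512, 8.0), "GB/s")     # 512.0
```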