And they talked about 10nm FPGAs

Lawdy lawl.

semiaccurate.com/2017/12/19/intels-claims-fpgas-hbm-dont-hold-water/

>With that hint dangled in front of us, we started talking to process folk at a big unnamed semiconductor company in Santa Clara. They confirmed that Intel’s 14nm process does not seem to play nice with analog bits, something SemiAccurate has heard for years now. This would explain the lack of high end SerDes on the highest end Intel/Altera FPGA die itself.

>The first problem for Intel is the process that this interface chip is made on, 20nm, not the cutting edge 14nm Intel is touting in their press releases. Worse yet, it is made on TSMC’s 20nm process because their analog IP works. Some sources indicate this may be the SerDes block from the Arria 10 built on the same process with an EMIB interface grafted on. In any case it would be extremely embarrassing for Intel if news got out that their 14nm process is pretty much broken for analog designs and that they had to turn to a -2 TSMC process to bail them out.


Analog bits?

memes are real

DACs etc
altera.com/products/boards_and_kits/dev-kits/altera/max-10-fpga-development-kit.html

See? The money they threw at lobbying after they cancelled the national science competition was well spent.

Xilinx makes better fpgas anyway. I'm hoping we see an open fpga appear at some point but for now why screw around with altera when you can get a zynq board for cheap? I haven't tried their high end rf fpgas yet but I hear those are pimp.

>TFW all my Quartus software is intel-branded now

What the fuck has happened to Intel.

>the tweet is real
what a ridiculous coincidence

holy shit, some other fpga guys here. Isn't the article a bit off when it says:
>>The SerDes in the Stratix 10 MX does not appear to be able to support HBM so Intel has made a workaround.

I thought serdes were only for fast serial connections, not something you'd use for a memory data bus.

Also, who cares if Intel's 14nm may not be great for analog? Different processes for different things. If Intel's new packaging technology lets them mix and match dies cheaply and reliably with good perf, good for them.

altera was better and still is, it might change now that intel got their jew-hands on them.

we meme'd it into reality

>serdes are only for fast serial connections, not what would be for a memory data bus.
They are not using a parallel interconnect bus on an interposer for HBM; EMIB is their approach to MCMs instead.

linleygroup.com/newsletters/newsletter_detail.php?num=5607

>The SiP can integrate up to 20 FPGA die with Intel’s Embedded Multi-die Interconnect Bridge (EMIB), which the x86 giant initially employed to link 28Gbps serdes in a 20nm process with the 14nm HyperFlex fabric

3dincites.com/2016/03/the-hbm-supply-chain-is-open-for-business/

>To enable fast data xfer across “long” runs on cheap boards HMC chose to use SERDES ( Serializer / Deserializer ). But the HBM uses short parallel interconnect bus on the more expensive Si interposer. The former is good for servers ( intel ), line cards in network drivers etc. but the latter is better when modules must be used to save space etc. ( as in the AMD game system)

Guess intel doesn't want to bother with interposers.
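For anyone wondering what a serdes actually does, here's a toy sketch in Python. Widths are made up; this is just the concept, not anyone's actual IP:

```python
# Toy SerDes: turn a wide parallel word into a serial bitstream and back.
# Width is illustrative only, not any vendor's actual interface.

def serialize(word: int, width: int) -> list[int]:
    """Shift a `width`-bit parallel word out LSB-first as single bits."""
    return [(word >> i) & 1 for i in range(width)]

def deserialize(bits: list[int]) -> int:
    """Reassemble the serial bitstream into the original parallel word."""
    word = 0
    for i, b in enumerate(bits):
        word |= b << i
    return word

word = 0xDEADBEEF
bits = serialize(word, 32)
assert deserialize(bits) == word  # round-trips back to the original word
```

Point being the tradeoff from the quote above: serial means a few wires running very fast (fine over "long" cheap-PCB runs, like HMC), parallel means ~1024 wires per HBM stack running slower, which is only practical over the very short traces an interposer or EMIB bridge gives you.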

>Also who cares if Intel's 14nm may not be great for analog. different processes for different things. If Intel's new packaging technology lets them mix and match dies cheaply and reliably with good perf, good for them

Thing is, they belittled glue logic even though they're transitioning to it themselves. Pure self-mockery for the sake of a PR stunt. Not to mention calling Zeppelin a desktop die is like calling Skylake-X a desktop die.

>Not to mention calling Zeppelin a desktop die is like calling Skylake-X a desktop die.
>Intel called Xeon-D knockoff a desktop die
Never did I laugh so much in my life.

>Guess intel doesn't want to bother with interposers.

I read on another site that one problem with interposers is the semiconductor reticle limit. So basically, instead of having a giant interposer die big enough to bond down your main die with the memory sitting beside it, intel just has these tiny dies tucked under the edge of the main chip that hook it up to the neighboring die, and it's all glued down to the package substrate (FR4?). That would be important for a huge fpga, I guess, because those things get pretty huge at the top end.

>I read on another site that one problem with interposers is the semiconductor reticle limit.
TSMC has already fabbed ~1200mm^2 CoWoS for V100 using multi-exposure.

Thanks for those links. Yeah the SA article is off.

I can't wait for that HBM to come to the few hundred$ fpgas I work with.

That EMIB is sick. I wonder what their expected use case is. If they devote that much silicon area to that one piece of functionality they must have demand for something in particular.

One of the nvidia engineers at the Hot Chips conference said their automation wasn't good enough to detect microcracks on the P100 interposer so they had to eyeball it. Not sure if things improved for V100 on the interposer tech.

>their
I don't remember nVidia owning any fabs.
And I don't remember any TSMC partner talking about problems with CoWoS.

Don't TSMC and nvidia have a special relationship, 12nm FFN tailored for them? I don't know if the engineer was talking about fab side or their side.

>Don't TSMC and nvidia have a special relationship, 12nm FFN tailored for them?
12FFN sounds like nVidia buying early wafers for 12FFC.
Only SoC vendors like Apple have "special" relationships with TSMC.
Besides, CoWoS is TSMC/ASE-standard.
It's not exclusive to nVidia.

Look at this QT.
1.2TB/s of BW.
God, I love HBM.
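Napkin math on where a number like 1.2TB/s comes from. Stack count and data rate here are my assumptions (4 HBM2 stacks at 2.4 GT/s), not from any datasheet:

```python
# Rough HBM aggregate-bandwidth arithmetic. Stack count and per-pin data
# rate are assumed values, not from a vendor datasheet.
stacks = 4
bus_width_bits = 1024      # per HBM stack, per the JEDEC-style wide interface
data_rate_gtps = 2.4       # transfers per second per pin, in billions

bytes_per_sec = stacks * (bus_width_bits / 8) * data_rate_gtps * 1e9
print(bytes_per_sec / 1e12)  # ~1.23 TB/s
```

That's why you need the huge parallel bus: each stack alone pushes ~300GB/s at a per-pin rate a cheap PCB trace could never carry across 1024 wires.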

This is the talk I'm referring to.
youtu.be/egE-VKoF4ZI?t=2900

I'll be the first to admit that 1200mm^2 is pretty damn big, but it is tiny compared to what could be fabricated if you can figure out how to assemble all the pieces.

I mean think about gluing four of the large Xeon SP dies together with a ring of HBM around that. 1200mm^2 just won't do that.
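Quick sketch of that napkin math. All areas here are rough assumptions on my part (big Xeon SP die ~700mm^2, HBM stack footprint ~100mm^2), not measured die sizes:

```python
# Napkin math: reticle limit vs. a hypothetical 4-Xeon + HBM package.
# All areas are rough assumptions, not measured die sizes.
reticle_mm2 = 26 * 33          # standard single-exposure reticle: 858 mm^2
xeon_die_mm2 = 700             # assumed area of a large Xeon SP die
hbm_stack_mm2 = 100            # assumed footprint of one HBM stack

package_mm2 = 4 * xeon_die_mm2 + 8 * hbm_stack_mm2   # 3600 mm^2
print(package_mm2 / reticle_mm2)  # ~4.2x a single reticle
```

Even TSMC's ~1200mm^2 multi-exposure CoWoS interposer covers maybe a third of that, which is exactly the case where EMIB's tiny bridge dies sidestep the reticle limit entirely.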

Ah, well, you don't need specialized inspection tools to test a simple slab of TSV-etched Si.
Though I'd prefer OSATs to push non-TSV packaging: SLIM/SLIT.
EMIB looks good, yes, but, BUT, ICF is such a mess I'd rather not believe in magick.

>don't talk to me or my wife's son.jpg

hotchips.org/wp-content/uploads/hc_archives/hc27/HC27.25-Tuesday-Epub/HC27.25.40-FPGAs-Epub/HC27.25.420-Stratx-1G-Hutton-Altera.pdf

FWIW, the Hot Chips conference slides. Maybe it makes a lot of sense: Intel knows PCIe, and it looks like they talk that across the EMIB to the SerDes (slide 4).

> Intel knows PCI-e and it looks like they talk that across the EMIB to the Serdes

Interesting that there's no mention of it in this year's conference. They mention UIB and AIB, which are presumably the interface chip Charlie talks about. No doubt they are trying to hide it as much as they can.

hotchips.org/wp-content/uploads/hc_archives/hc29/HC29.22-Tuesday-Pub/HC29.22.50-FPGA-Pub/HC29.22.523-Hetro-Mod-Platform-Shumanrayev-Intel-Final.pdf

(also, this year's videos are out: hotchips.org/archives/2010s/hc29/)