Nvidia on suicide watch, monster custom APU from AMD and Intel with HBM

newsroom.intel.com/editorials/new-intel-core-processor-combine-high-performance-cpu-discrete-graphics-sleek-thin-devices/

so nvidia can kiss its mobile cards goodbye?

Is Intel trying to build a mobile Hindenburg?
Or did AMD strike gold by building a very quick yet low-power GPU?

It's always nice to see Intel admit their GPUs are worthless.

DELET DIS

AYYMD HOUSEFIRES GARBAGE is never good

Nvidiots can only dream about RX560 performance in an XPS 13

laptopmedia.com/review/asus-rog-strix-gl553ve-review-do-the-extra-features-justify-the-higher-price/


Get rekt AMD drone

>3kg housefire POS
>15"

Video

youtube.com/embed/gaHs_guCp2o?rel=0

Wait? They still get people from 2001 to design laptops? The fuck is that bulky shit?

HOW CAN NVIDIOTS EVER RECOVER??? AMD+INTEL MASTER RACE INCOMING

>AMTEL

Sounds cool.

The market this is targeting (30W+ board power) is 95% owned by Nvidia, and AMD has no mobile Vega chips for this market.

There is no downside to this for AMD; they can only gain market share and recognition by piggybacking off Intel
And Intel can finally have a GPU that's not complete garbage.

Huang probably shitting his leather pants right now

VEGA is not power efficient, so this will not be power efficient like Volta GPUs, which are 50% more power efficient than Pascal GPUs, which are already significantly more power efficient than anything AYYMD HOUSEFIRES have

Only Apple will be dumb enough to ship this HOUSEFIRES because they asked Intel for it

>Or did AMD strike gold by building a very quick yet low-power GPU?

It's Vega. Semi-custom.

...

Ye.
Though Intel can kiss their 15W SKUs goodbye.

AMD will have their own version of this in 2019 or 2020 on 7nm, but in the meantime I guess it's time to take a dump on Nvidia with Intel

Holy shit Jensen I do not envy you.
TCI memory can't come soon enough.

...

...

For compute Vega is more efficient. But the CUs are held back by a poor front end. NGG will help a lot in games thanks to the early culling path that's mentioned in the whitepaper.

Haha, so the collaboration was true. I guess this is mostly for Apple, but other OEMs will use it too according to the press release.

If this is true I guess the TSLA one is on the table too.

JENSEN
AND
HIS
JACKETS
ARE LITERALLY
ON SUICIDE WATCH

I never asked for this

All three CTOs are probably laughing their asses off at nVidia.
Btw Intel should license out EMIB, because their morbid failure of a fab is not fun.

NOOOOOOO. PLZ BUY NVIDIA GUYS. THE WAY ITS MEANT TO BE PLAYED™

If it has HBM then we can expect performance in the RX 480 range. According to the pictures, the Intel die is around 130mm², and the GPU is over twice that size, probably close to 300mm².

Which would give AMD around 2600 Vega shaders?
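A rough way to sanity-check that guess is to scale Vega 10's shader count by die area. Minimal sketch only: the Vega 10 reference figures are real, but linear area scaling is a loose assumption since it ignores fixed-function blocks, memory PHYs and uncore.

# Back-of-envelope sketch: scale shader count linearly with die area.
VEGA10_AREA_MM2 = 486    # approx. Vega 10 die area
VEGA10_SHADERS = 4096    # 64 CUs x 64 ALUs

def estimate_shaders(die_area_mm2: float) -> int:
    """Naive area-proportional estimate, rounded to whole 64-ALU CUs."""
    raw = VEGA10_SHADERS * die_area_mm2 / VEGA10_AREA_MM2
    return round(raw / 64) * 64

print(estimate_shaders(300))  # -> 2560, in the same ballpark as the ~2600 guess above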

>Nvidia tweets about Zen and pascal being the perfect pair
>AMD agrees
>stunning betrayal as AMD joins the Jews to destroy Nvidia

Nvidia doesn't make CPUs (worth a shit), they'll promote builds with both AMD and Intel CPUs when needed.

It's Vega 11, literally half of Vega 10.
It simply uses EMIB, which makes 2.5D integration silly cheap.
Like, it kills GDDR6 on the spot.

Makes sense if it's Vega 11, should be faster than a 580 then

Gotta find my 1.7% bucket.
Yes, provided RTG enables NGG and deferred.

this has the smell of fruit about it desu

sounds like the perfect way to cram non-shit graphics into macbook pros

ok.

By that logic, it's going to kill Ryzen mobile APUs as well.

EMIB is going to be in a higher performance bracket, I suspect, and not really compete with either. It's going to go where you needed discrete options before.

It's not, it's a competitor to the usual CPU + MXM GPU setup.
The design is "Palo Alto" (a reused codename, huh): 24 CUs, a single 800MHz HBM2 stack.
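For scale, a single HBM2 stack at that clock is roughly 200 GB/s peak. Minimal sketch, assuming the standard 1024-bit-per-stack HBM2 interface and double data rate; only the 800MHz figure comes from the post above.

# Peak bandwidth of one HBM2 stack (generic HBM2 assumptions).
BUS_WIDTH_BITS = 1024          # standard HBM2 interface width per stack
MEM_CLOCK_MHZ = 800            # clock quoted above

gbps_per_pin = 2 * MEM_CLOCK_MHZ / 1000           # double data rate -> 1.6 Gbps/pin
peak_gb_s = BUS_WIDTH_BITS * gbps_per_pin / 8     # ~204.8 GB/s
print(f"~{peak_gb_s:.0f} GB/s per stack")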

What's the estimated TDP?

Probably around 60W

28-35W for the CPU (if it's an HQ or whatever model) and whatever for the GPU; it's 1.5K ALUs.

Everyone hates nVidia, but this is absolutely silly.

What's silly?

This is "hell is freezing over" kind of HAPPENING.

>Implying semiconductors scale linearly.

>silly
will be used in tablets (the new "surface" shit for sure) and ultra thin laptops. i believe it won't change the market that much since it's probably more expensive than "discrete components" for anyone (nvidiots) to worry about, but it's an alright idea. make a laptop with this, 8gb of soldered ram and an empty dimm slot for "user upgrade capability" and you got yourself a pretty decent mobile shitposting machine
meh, there will be much more interesting stuff next year :^)

It's out of tablet range, 1.5k ALUs.

Is Intel moving away from monolithic dies entirely?
Is this the beginning of a mix and match in the mainstream taking over? Will EMIB be everywhere?

CES launch

Not EMIB but it is the beginning of a new era.
With AMD's Infinity Fabric and Intel's EMIB you can now have multiple dies on a single package with good performance.

Infinity fabric has nothing to do with it

IF is heavily used in Vega for off-die and internal communication; AMD isn't changing that for Intel

This is the future of computing. Nvidia is done for. No one wants to partner with them.

IF is being used for the memory controller on the Vega GPU.

Now Nvidia needs to go full Volta :)

Why isn't AMD doing this with their own CPUs?

No, no and no.
It uses EMIB to integrate HBM2.
IF in Vega never spills off-die.

AMD doesn't have the money or time to market to get this thing out at the moment.
Maybe on 7nm, when they're not launching chips every 2 months

No it's not. It's used for internal communication between the blocks. And they're certainly not using it between the GPU and CPU here; it's completely unnecessary

Aren't they gonna release HBM APUs sometime late next year?

>tablet range
>1.5k ALUs
what did he mean by this?

No, unless it's some console design which I doubt.
APUs don't need HBM2, the ALU count is too small.

This. Apple wants better graphics, but they hate Nvidia.
They will market it as "same graphics technology that powers our revolutionary iMac Pro, now in a 1cm thin blablabla"

RR in the low end. Intel + Radeon in the high end. Nvidia on the desktop. Nvidia must be worried :)

Why would they include an iGPU on the CPU for something like this? Passthru? Otherwise seems like it would be better to stick in some more efficient CPU without the iGPU taking up extra wattage.

Do you know what power gating is?

...

>now with housefire temps AND finicky gourmet RAM preferences!

Nvidia and Intel hate each other much more than they even care about AMD's existence.

Yes, I know the general details, but can they essentially just "turn off" the iGPU and stop it drawing power at all? I thought there was always a little bit of power sipped during standby. I just don't have a very good view of Intel iGPUs is all.

How much will CPUs benefit from having five times the memory bandwidth?

You do understand that Jensen holds a personal grudge against AMD?

The CPU has no access to the GPU memory pool.

Infinity Fabric and EMIB are two ways to do similar things, but the methods are apples and oranges.
IF is higher level and sits directly on the PCB. EMIB changes how the chip is actually packaged. There's a physical bridge built into the substrate, not on top, and I don't think there's higher logic directing the show outside of point-to-point comms.

IF is a protocol, you moron.
It's transport-layer agnostic and can use fucking mail pigeons.
EMIB is an Si interposer for the poor.

All of these interconnects are just generic off-the-shelf SoC PHY IP that can be licensed from various manufacturers and customized per one's protocols that are in line with a standard...

> design-reuse.com/articles/42985/the-battle-of-data-center-interconnect-fabric.html
> design-reuse.com/articles/10496/a-comparison-of-network-on-chip-and-busses.html
> arteris.com/blog/bid/99797/The-SoC-Interconnect-Fabric-A-Brief-History

There's nothing really special about this stuff. It's bandwidth-specced multi-protocol IP that you can license, customize, and drop onto your SoC or die...
> rambus.com/memory-and-interfaces/serdes/
> rambus.com/memory-and-interfaces/serdes/56g-phy/

This is why AMD, like just about everyone else, can re-purpose segments of it for PCIe 3.0. GPUs work the same way... They have a core interconnect network, an internal submodule network, and a PCIe 3.0 leg... All of it is just multi-protocol PHY that runs at certain speeds. I recall AMD's interconnect runs at about 36Gbps, which is mid-level PHY. For reference, when stacked, there are PHYs that can run at 1Tb/s. Everyone has their own custom protocol (AMD, Nvidia, Intel, ARM) that runs internally... When you connect it to 3rd party modules you convert it to industry standard protocols like PCIe 3.0/4.0... So, either AMD is licensing a portion of the IP to Intel or they're simply giving them a PCIe 3.0/4.0 connect to their GPU IP.
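To make the "it's all just speed-rated PHY" point concrete, a link's usable bandwidth reduces to lanes x per-lane rate x encoding efficiency, whatever protocol rides on top. Quick sketch: the 36Gbps number is the one recalled above, and the PCIe 3.0 figures are the standard 8 GT/s with 128b/130b encoding.

# Aggregate serial-link bandwidth: lanes x per-lane rate x encoding efficiency.
def link_gb_per_s(lanes: int, gbps_per_lane: float, encoding: float = 1.0) -> float:
    """One-direction bandwidth in GB/s."""
    return lanes * gbps_per_lane * encoding / 8

print(link_gb_per_s(16, 8.0, 128 / 130))   # PCIe 3.0 x16: ~15.8 GB/s
print(link_gb_per_s(16, 36.0))             # 16 lanes of a 36 Gbps PHY: ~72 GB/s raw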

Except that's wrong. Slower-clocked GPUs with the same configuration needed as much as >100GB/s. With dual-channel DDR4 and shared bandwidth the iGPU probably gets 1/8 the bandwidth needed
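Rough numbers behind that fraction, as a sketch: DDR4-2400 in dual channel is assumed here, along with an assumed ~150 GB/s appetite for a GPU of this class and the CPU eating about half the system bandwidth.

# Dual-channel DDR4 vs. a discrete-class GPU's bandwidth appetite (illustrative).
CHANNELS = 2
CHANNEL_BYTES = 8              # 64-bit channel
MT_PER_S = 2400e6              # DDR4-2400 (assumed speed grade)
GPU_NEED_GB_S = 150            # assumed appetite for a GPU in this class

ddr4_gb_s = CHANNELS * CHANNEL_BYTES * MT_PER_S / 1e9   # 38.4 GB/s total
igpu_gb_s = ddr4_gb_s / 2                               # assume the CPU takes ~half
print(igpu_gb_s / GPU_NEED_GB_S)                        # ~0.13, i.e. roughly 1/8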

>Intel housefire™ CPU, now with AMD housefire™ GPU!
100W chip confirmed?

Or they are using OPIO link to the GPU.

>joao in complete desperation
LMAO

kek

...

That's like saying samsung or micron has betrayed them for making ram for another company. Unless they're directly competing in products I doubt they care. It's not like nvidia was shilling radeon gpus.

>OPIO link
Possibly. I consider all of these interconnects to be the same fundamental stuff at the end of the day that has so many flavors because companies like to spin/patent their own proprietary junk when not compelled to adhere to generic industry standards/protocols. With respect to infinity fabric, I think people forget this. It's just speed rated phy with some proprietary pixie dust. Getting bits from point A to point B reliably, at low power, at high throughput, and low latency using industry standard multi-protocol phy.

DESU, companies have purposely held back the generic progress this stuff can make because they want to protect their market share and profit margins.
> PCIE 1.0,2.0,[3.0],4.0,5.0
There are 4.0/5.0 products out there but it rides a carefully scripted roadmap so profits can be milked. AMD is going to slowly roll into and milk HSA. Although, functionally it exists today just at a little higher latency.

DEGLUE THIS

Why won't intel also use AMD CPUs then?

Huh, turns out Canard and that stupid fuck Kyle at Hardocp were correct.

They both were merely quoting japs.

Amd should just hyperlinked into Intel CPU desu

How the fuck are they going to cool this? They're going to have to design completely new heatsinks. Will these chips be only for mobile?

bone conductive headphones

This isn't an APU, but rather a dedicated GPU that shares a package with the CPU to reduce space. These will never come to the desktop market as it would be pointless.

>Intel HD and Vega software team will have to work together

kek, i bet the windows default vga drivers are more stable than whatever thing those two teams can shit up

>Vega software team will have to work together
But RTG's software team did not change with Vega.

They are doing this to probe/steal their infinity fabric bs.

>LOL NVIDIA DONE
Isn't it kind of telling that Intel and AMD need to work together to pressure nVidia at all? Oh jeez, nVidia might have to open their vault of money for some R&D!

>might have to open their vault of money for some R&D!
Like, bigger dies?
Oh, tried that, 1.7% yields achieved.

Maybe amd will come up with some groundbreaking new manufacturing process!

kek

Like, multi-chip modules?