HBMeme BTFO again

wccftech.com/micron-gddr6-memory-qualification-complete-mass-production-1h-2018/


>still larger die and board area taken up
>still more power used
>only very slightly faster now
HBM2 was only a failure because of Hynix being a piece of shit and failing to meet clock targets.

'muh games

HBM is expensive shit

HBM2 is an expensive, overrated, poor-yielding meme that can't keep up with the giant volumes of mass-market consumer cards

Nvidia can happily use GDDR6 because their GPUs are extremely power efficient, while AYYMD HOUSEFIRES can't because it would eat into their power budget, which is already bursting at the seams even with HBM2

Oh shit what have you done? Pajeet defense force incoming.

That was when only Samsung had early production for AMD. Since then both companies have ramped up to full production, so you can expect something like 4-8x greater output.

4 stacks of HBM2 at rated speeds should still be faster
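Back-of-the-envelope numbers, assuming rated per-pin speeds (2 Gbps/pin HBM2, 16 Gbps GDDR6 on a hypothetical 384-bit card, 11 Gbps GDDR5X on a 352-bit 1080 Ti-style bus; the snippet is just illustrative arithmetic):

# peak bandwidth in GB/s = (bus width in bits * per-pin rate in Gbps) / 8
def peak_gb_per_s(bus_width_bits, pin_rate_gbps):
    return bus_width_bits * pin_rate_gbps / 8

print(peak_gb_per_s(4 * 1024, 2.0))   # 4 HBM2 stacks, 1024-bit each -> 1024.0 GB/s
print(peak_gb_per_s(384, 16.0))       # hypothetical 384-bit GDDR6 card -> 768.0 GB/s
print(peak_gb_per_s(352, 11.0))       # 352-bit GDDR5X (1080 Ti style) -> 484.0 GB/s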

isn't the main issue with hbm2 the shit latency, which makes it terrible for the games consumer gpus are targeted at?

HBM was developed under Joe Macri, a white man. GDDR6 was developed by Koreans. Your pick m8

the day your shit company goes bankrupt is the day i hope the shackles of ignorance are finally lifted from you stupid drones.

>isn't the main issue with hbm2 the shit latency, which makes it terrible for the games consumer gpus are targeted at?
HBM2 in 2-stack to 4-stack configs is faster than GDDR5, GDDR5X, and even GDDR6, which isn't even out yet. HBM2's problem has been the same as HBM1's: yields and failure to hit the targeted clock rate, because Hynix can't into process. Its price is only high due to the low yields, though with both Samsung and Hynix pumping out HBM2 the yields have improved and the cost has gone down.

Outside of that, the only other costly thing about using HBM is the interposer. The interposer on Vega costs more than the two HBM2 chips on it.

That will be great for making GPUs on their side cheaper.

But at the same time it means you can't pack both the memory and the VRMs closer to the GPU itself, and that makes things less efficient by adding latency and power leakage.

What I'm saying is pretty much useless for 99% of people, though; it's just that HBM helped with overclocking without that ever being a design goal to begin with.

For example, look at the pic: all those delicious industrial-quality VRMs arranged in phases, sitting comfy and snug all around the GPU.
Sadly we will never see this on the average GTX card, since it makes the price climb faster than a horny monkey on fire.

Even if they wanted to die, they couldn't. They are the only competition for Nvidia and Intel. The govt would just step in and force them back to life because "no monopolies".

Sadly, the cancer will make HBM win in the end.
Apple will start to push for it hard because it will allow them to do even thinner shit, and the rest of the industry will follow, even if it is a bad idea.

Also, that's why Nvidia has only been using it on their professional cards while keeping shit GDDR on their shit mainstream cards. The benefits of HBM on shit mainstream cards aren't worth the price, as the GPU is usually the bottleneck before the RAM. They're better off upping core clock speeds and using good-enough memory to keep much higher profit margins.

AMD, on the other hand, had to use HBM2 to help reduce power consumption and help their overall performance to give them the nudge they needed. An example is the Vega 56 vs the 1070: it's generally considered that a reference Vega 56 at $399 MSRP is about as fast as a beefy overclocked 1070 that would cost in the $450-$480 range. If AMD had used, say, high-clocked GDDR5X, it would most likely be another 480-vs-1060 situation: as fast as a stock 1070 or slower. With GDDR5X, power consumption would have gone up and the overall performance of Vega would have been reduced. If AMD had been able to reach Pascal-like clock speeds with Vega, as in the 1.8-2 GHz range, they could have gotten away with GDDR5(X) from a performance perspective, but they still would have needed HBM2 for the power saving. So either way they needed HBM2.
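Putting rough numbers on that value argument, using the prices quoted above and the (assumed) premise from this post that the two cards perform about the same:

# perf per dollar, treating the two cards as roughly equal performers (an assumption from the post above)
def perf_per_dollar(relative_perf, price_usd):
    return relative_perf / price_usd

vega56_ref = perf_per_dollar(1.0, 399)               # reference Vega 56 at MSRP
oc_1070    = perf_per_dollar(1.0, (450 + 480) / 2)   # overclocked 1070, midpoint of the $450-480 range

print(vega56_ref / oc_1070)   # ~1.17 -> roughly 17% more performance per dollar at those prices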

But HBM is a fantastic idea, the issue is the implementation which in this case means the manufacturing process.

>The govt would just step in and force them back to life because "No monopolies"
Yeah, no. The government in the late 90s - early 2000s couldn't even break up Microsoft, and under the current administration, let alone the current Congress, there's no way in hell they'd go after Intel or "help" AMD. Even if the Democrats take back control of Congress, and retaking both houses is highly doubtful, they still wouldn't do shit, as AMD has zero friends while Intel has A LOT of friends in Congress. And A LOT of money to "donate" to their "non-profits" and campaigns.

>it's generally considered that a reference Vega 56 at $399 MSRP is about as fast as a beefy overclocked 1070 that would cost in the $450-$480 range. If AMD had used, say, high-clocked GDDR5X, it would most likely be another 480-vs-1060 situation: as fast as a stock 1070 or slower.

Nah, it would actually be fine. Vega isn't bandwidth starved at all and even if it were they could just turn on DSBR to save a ton of bandwidth for a small performance penalty that would still put it above the 1070.

>Apple will start to push for it hard because that will allow em to do even thinner shit
What does that even mean?

>Vega isn't bandwidth starved at all
Yeah, no. You can easily net a 5-10% increase in performance just from overclocking the HBM2 from 800 MHz to 900-1000 MHz alone.

...

>Here’s a look at FireStrike 1080 graphics score increments as we overclock the card. Stock, we’re at 18816 points, for an AVG FPS of 90 in GT1. Increasing power target by 50% boosts us to 21188 – no other changes – and our power consumption goes from about 196W to about 300W at the PCIe cables. We’ll talk about power more in a moment. Anyway, that’s a gain of 12.6% performance from the power target offset. That’s not linear to all games, of course, but is significant here.
>If we overclock HBM2 and offset the power target, we end up about 3.6% boosted at 950MHz over just the power target offset. That’s not a bad gain from HBM2 only. Overclocking to 980MHz HBM2 and with a 10% offset on core – because manual input didn’t hold – boosts us 6.4% over the power offset with no HBM2 overclock, or nearly 19.6% over stock. The memory overclock and power offset alone get us to 19.2% over stock, and the power offset gets us 12.6% over stock.
>The take-away here is that HBM2 overclocking and the power offset are far more important than core overclocking on this very limited version of Vega. Core overclocking isn’t worth it for most users, though we can recommend considering the power offset approach.
>>The take-away here is that HBM2 overclocking and the power offset are far more important than core overclocking

Just think, being able to make a system where EVERYTHING is integrated on the same die.
Where the PCB is literally just the chip and the components that feed power to it.
That would give Apple a lot of headroom.

>intel is shipping FPGA with HBM
>nvidia is using HBM
HBM is the future.

Only on ultra highend GPU

Which isn't from bandwidth, because devs have gone on record saying that DSBR, which cuts bandwidth use by as much as 30% (i.e. roughly 43% more effective bandwidth), shows negligible improvements. HBM overclocks showing improvements are coming from latency or Infinity Fabric fuckery. In any case it's a small improvement purely from the fact that it's driving data faster overall, not because a bottleneck is being resolved. Even if you have a bottleneck somewhere, reducing the total time to render can improve performance; it just means improvements elsewhere produce diminishing returns, which is exactly what's happening with memory overclocks.

Your own graph shows a 5% improvement going from 800 MHz to 950 MHz. That's nearly 20% higher memory clocks yielding a measly 5% improvement. It's exactly the scenario I described: diminishing returns. Raising the power target gives the most improvement because Vega at stock runs at about 1300 MHz, which is exactly what mine does, but runs ~1450 MHz with +50% power. As you can see on your chart, that's a near 1:1 improvement: a 13% improvement in performance from a 12-13% improvement in clocks.
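To make the diminishing-returns argument concrete, here is the same arithmetic using the figures quoted above (the 1300/1450 MHz core clocks are the rough values from this post, not measured data):

print(1 / (1 - 0.30))                            # DSBR cutting bandwidth use by 30% ~= 1.43x, i.e. ~43% more effective bandwidth

mem_clock_gain  = (950 - 800) / 800 * 100        # ~18.8% higher HBM2 clock
mem_perf_gain   = 3.6                            # % gain over the power offset alone (from the review quote)
core_clock_gain = (1450 - 1300) / 1300 * 100     # ~11.5% higher core clock with +50% power
core_perf_gain  = 12.6                           # % gain from the power target offset

print(mem_perf_gain / mem_clock_gain)    # ~0.19 -> heavy diminishing returns on memory
print(core_perf_gain / core_clock_gain)  # ~1.09 -> roughly 1:1 scaling on core clock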

>>The take-away here is that HBM2 overclocking and the power offset are far more important than core overclocking on this very limited version of Vega. Core overclocking isn’t worth it for most users, though we can recommend considering the power offset approach.
>>>The take-away here is that HBM2 overclocking and the power offset are far more important than core overclocking
They're straight up wrong about the memory overclocking. A 19% improvement in memory frequency yielding 6% gains is negligible.

The reason core overclocking doesn't matter is that Vega strictly throttles on power, and if you leave the power target at stock it will never come close to its rated frequency. When you raise the power target the GPU stops being throttled and is allowed to hit its frequency. You can see this just by watching your overclocking software while running benches. If they didn't notice that, they have no business reviewing GPUs. Frequency is absolutely what matters most; it just can't hit that frequency without being given a ton of power, because of the way AMD's boost/throttle algorithm works now.
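The power-vs-frequency behavior follows from the usual dynamic-power relation P ≈ C·V²·f. A quick sketch with assumed-but-plausible numbers (1.00 V at 1300 MHz stock vs 1.10 V at 1450 MHz with the raised power target; not measured values) shows why a ~12% clock bump can demand ~35% more power:

def dynamic_power(voltage, freq_mhz, cap=1.0):
    # dynamic switching power scales as C * V^2 * f; cap is an arbitrary constant
    return cap * voltage ** 2 * freq_mhz

stock   = dynamic_power(1.00, 1300)   # assumed stock operating point
boosted = dynamic_power(1.10, 1450)   # assumed +50% power target operating point

print(boosted / stock)   # ~1.35 -> ~35% more power for ~12% more clock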

You can see the difference the +50% power target nets in frequency; this was in Unigine Superposition, just the first scene, to grab clock speeds.

And if you look at the states, 1460 MHz isn't even close to the second-highest state, much less the highest. AMD throttles their cards based on power somehow; I assume on the current detected going through the VRMs, since voltage fluctuates at around 1.1 V.

AMD messed up by using HBM.
If they had used GDDR5X, they could have fitted something like 12-16 GB and sold cards based on that.
Dumb people buy GPUs with big numbers, like the 8 GB 290X for example.

Power usage doesn't seem to matter for AMD fans. So the extra 40 W doesn't matter when you're over 300 W already.

Pascal does the same thing.
If you set the power target to 150%, clocks go up 100 MHz+.
My GTX 1080 will clock to 2,012 MHz all by itself at 150% power.

Then why didn't they go with Micron's Hybrid Memory Cube technology?

This

clock rate =/= latency
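Absolute latency is cycles divided by clock, so a part clocked twice as fast but with twice the cycles of delay lands at exactly the same nanoseconds. A tiny illustration (the cycle counts are made up, not real HBM/GDDR timings):

def latency_ns(cycles, clock_mhz):
    return cycles / clock_mhz * 1000   # absolute latency in nanoseconds

print(latency_ns(14, 1000))   # 14 cycles at 1000 MHz -> 14.0 ns
print(latency_ns(28, 2000))   # 28 cycles at 2000 MHz -> 14.0 ns, same latency, double the clock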

>>only very slightly faster now
good

HBM2 was an absolute meme, so glad I dodged Volta/Vega

As for miners...
It's profitable to the tune of 8 USD per day for Monero.
Shit's fucked, and AMD won't make more for fear they'll get nailed with extra stock just like when ASICs rolled BTC.

Good, I can't wait for home mining to crash even more; it was fucking retarded from the start.

It will crash if you make GPUs plentiful.

The shortage makes it even more profitable.

nvidia use gddr on their goy-tier cards because it's cheaper

their high end stuff is HBM2

AMD are obsessed with pushing it to the masses though

>tfw nVidia uses HBM2 as well on the highend, because they know GDDR6 is going to eat more power than GDDR5 before it.
AMD still shit though.

do miners need ram or is that because of the smartphone bubble?

Latency.
GCN has always been *very* latency sensitive.
Google The Stilt's posts on it from BTC mining (back when GPU BTC mining was still a thing).

LTC and similar coins need lots of ram (until POS kills the miners)

POS?

i never kept up with mining shite

Proof-of-stake.
It replaces proof-of-work (which requires mining as 'work')
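"Mining as work" just means grinding hashes until one falls under a difficulty target; a minimal proof-of-work sketch (a generic illustration, not any specific coin's algorithm):

import hashlib

def mine(block_data, difficulty_bits=16):
    # find a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

print(mine("example block"))   # (nonce, winning hash) after ~2^16 tries on average

Proof-of-stake drops the hash grinding entirely and picks block producers weighted by the coins they've staked, which is why it would kill off the miners.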

>technology is bad because its initial implementations aren't perfect

calm your tits, Sup Forums

Too late to the party, OP

Because it was shit.
And silly expensive.

GPUs *can't* be latency-sensitive.
They hide it very, very well.

The initial implementation IS perfect, hence why HBM is the only TSV memory that survived.

>HBM
why would they use it? it's not that fast, it's expensive and looks like it's hard to make

The lowest pJ/byte of any memory on the market.
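To put that in watts, memory power is roughly bandwidth times energy per bit; the pJ/bit figures below are illustrative assumptions, not datasheet numbers, just to show why the ratio matters at GPU-class bandwidths:

def memory_power_watts(bandwidth_gb_per_s, energy_pj_per_bit):
    bits_per_second = bandwidth_gb_per_s * 8e9
    return bits_per_second * energy_pj_per_bit * 1e-12   # watts spent moving data

print(memory_power_watts(484, 4.0))   # assumed HBM2-class energy/bit at 484 GB/s -> ~15.5 W
print(memory_power_watts(484, 8.0))   # assumed GDDR5-class energy/bit, same BW   -> ~31.0 W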

Damn you’re a moron. That shit flew right over your head.

latency,
b/w,
density,
power consumption,
packaging,
integration,
and a shitload of other things.

>Damn you’re a moron. That shit flew right over your head.
?

GDDR6 is cheaper for the same job.

Ok, so, GDDR6 fits on-package with a direct low-latency link to the processor?
GDDR6 consumes as little power as HBM?
GDDR6 has the same density?

Each has its own advantages. HBM is still an extremely new thing, so of course fabricating it is expensive.

So Micron finally increased DIMM production?

$$$$$

>Nvidia can happily use GDDR6 because their GPUs are extremely power efficient

Nvidia use GDDR because it's cheaper. Their cards for non-poorfags use HBM2 because it's better. Stay deluded, gaymer scum.

Nvidia's most high-end, premium, and exclusive GPU, the Titan V, uses HBM.

Face it, HBM is only for the high end, because GDDR is inherently low end and always will be poorfag tier.

HBM for poorfags (by Samsung) is also currently in development.
>HBM is still an extremely new thing
JESD235 is, well, ~4 years old.

GDDR6 is cheaper than HBM in terms of mass production. HBM won't be a thing until after next gen. It will be reserved for research cards.

It's not BTFO.

GDDR has its own set of issues (power consumption, PCB tracing/signaling issues, and soon yielding issues). GDDR6 is going to be just as expensive, if not more expensive, than HBM. GDDR5X isn't exactly that common, which is why Nvidia has shifted its current GP104 chips to GDDR5 and allocated its GDDR5X stock towards upcoming Volta SKUs.

HBM is the future for ultra-high bandwidth needs. The problem is that mid-range and low-end GPUs aren't bandwidth starved yet with GDDR5 chips.

GDDR5 just has the advantage of being mature technology that is cheap and easy to make en masse.

>GDDR6 is cheaper than HBM in terms of mass production.
Which one? 16Gbps/16Gb chips will currently cost you a kidney and your soul will be sold to Samsung.
>HBM won't be a thing until after next gen.
Vega is current gen.
>It will be reserved for research cards.
Vega10 is a consumer-oriented die.
>It's not BTFO.
Memory standards don't get "BTFO".
>HBM is the future for ultra-high bandwidth needs.
It's also the future for even peanuts-priced devices, assuming Samshit doesn't fuck the pooch.

Also, didn't Hynix update its HBM catalog recently?
I've seen 307GB/s::1.2V and 256GB/s::1.2V stacks somewhere.

>skhynix.com/static/filedata/fileDownload.do?seq=452
Yes, they totally did!

GDDR6 is going to be just as expensive for different reasons. (Yielding, extra tracing/PCB to meet those insane clockspeeds)

We've already seen this with GDDR5X, which is limited to the 1080, 1080 Ti, and Pascal Titan family. Nvidia is quietly discontinuing the 1080 for the 1070 Ti because GDDR5X is "too expensive", i.e. hurting their profit margins, and GP104 never really needed it in the first place, as seen with the 1070 Ti being practically identical to the 1080 in terms of performance.

The actual problem with HBM2 is 2.5D integration.
The moment at least one OSAT gets rid of Si interposers is the day HBM2 becomes slightly more convenient.