You have to give it to AMD with Raven Ridge

>ZEN + Vega APU with HBM

this could be great for smaller laptops or very small htpc

will intel even be able to compete with this?


>Implying it will not be poo

If it's really HBM2 and not DDR4 only (or GDDR5 for everything), it might actually be closer to 1070 performance than 480. 256 GB/s would be wasted on anything less, and the thermal constraints shouldn't be quite so tight.
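For reference, that 256 GB/s figure is just the theoretical peak of a single HBM2 stack. A quick sanity check (the 2.0 Gbps/pin rate is from the early HBM2 spec, not a confirmed Raven Ridge number):

```python
# One HBM2 stack: 1024-bit bus; early HBM2 tops out around 2.0 Gbps/pin.
# These are spec-sheet assumptions, not confirmed Raven Ridge numbers.
def stack_bandwidth_gbs(bus_width_bits=1024, pin_rate_gbps=2.0):
    """Peak theoretical bandwidth of one stack, in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

print(stack_bandwidth_gbs())                   # 256.0
print(stack_bandwidth_gbs(pin_rate_gbps=1.6))  # 204.8 for a slower-binned stack
```

So anything below a full-speed stack already lands under that 256 GB/s ceiling, which is why the poster treats it as the break-even point.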

>Implying it will

Lol not a fucking chance. The shader count puts it in line with a 460, assuming no improvements over Polaris. Obviously Vega CUs will be superior, but we don't know by how much. At best I'd wager 480-level performance.

Which is still badass. That's ps4 pro level graphics power coupled with a broadwell-e level cpu.

The HBM is a given. The Fury was a flashy advertisement for the technology, but its real purpose is to remove the memory bottleneck current APUs suffer. Otherwise the only option would be massive caches within the chip itself, which would be pretty retarded.

What interests me most is there's no memory standard listed, just HBM2. Why omit this information when it's listed for all the others?

My guess (and hope, honestly) is the HBM is a shared memory pool for the system and the graphics processor, eliminating the need for dedicated system memory.

Imagine an APU with as much horsepower as an i5 6500 and an rx 480 with massive memory bandwidth for both and the mobo is nothing more than a mounting surface for the chip, I/O, audio, and power delivery.

Basically a console killer you can cram in a mac mini sized case.

Haven't heard anything about raven ridge APU being vega-tier

I suspect it'll be like, 470 or 480 max.

I don't think there will be APUs with shader counts above 1024 and HBM2 for the mainstream market anytime soon. It would be too expensive for OEMs to market. AMD could market it as a premium product in the same way Skull Canyon was for Intel.

Can HBM2 replace system memory? Because that would be killer if you had just 8 GB of HBM2 right on the die and no other system memory

They might use HBM 1 on it to keep costs down. But I don't honestly believe they are going to use HBM as system memory.

I feel like people aren't really appreciating what this could mean for the pc world

this is literally revolutionary

I've still yet to see a legit source to prove that Zen APUs will indeed support shared HBM

1 stack of hbm2 would literally shit on everything Intel has to offer without a discrete card, even if it's $50 more on the CPU... but at the same time, it would cannibalize sales of any manufacturer's higher-end products. You know... I think I just understood why something so obvious hasn't happened yet.
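To put that claim in numbers: rough peak-bandwidth math for a typical dual-channel DDR4-2400 iGPU setup versus one HBM2 stack (both configs are assumed for illustration, not taken from any spec sheet for this chip):

```python
# Peak theoretical bandwidth in GB/s. Dual-channel DDR4-2400 is an assumed
# typical iGPU setup; the HBM2 stack assumes the 2000 MT/s spec rate.
ddr4 = 2 * 64 * 2400 / 8 / 1000    # channels * bus bits * MT/s -> GB/s
hbm2 = 1 * 1024 * 2000 / 8 / 1000  # stacks * bus bits * MT/s -> GB/s
print(ddr4, hbm2)                  # 38.4 256.0
```

That's over 6x the memory bandwidth of what any Intel iGPU is fed today, which is the whole argument in a nutshell.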

not vega tier, but it looks like it uses vega as a backbone rather than polaris.

>I've still yet to see a legit source to prove that Zen APUs will indeed support shared HBM
doesn't need to be shared, just needs to be.

If HBM is on the APU line and its paired with a dedicated video card it could also boost performance when developers get the multi GPU support going in dx12/Vulkan.

I know it's not going to happen anytime soon, if ever, but I would like to see a hex-core APU.

hnnnnnnnng

>zen+vega hbm apu

this would be way too expensive

images.anandtech.com/doci/10944/dxva-checker.png

AYYMD can't compete in HTPC at all with their outdated, obsolete 4K HEVC decoder and useless VP9 hybrid/partial decoding when Intel supports full fixed function 8K HEVC Main10 & VP9 Profile2 hardware decoding

>Starts at 35w TDP
It's still going to be a big deal but even at best you can only expect around rx460 performance paired with a higher end laptop CPU. That said, it has the capacity to destroy the mid range laptop market and be the only reasonable choice for an HTPC that isn't garbage. I'm really excited for it either way but expecting rx480+6500 perf is overshooting even a high estimate of what's possible.

Imagine the overhead cost of pairing anything with a discrete GPU. The costs of packaging this all together won't even come close to that. This is literally what the cheap as fuck consoles have been set up with all this time, and we know what the prices on those look like. Yeah, HBM2 might raise the costs a bit, but only dealing with a single chip can drastically reduce the price of whatever packs one of these in comparison.

No, the HBM2 is only for the GPU. The cpu may use it in small amounts but normal system memory is going to be around simply because of how memory intensive running a new OS and browsing the web is. It wouldn't be appropriate for something meant to take the midrange laptop and lower end PC market to have 8GB of HBM2 considering how much it would cost.

People looking for HTPC often only want motion interpolation and a multistage upscale though. Something that 8K HEVC doesn't help with much considering most people are looking to just replay 1080p/720p source material upscaled.

> 4W AyyMD
I wonder how it will perform compared to the Core-M

>this is literally revolutionary

Dude, we are talking about somewhat faster gaymen devices. It's nice to have but mankind will not look back to it later as if it were "revolutionary".

Also, you've overdone it, shill. GTFO.

I wonder how it will perform compared to the Raspberry pi 3

I don't see any reason why it couldn't. The Xbox 360, Xbone, PS4, and PS4 Pro all feature a pooled memory bank (as far as I know always some version of GDDR) shared by the CPU and GPU. Current AMD APUs leech off of system RAM, which happens to be their greatest bottleneck.

There are some things we must keep in mind, however.
1. A 35w maximum is pretty anemic. I was a bit boisterous with my "i5 6500" claim. While HBM draws next to nothing, these parts would have to be either extremely highly binned and undervolted or severely underclocked. Likewise, 8GB total is not that impressive regardless of bandwidth.

2. It's unknown how small a 4-core Zen part could potentially be, let alone the Vega CU portion. We do know, however, that Vega still has separate modules mounted upon an interposer, albeit more compact than the Fury's. We also know this interposer is costly and necessary; HBM requires many complex traces of short, equal length. Though it's possible they can share a lid.

3. From the above we can deduce it may be difficult to make the parts interchangeable. Interposers are fragile enough you can easily crack them from removing the cooler on a fury. The motherboard would also have to facilitate power regulation for the cpu, gpu, and memory portions respectively. This means AM4 compatibility is unlikely and to the best of my knowledge AMD has never said it would be.

4. Being proper pc parts, it's conceivable there could be a variety of configurations, some with memory dimms and pcie lanes available. After all, GPUs aren't the only use for pcie lanes, and additional memory capacity could be nice, regardless of speed penalties.

5. This would not be as inexpensive as a console. This would be a great deal more complex with brand new hardware. Though there's potential this configuration could match or undercut separate hardware. More likely is a premium associated with materials and its unique position in the market.
(cont. 'cause this is neat)

>but mankind will not look back to it later as if it were "revolutionary".
same thing was said about cinema
it was just entertainment for masses
you realize mainstream video games are younger than mute films were back then?

6. It's completely possible these will largely, if not completely, be OEM-focused parts. After all, this is where AMD stands to gain steady, long-term cashflow. In addition, costs can be subsidized by large orders and potentially by business agreements to offer other AMD parts. OEMs would especially enjoy the simplicity of building all-in-ones around these. Significantly less work would have to go into specialized, one-time configurations.

7. These APUs would be in their own market. No other single company can do something like this. Nvidia and Intel would have to either work in partnership or license each other's technology to compete, and in the latter case they would have a lot of ground to cover respectively.

8. Configurations would be quite limited. While Zen and GCN were designed with flexible scalability in mind, there are costs and risks associated with wide scale offerings. It's not like with a cpu or gpu individually where imperfect parts can be salvaged. There are many more steps involved and there will be issues with yields to be considered.

9. These motherfuckers have been leading up to this point for ages. AMD has all the pieces in place to create a unique marketplace all to themselves: highly scalable architectures, HBM, securing the console market, driving the industry towards flexible low-level APIs, bringing freesync to the HDMI 2.1 standard, a focus on APUs over the past several years. It's just too perfect for them.

And finally, this could all still fail drastically. If it turns out the ends don't justify the means, if the costs are too high and yields are too low, if the interest from consumers just isn't there, none of this shit matters. They'll have put forth all this effort only to be forever positioned as "second best," the "budget brand."

Here's to high hopes for the underdog.

35w is anaemic, but a 65w tdp is much more feasible as far as a semi-decent cpu/gpu setup, and that's what current desktop apus run at.

>This means AM4 compatibility

well that's the platform bristol ridge apus already use...

These are almost the exact same conclusions I came to looking at the information available. Good to see someone else thinking the same way.

Although I really just see this being a giant blow to the mid-range laptop market for the moment, since it could greatly simplify designs and free up a ton of space, which is something OEMs desperately want and need. And while I expect MOBA gaming to be the larger driver for their market, if creative work can be made possible at 35-65w, it would almost assure these going into the 13" macbook refresh and other ultrathin "pro" laptops too.

it's a shame that they couldn't bring Excavator to a 20nm node.

>will intel even be able to compete with this?
they are trying to establish "Optane memory" as an additional cache, but it's only a patch.

Optane is a meme for anything consumer.

Optane isn't really even a "consumer" oriented tech. It's neat but not really advantageous over normal SSDs (in particular the fast M.2 drives) for 99% of people. Its primary use is going to be limited to some workstation loads and database operations, sadly.

I would love to see a 65w part but if they've stated anything official I haven't seen it. Obviously a 35w APU ain't gonna replace gaming computers nor consoles any time soon. It's not going to disrupt the market and usher in a new age in the industry.

But it's a starting point. As processes mature and demand arises (provided it's as cool to OEMs as it is to me) we'll start to see more impressive options. I obviously wouldn't give up the tried and true desktop formula of CPU+dGPU, nor would the majority of the rest of Sup Forums's denizens, but I'm so excited at the possibilities.

Call me an AMD fanboy if you want, but it feels like there's finally something NEW in the consumer desktop space again. Not one new thing but all these new things that have been ignored because AMD doesn't know how -and can't afford- to fucking market anything. Even if they fail completely, at least SOMEONE's trying around here.

>with HBM
Not going to happen, not in 2017 at least.

>35w
op's slide is for mobile, the desktop variant is from 35-95W

>ZEN + Vega APU with HBM
FINALLY!!! take my money pls AMD

Soon™


Keep your hopes up, though

Considering the SATA 6Gbit ports and gigabit ethernet, the Raspberry Pi is not even in the same league. If you don't give a shit about that, then be happy with your Raspberry Pi.

>95w TDP
4 Zen cores @ 3.6/4.0 = 48w
(based on 8c = 95w)
>47w left for GPU + HBM2
7w for the HBM2
(HBM2 uses same voltages as HBM, only improved bandwidth and package capacity, numbers are from hbm wattage in fury x)
>40w left for GPU
Even with improvements from Vega, and lower voltages and clocks compared to desktop models, we're still looking at roughly rx 460 performance at best, since the rx 460 die can pull around 60w on its own. I'd assume they can get similar performance at 40w on Vega, and if not it will probably come from dropping the clocks on the 4 Zen cores to something like 3.0/3.6. There could possibly be some power usage improvement if AMD can reuse some of the Zen sections for managing the GPU section. It wouldn't be much if they did, but just remember that even on Polaris, because of the 14nm process, dropping the voltages and clocks down ~20% can give you almost ~33% lower power draw. If Vega is much more powerful per clock than Polaris, it could easily surpass rx 460 performance at only 40w.
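The budget math above, plus the f·V² rule of thumb that the ~33% figure comes from, sketched out (every input here is the poster's estimate, nothing official; the ~8% voltage drop is an assumption chosen to match the quoted saving):

```python
# All inputs are the estimates from the post above, not official specs.
tdp = 95.0
cpu = tdp / 2           # 4 Zen cores, scaled from an assumed 8c/95W part
hbm2 = 7.0              # HBM-class draw, per Fury X measurements
gpu = tdp - cpu - hbm2
print(gpu)              # 40.5 -> the post's ~40W once the CPU is rounded to 48W

# Dynamic power scales roughly with f * V^2. A 20% clock drop paired
# with a smaller (~8%, assumed) voltage drop gives roughly the ~33% saving.
f_scale, v_scale = 0.80, 0.92
saving = 1 - f_scale * v_scale**2
print(round(saving, 2))  # 0.32
```

Note the saving is superlinear in the voltage term, which is why undervolting matters so much more than underclocking for squeezing a GPU into a 35-95W envelope.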

I wouldn't be surprised if 3D XPoint + NAND resulted in almost indestructible SSDs. No write amplification, writing to the same memory area doesn't degrade the drive, high performance even when the SSD is full, and no need for a supercapacitor to keep the firmware from bricking on power loss, so zero data loss. Basically the only thing affecting the lifetime of the SSD would be the number of write cycles, which is irrelevant for consumers, and for datacenters write cycles don't matter because they can just buy more SSDs. The only remaining problems are archiving, and RAID with SSDs being a bit problematic, e.g. the first SSD exhausts its write cycles and the second exhausts them shortly after.

Doubtful. A DRAM buffer works well enough for now, and XPoint pricing is targeted at the high-density server modules, not consumer memory (the $900-for-64GB ones). From my understanding the controllers for it are very power-hungry and expensive, and I wouldn't expect that to change anytime soon. Also, XPoint isn't so much desired for throughput as for low latency, and its density is comparable to NAND's.