AMD Radeon RX Vega Specs Leaked In Linux Patch

wccftech.com/amd-radeon-vega-specs-leaked-linux-patch/

So Vega 4096 cores = 570 crossfire (2 x 2048 cores) Which is = 1070 performance as leaked 3dmark claim

Other urls found in this thread:

videocardz.com/68948/skhynix-gddr6-for-high-end-graphics-card-in-early-2018

tldr

worse than nvidia

Uhh.. We've known the 64CU configuration for over a year.

AAYYYYMAAAAD IS FINISHED AND BANKRUPT

They gonna release a $300 card or what?

1070 can't do BF1 or Battlefront at 4k60fps

the leak is either garbage or small Vega

still won't buy Nvidia because FreeSync

Even that's too many cores. Vega needs to fix the bottlenecks in Fiji and raise clocks; that's way more important than letting shaders sit idle

>same amount of cores as 28nm Fury X
>twice the transistors

Really makes you think

>buy AMD CPU
get haswell 4 years late
>buy AMD GPU
get 1070 1 year late

well at least it's cheaper and they are the good guys right? because multi-billion corporations can be saints. smdh

Show me that 8 core 16 thread 65 watt haswell, pls.

yeah, a big corporation can be Good and they often are

curse this shitty society for turning its youths into jaded cynics

>buy 1070
>Fury X + 10% performance 1 year late

The 1070 is such a shit GPU for anyone that has a 980/390X or higher from last gen

If they can deliver a 1070 equivalent for a price/performance ratio that doesn't hover around "fucking awful" it'll be a win for gamers. And I really doubt even this newest "driver leak" is going to be 100% accurate or have all information.

not having to buy a shitty g-sync monitor is a win/win though.

All-new frontend. I wonder how it'll perform in games.

There's no 1070 Vega competitor, Vega is 15% bigger than GP102

They can do both.

They did both. Also what the fuck are primitive shaders.

>Vega 4096 cores = 570 crossfire (2 x 2048 cores)
kill yourself
t. 1080ti owner

There's no need, you either make more cores or make the cores individually stronger.
Even Nvidia's highest end card tops out at 3840 cores, there's a limit to how much even video games can spread the workload on that many cores.

>limit to how much even video games can spread the workload

No
Also, you either make a GPU with more cores or you squeeze out the highest clocks. AMD does the former, Nvidia the latter. And this is why AMD GPUs age better and have higher theoretical compute power.
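To put numbers on the "more cores vs higher clocks" trade-off, here's a quick sketch using the publicly listed specs (boost clocks are approximate, and peak TFLOPS is theoretical, not real-world performance):

```python
# Peak FP32 throughput = shader cores x 2 ops/cycle (FMA) x clock.
# Numbers below are the commonly cited specs, not measured results.

def peak_tflops(cores: int, clock_ghz: float) -> float:
    """Theoretical single-precision TFLOPS (2 FLOPs per core per cycle via FMA)."""
    return cores * 2 * clock_ghz / 1000.0

fury_x = peak_tflops(4096, 1.05)     # AMD approach: many cores, lower clock
gtx_1070 = peak_tflops(1920, 1.683)  # Nvidia approach: fewer cores, higher clock

print(f"Fury X:   {fury_x:.2f} TFLOPS")    # ~8.60
print(f"GTX 1070: {gtx_1070:.2f} TFLOPS")  # ~6.46
```

Despite the Fury X's higher theoretical number, the 1070 wins in most games, which is exactly the "cores sitting idle" problem discussed below.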

Vega with GTX 1070 performance at GTX 1070 price or lower is all I need and want, but I doubt it's their highest model

Yes, the Fury X is a good example, it had a shitload of cores but in plenty of cases they were left SITTING IDLE, this is a well known fact you can Google right up.

tl;dr there's either a bottleneck somewhere else, or the command processor on Fury X never managed to find a way to feed all the cores with data at once.

>tl;dr there's either a bottleneck somewhere else
Of course there is, and it's piss easy to guess where.

>primitive shaders

Think rasterization and culling.

>One of the configurations for AMD Radeon RX Vega Specs Leaked In Linux Patch
Here you go sweetie, fixed that up for you.

> Vega 4096 cores = 570 crossfire (2 x 2048 cores) Which is = 1070 performance as leaked 3dmark claim
> Vega 4096 cores = 570 crossfire (2 x 2048 cores) Which is = 1070 performance as leaked 3dmark claim
> Vega 4096 cores = 570 crossfire (2 x 2048 cores) Which is = 1070 performance as leaked 3dmark claim

holy fucking kek. now that's what i call FAKE NEWS

>Vega needs to fix the bottlenecks in Fiji

They already doubled up the rasterizing power; Polaris is as good as Hawaii with half the ROPs. If Vega has that, then it will be almost twice the power of Fiji.

According to OP, the Titan XP is 3-4 times faster than the 1060

>If they can deliver a 1070 equivalent for a price/performance ratio that doesn't hover around "fucking awful" it'll be a win for gamers.

Vega will cost $500 minimum due to HBM being a pain to manufacture.

Even HBM1 with 4 stacks isn't all that much more expensive, it matters little on $400+ GPUs

>Titan X (Pascal)
>Titan Xp
which one is it? better release a new Titan...

XP, it has more than three times the cores.

Fuck me HBM makes PCB so much smaller.


>Titan X Big Peepee
>Titan X Small Peepee
Is NVidia trying to tell us something here?

No, only that they didn't have the full GP102 ready until a month or two ago.

>Is NVidia trying to tell us something here?
A N O T H E R O N E
N
O
T
H
E
R

O
N
E

>have we started the fire

So why not get a gtx 1070 then

Wtf, AMD said Vega would do 4K@60fps. Why would they lie to us?

FreeSync? FreeSync.

NOOO

>yes, the fire rises

You are retarded. AMD always releases a range from $125 to $500. This, likely ONE of their weaker cards, is beating the 1070, and will cost at least 20% less than the 1070, which would be in line with AMD's trend of equivalent tech for a lower price.

hbm2 or gddr6

Card costs $100 more, monitor (G-Sync vs FreeSync) costs $100 more, electricity per year costs $50 more.

Because I will save $300 in 2 years if I go vegeta.
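The poster's back-of-envelope math, using only the numbers from the post above:

```python
# All figures come from the post: $100 card premium, $100 G-Sync
# monitor premium, $50/year extra electricity, over a 2-year span.
card_premium = 100
monitor_premium = 100
power_premium_per_year = 50
years = 2

total_saved = card_premium + monitor_premium + power_premium_per_year * years
print(total_saved)  # 300 -> the claimed $300 saved over 2 years by going Vega
```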

HBM2.

>GDDR6

Might as well be another housefire fairytale until March 2018

>Amd always releases a range from 125 - 500

They already released cards from $100 to $300, see Polaris.

Vega cannot be manufactured cheap because of HBM. This is a fact. They will sell it for $500, it will be 1070 level in normal games and normal resolutions, and equal the 1080 only in 4k + Vulkan. It will be the same goddamn thing as Fiji was.

>it will be 1070 level in normal games and normal resolutions, and equal the 1080 only in 4k + Vulkan.
Small Vega aka 1070 competitor? Yeah.

>4096 cores = 570 crossfire (2 x 2048 cores) Which is = 1070 performance
that's not how it works

What's with this
>HBM costs more than a decent GDDR5 setup
meme? Even fucking Fiji with 4 stacks was relatively sane, and Vega only has two.

still no reason to upgrade from sli 980ti

>stutter

Inb4

>I DON'T SEE IT LALALALALALA

>They will sell it for $500, it will be 1070 level
Seeing as the cheapest 1070 you can get right now is on sale on Amazon for $335, that would be market suicide. Somehow I doubt they would do that, seeing as the rest of their product range is very reasonably priced.

>Even fucking Fiji with 4 stacks was relatively sane and Vega is only two.

it's the last talking point of an nvidiot; wait till you see the price, don't get fooled

this IS small Vega. Vega 11 is big Vega, and Vega 20 will be a Vega 10 respin.

>only 64 ROPs

WHAT THE FUCK?

You're making too many ridiculous assumptions; even two Polaris 10s strapped together have more performance, and Vega is 15% bigger than that.
You're not taking into account clockspeed or architecture changes.

Vega20 will be FP64 Vega.

>only 64 ROPs

pic related

It's like they hate smaller PCBs and lower power consumption.

That's what the L2 cache and tile-based rendering are there for.

L2 caches and tile-based rasterization.

No, Vega10 is big Vega, just like Polaris10 was a big one.

aren't these dx12 games? amd always does well in these games because of the hw scheduler. i wouldn't be surprised if it is on par/slightly slower than the 1070 in most games but pulls ahead by like 10% in dx12 games just like the 1060 vs 480.

DICE uses a shitty DX12 wrapper instead of a properly written DX12 rendering pipeline. It makes like no difference.

>The new memory type would replace GDDR5 and GDDR5X by providing much higher data rates (up to 16 Gbps) while operating at a lower voltage

videocardz.com/68948/skhynix-gddr6-for-high-end-graphics-card-in-early-2018

DX12 in BF1 is a joke.
also
>Vega is Fiji : The post


Wow, we really need some kind of entry test for this board, this is almost too retarded to be real

>Wow, we really need some kind of entry test for this board, this is almost too retarded to be real
Captcha that requires you to install gentoo.

I know what it means, but HBM2 has been shipping for a year already and it tops out at 1TB/s, which is still faster than GDDR6, which also uses a shitload of PCB space and more power than HBM2.

tl;dr GDDR for

>>only 64 ROPs

64 ROPs @ 1.5 GHz = 96 GP/s = 96.45x overdraw of colored pixels for UHD@120 fps, plus ~386x overdraw of depth/stencil testing.

Even if you're abusing the pipeline with tons of 1px tessellated triangles, the geometry setup will choke much, much sooner than the ROPs.
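The overdraw figures above check out; here's the arithmetic spelled out (the 4x depth/stencil-only rate per ROP is the usual assumption for this kind of estimate):

```python
# Fill-rate overdraw estimate for 64 ROPs at 1.5 GHz driving UHD@120fps.
rops = 64
clock_ghz = 1.5
fill_rate = rops * clock_ghz * 1e9   # pixels/s -> 96 GP/s

uhd_pixels = 3840 * 2160             # 8,294,400 px per frame
fps = 120
pixels_needed = uhd_pixels * fps     # ~0.995 Gpx/s actually on screen

color_overdraw = fill_rate / pixels_needed
depth_overdraw = color_overdraw * 4  # assumes ~4x ROP rate for depth/stencil-only

print(f"{color_overdraw:.2f}x")  # ~96.45x colored-pixel overdraw
print(f"{depth_overdraw:.0f}x")  # ~386x depth/stencil overdraw
```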

Captcha that requires you to write the Captcha software.

Captcha that requires you to write IDE to write captcha software.

>amd, can we have 96 rops?
>64? What you need is 32 ROPs, good goy

jew_who_run.jpg :3

Captcha that will require you to write the Operating system that IDE will run on.

Without software support for GPU programming comparable to CUDA's libraries, it has no use for serious people. It can be useful only for gaymers.

But the lack of ROPs was why Fury X was slow

No, it was slow due to shitty geometry engines.

2008 called, they want you to come back.

If the 2x geometry throughput per clock is true, then at 1550MHz+ (roughly 1.5x Fiji's clock) that means a straight-up 3x improvement over Fiji

Which is more than enough to fight Nvidia's geometry throughput

I wonder what the fuck HBCC is. Looks like a purely enterprise thing.

>So Vega 4096 cores = 570 crossfire (2 x 2048 cores)

who is the clever guy that said that? CF barely gets 80% scaling; well, maybe only in Ashes did it reach 100%, but we know Ashes is not a very good way to check performance
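A quick sketch of why "2 x 2048 cores == 4096 cores" doesn't hold, using the ~80% CrossFire scaling figure from the post above (the scaling model here is a simplification for illustration):

```python
# Multi-GPU scaling is far from 100% efficient, so two small GPUs
# are not equivalent to one big GPU with the same total core count.

def effective_cores(cores_per_gpu: int, num_gpus: int, scaling: float) -> float:
    # First GPU contributes fully; each extra GPU contributes
    # only `scaling` of its cores' worth of performance.
    return cores_per_gpu * (1 + (num_gpus - 1) * scaling)

print(effective_cores(2048, 2, 0.80))  # ~3686 "effective" cores, not 4096
print(effective_cores(2048, 2, 1.00))  # 4096 only under the 100%-scaling fallacy
```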

The bigger issue is that you can't compare Polaris cores to Vega cores. It's a new architecture. They're sticking fuckhuge caches on there and whatever other new features they're doing. They addressed the bottlenecking in Fiji, but you can't even compare Vega to Fiji properly, because they also addressed issues in Fiji where cores would be active even when there was no work.

Memory cache?
They demoed MD with 2GB of VRAM, with and without the cache

Basically it's 2x minimum fps, and RAM doesn't matter anymore, if we ever live to see games needing more than 8GB

So it turns HBM stacks into fat slices of L3? Man that sounds VERY nice.

no, fury had a ton of problems, and ROP count was probably the most minor.

The biggest was that the memory crossbar was underpowered and couldn't deliver aggregate bandwidth close to the ~500 GB/s offered by the HBM1 (also making color compression in the RBEs/ROPs almost worthless), but it could also easily be choked in the geometry/rasterization setup stage as well.

>nvidia strong with BLACK TEXTURES
>nvidia not strong in random textures

Really makes you diversify

It's probably not true L3 for at least any appreciable size, since all the tags and line state flags would need to go on the GPU and would take an enormous amount of space.

At best it will be something like a transparent page cache (i.e., entry/"line" sizes in the kB range) for read-only/noncoherent art assets, which would be vastly cheaper to implement on silicon as well as addressing the core functionality of what 99%+ of customers really want anyway.
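A rough estimate of why cache-line-granularity tags for an HBM2-sized pool are implausible on-die while page-granularity entries are cheap. The pool size and per-entry byte count here are my own illustrative assumptions, not anything AMD has disclosed:

```python
# Estimate on-die SRAM needed for tags/state when treating a large
# HBM2 pool as a cache, at two different "line" granularities.

def tag_storage_mib(pool_bytes: int, line_bytes: int, bytes_per_entry: int = 5) -> float:
    """Approximate tag/state storage, assuming ~5 bytes per entry (tag + flags)."""
    entries = pool_bytes // line_bytes
    return entries * bytes_per_entry / 2**20

pool = 8 * 2**30  # hypothetical 8 GiB HBM2 pool
print(f"{tag_storage_mib(pool, 64):.0f} MiB")    # 64 B cache lines -> ~640 MiB of SRAM
print(f"{tag_storage_mib(pool, 4096):.0f} MiB")  # 4 KiB pages     -> ~10 MiB
```

Hundreds of MiB of tag SRAM is obviously a non-starter, while ~10 MiB for page-granularity entries is in the realm of what a GPU die could carry, which is why a page-cache-like HBCC is the plausible reading.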

No one can write DX12 at its full potential because Microsoft didn't release SM6, which has everything good that DX12 can do.

That's why Vulkan exists.

That's why we saw the leap in performance in DOOM. Vulkan has nothing restraining what it is capable of doing, but you need people who know what they're doing.

>you need people who know what they're doing.
And that's not the case for 99% of game developers.

Why not just release a fucking 3DMark Time Spy run or any gaming benchmark leak instead? I can't imagine how powerful Vega is if it shares the exact same amount of shaders, texture units and ROPs as the old Fury.

Freesync

So you're trying to tell me that Vega is just a slightly faster Fury X? I have some major doubts here

protip: the fallacious argument relies on the reader believing that multi-GPU scaling is 100% efficient.

It is in some benchmarks, Deus Ex: Mankind Divided and Sniper Elite 4; in Sniper 4 it's faster than a 1080

although that's 2 full polaris 10 GPUs naturally

In any case it'd be idiotic to think AMD would release a card slower than a 1080 at minimum when they know they're a dying company if they don't perform