Vega

With Volta being on the horizon, is Vega pretty much fucked?

>yet another X is dead meme thread
Fuck off back to

What do you mean? The MI25 card has to deal with the GV100 at its own HBM2 game.

Vega will beat the Titan in 2 weeks. Volta will beat Vega in 2 months. That is how video cards have worked for years.

You think Vega will be dropped on the 31st, or just teased? My thinking is it gets teased on the 31st, "launched" in June, then actually released in laaaaate June with actual stock in mid to late July.

>MI25
paper launch

Haha thanks :0. Basically it gets crushed in all aspects, plus new ones for AI.

Yes. Commence the wait for Navi.

This, I love this. I'm also torn: upgrade to the first 2080s or wait for the cut-down GV102 Ti?

wait for the Ti, then buy the 2080 so you don't get shit on by the launch pricing

t. got a 1080 when they released, a few months later they were like 30% cheaper

Prepare for a die shrink of the existing Nvidia products! Nvidia backed themselves into a corner, you watch.

AMD BTFO

HOLD ME GUYS
I CAN'T WAIT FOR NAVI

Navi, 2019

I'm kinda ok with that. The 1080 is a nice balance of efficiency and performance so I wouldn't cry over getting more of the same.

Probably going to buy a Vega card though just to see what they're like (and claim a business expense by shoving it in my work pc for a while)

No, Vega already killed Volta, since even an 815mm^2 die from NVIDIA can deliver only 30 TFLOPS FP16.
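
Napkin math on that figure, a rough sketch assuming the keynote specs for GV100 (5120 CUDA cores, ~1455MHz boost, packed FP16 at twice the FP32 rate):

```python
# Rough check of the ~30 TFLOPS FP16 claim; assumes the keynote specs
# (5120 CUDA cores, ~1455 MHz boost, packed FP16 at 2x the FP32 rate).
cuda_cores = 5120
boost_clock_hz = 1455e6

fp32_flops = cuda_cores * boost_clock_hz * 2   # 2 ops per FMA
fp16_flops = fp32_flops * 2                    # packed FP16 doubles the rate

print(f"FP32: {fp32_flops / 1e12:.1f} TFLOPS")  # ~14.9
print(f"FP16: {fp16_flops / 1e12:.1f} TFLOPS")  # ~29.8, i.e. the quoted 30
```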

These threads live in a bubble where people think that the average person has almost $1000 to drop on a video card. Normal people will pay a lot less for 15-20% less performance than the Nvidia equivalent

never mind that fuckhuge die size and the yields they'll get with it

each one will cost a fortune to make

>With Volta being on the horizon, is Vega pretty much fucked?
Holy shit, did you actually miss the Nvidia conference yesterday? Yet you're posting this anyway?
How dumb are you, OP? How can you waste everyone's time posting such misinformed, ignorant drivel? You clearly don't follow the industry enough to question whether Vega is fucked or not.

They revealed that Volta was a complete lie. The performance target for Volta was a lie.
All it is is a refined 16nm process (that they're calling 12nm, even though it's not), and more CUDA cores on the enterprise 15TFLOPs card - which won't be here for another year.
There is no new architecture coming. Nothing.
"Volta" was really Maxwell+Pascal+Volta. Step by step increases. There is no big leap coming. The biggest leap Nvidia will have in the past 5 years and the next 5 years was Pascal.

Wasn't the conference basically about all the extra botnet they're implementing?

>Volta
>on the horizon
It's more than a year away and it's primarily designed for deep learning, not games.

>on the horizon
If by "on the horizon" you mean at least 9 months away, sure.

Normal people don't want to deal with AMD's shitty drivers either.

>Wasn't the conference basically about all the extra botnet they're implementing?
Mostly. But they also did a paper launch of GV100.
The last time they did a paper launch, the GP100, it took a year to arrive.

If GV100 compared to GP100 is any indication, we're not looking at anything remotely close to the performance improvements from Kepler > Maxwell > Pascal. Likely less than 25%.
They might do better at 1440p and 4K, where GCN has crushed them for the money for a long time, with a higher CUDA-to-ROPs ratio and faster GDDR6 (and maybe HBM2 on the Titan Xv and 1180 Ti or whatever they call them).

But still seems like Q2 2018 for Volta just as I fucking expected 6 months ago and haven't changed my stance on since.
Q4 2018 for Navi. AMD would be smart to refresh Vega in Q2 2018.

but vega is going to beat the 1080 ti with an easy 1600 clock

At 1440p and 4K, yeah. But in some Nvidia-optimized games, I think it'll still lose at 1080p.
(though anyone getting a card as expensive as either for 1080p is mind-numbingly retarded)

2012 wants its memes back.

>he unironically deals with NVIDIA's garbage drivers
You enjoying that botnet experience with 200 processes and a control panel that is still designed for Windows XP and bricks GPUs every second update?

>volta 800mm2 12nm
>only 30% faster than 600mm2 16nm chip

it's a dud, maxwell was great no question, but pascal and volta are boring

just 15 TFLOPS outside of deep learning, which you have to rewrite everything for, and you have to buy the card for $15 000
it's fucking nothing

Huang was selling them on stage, begging for pre-orders in Q4 2018 for Teslas, what the fuck
meaning consumer cards will be Q2 2018

That "12nm" is refined 16nm FF+. Just google feature sizes.

>Volta is a node refinement of Pascal
>Pascal is a nodeshrink of Maxwell
>Maxwell is a refresh of Kepler
>Kepler is a cut down Fermi
>Fermi is a Tesla with hardware scheduling
>NVIDIA is STILL reusing the 8800
Jesus fucking christ. They're just as bad as Intel regurgitating the Core architecture every year.

that is even worse, it means they didn't do shit to the architecture outside of "tensor", whose use is yet to be seen

they give real performance gains though

But all they do is increase the core count

volta is literally just moar coars + specialized hardware for google's deep learning algorithm
it's an 815mm^2 house fire with yields about as small as your dick and a price tag that makes intel's products look cheap.
not that vega will be much better.
vega just implements rendering and fp16 shit that nvidia has had since maxwell, so it's gonna be on par with volta on that tech.
well they're also adding fp8 ops which might be useful for something
Supposedly should also have higher IPC, but that's probably only in some specific workloads. If I had to guess it's also probably more related to machine learning.

Oh and clockspeed, can't forget that!

Let's not forget that without ATI/AMD we'd all still be using DDR2 on our GPUs

well... intel doesn't.

GCN was competing with Maxwell without that breakthrough, so I can't imagine Vega being bad.
But hype is high; it's going to be retarded arguments like "it can't beat the 1080 Ti even though it's within 7%", just like with Ryzen, and nobody will notice the price tag being $150-200 less than the 1080 Ti.

>specialized hardware for google's deep learning algorithm
Google uses ASICs for it. Irony, huh.

To be fair the Fury X had the same RRP as the 980 Ti and I don't see AMD going any lower than that, so at best Vega will undercut the 1080 Ti by only $50. Need to consider the cost of HBM2 and the larger die.

press conference on the 31st.
16 cores incoming (rumors of ES started floating around), and Vega launch

Vega's HBM2 setup is probably cheaper than 11gigs of GDDR5x with traces. But the die is bigger.

Their entire marketing gimmick is that you have the flexibility of a GPU with the Tesla that you won't get with an ASIC.

So they basically crammed an ASIC into the silicon and cranked up the die size to accommodate.

Not when you factor in the scarcity of HBM2 currently.

I want to see those yield numbers.

Their ogl driver is still single threaded and broken.

is it? Samsung is making HBM too now, it's not one player anymore

Google's TensorFlow ASIC is as big as an HDD, so that 'flexibility' is a meme.
Wut? If you mean Samsung HBM2, then yes, but Hynix can produce a fuckton of their HBM2.
>ogl
>in 2015+2
Why? Use Vulkan nigger, even fucking Khronos Group says that.

Whoa, 1.7% is back, baby. Rev up those memes.

sure, but we have so few specifics about Vega that it's hard to say where it will land overall.
A lot will depend on final clock speeds too.

>sure, but we have so few specifics about Vega
They literally said everything about the arch features.

>vulkan
Vulkan can't replace OGL. I emulate, and AMD is a no-buy for me until they fix their shit, but looking at their last 15 years they won't be getting my money anytime soon.

Vulkan can and will kill OGL as soon as you replace H1B pajeets with proper fucking programmers.

we actually know a lot, it's just hard to translate into fps numbers for now

but look, if a simple Fury X at 1GHz can be on par with a 1070, just bumping the clocks might do it
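
For what it's worth, here's the napkin math behind that argument, assuming Vega keeps Fiji's 4096 stream processors and the rumored ~1.6GHz clock actually holds (1080 Ti boost spec thrown in for scale):

```python
# Napkin math on "just bump the clocks": assumes Vega keeps Fiji's 4096
# stream processors and that the rumored ~1.6 GHz clock actually holds.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz * 2 / 1000  # 2 ops per FMA

fury_x = tflops(4096, 1.05)    # ~8.6 TFLOPS
vega   = tflops(4096, 1.60)    # ~13.1 TFLOPS (hypothetical clock)
ti     = tflops(3584, 1.58)    # ~11.3 TFLOPS, 1080 Ti boost spec, for scale

print(f"Fury X {fury_x:.1f} / Vega {vega:.1f} / 1080 Ti {ti:.1f} TFLOPS")
print(f"Vega vs Fury X: +{(vega / fury_x - 1) * 100:.0f}% raw throughput")  # ~52%
```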

Nobody cares about you stealing console games.

it was fucked by default. nobody buys high-end AMD GPUs. just look at how much the Fury series crashed and burned. also it's out way too late. many people, including me, bought 1070s months ago. iirc the 1070 is in the top 5 most used GPUs on Steam.

>steal
Kill yourself
>>>/plebbit/

>800mm^2 chip
>12 nm

We won't see that for at least a year.

not really
they gave us a high-level view of it, but not how it translates to real-world performance and what kind of optimizations (if any) are needed

>if a simple Fury X at 1GHz can be on par with a 1070, just bumping the clocks might do it
well I imagine the Fury X would be better than the 1080 if it had as many rasterizers as Vega (11 triangles per clock vs 4 on Fiji) and more schedulers, since those seem to be the main bottleneck.
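
Rough peak geometry rates from those per-clock figures, assuming Fiji at ~1.05GHz and Vega at the rumored ~1.6GHz; the 11/clock number is the primitive-shader best case, not a guarantee:

```python
# Peak triangle throughput from the per-clock figures above; assumes Fiji
# at ~1.05 GHz and Vega at the rumored ~1.6 GHz. Vega's 11/clock is the
# primitive-shader best case, not a guaranteed rate.
fiji_tris_per_sec = 4 * 1.05e9    # ~4.2 billion triangles/s
vega_tris_per_sec = 11 * 1.6e9    # ~17.6 billion triangles/s (peak)

print(f"Fiji: {fiji_tris_per_sec / 1e9:.1f} Gtri/s")
print(f"Vega: {vega_tris_per_sec / 1e9:.1f} Gtri/s")
```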

>what kind of optimizations (if any) are needed
They said something about game devs making use of primitive shaders directly instead of the driver wrapper translating vertex shaders into primitive ones. That's it. We'll see at Computex.

It's "on the horizon" just like Navi is, Hynix already said volume production of GDDR6 will be in early 2018, that's where Volta for consumers sits.

You might get another 750ti a few months earlier but nobody cares about $120 crap

Volta is simply bigger Pascal. It's not a new arch. Also, when will novideo start using HBM2 in consumer cards?

Once it becomes cheap enough for them to slap massive margins on it.

Right now an HBM2 Titan X would probably retail for $2000 to feed their greed

I don't think 11 gigs of GDDR5X is much cheaper than 2 stacks of HBM2 and an interposer.

Consider the yields: if a chip is bust, all they can do is trash the HBM2 along with it.

Hence why the Radeon Pro Duos come with a full, dead chip in the box (including HBM dies)

It is a new arch, but not a new arch in the ways we care about, like increased IPC, clocks and general performance.

The fact that Linus is in that picture shows how retarded AMD fanboys are.

That guy has a lot more AMD-sponsored videos on his channel than Nvidia ones.

Pro Duos are 8 stacks with two interposers. Two stacks on one (also smaller) interposer are much cheaper. Basically novideo are greedy lazy kikes who won't adopt something new.
It's as 'new' as the Kepler refresh.

He shilled Intel and NVIDIA for free when that image was made

You mean he reviewed them, and now he's actually shilling videos for AMD (getting paid for videos of their products).

I love how Intel and NVIDIA have managed to change the meaning of what an architecture is like this

in case you couldn't tell from the gtx 480, that's a really old image

>disclaiming a sponsorship is now shilling
Kid..

Yeah, it's not like AMD has marketed huge architectural improvements for their last 3 GPU generations, which only turned out to be very small improvements on the same GCN architecture.

1.7

You need to remember that AMD has much less R&D money than NVIDIA and especially Intel.

>it's fine to call him an Nvidia shill based on nothing, but calling him a shill over actual sponsored videos from a company is bad!

Architectural improvements are accurate, they aren't claiming it to be a new architecture.

Why do you think Polaris is still called GCN? Why do you think Terascale 2 was a thing? Because they aren't trying to pretend they are new architectures like their competitors do.

>was literally trying to hide the 'sponsored by Intel' disclaimer on his older videos before he got called out on it

>Because they aren't trying to pretend they are new architectures like their competitors do.

Really kid, are you this naive?

Look at AMD's marketing slides compared to Nvidia's. They are just as bad when it comes to marketing. AMD even compared CF cards to single ones, and showed performance gains compared to cards 3 generations older.

They are both shareholder-driven companies that value profit over everything else; if you think AMD is some sort of good-guy charity company, you are mistaken.

>AMD even compared CF cards to single ones
Oh yeah because a $400 CF config is unfair to compare to a $600 single GPU.

they will defend this

>CF config is unfair to compare to a $600 single GPU
It is, because CF configs come with a whole host of problems, and don't work well in many titles.

>and showed performance gains compared to cards of 3 generations older.
[citation needed]

But it increased performance by 40% while using the same power (assuming Nvidia isn't fucking with TDP). That would be worth calling a new arch, except every generation, new arch or not, does exactly that, and we can't call every generation of GPUs a new arch.

>if you think AMD is some sort of good guy charity company you are mistaken
Compared to our favourite 3.5 green jews and TIM blue jews, they are truly the charity of the silicon industry.

Worry not, CF isn't quite as garbage as SLI

Yeah..no

>But it increased performance by 40% while using the same power
ASICs are low power, and the performance increase is in THEORETICAL compute performance.

12nm FFN apparently has 20% power savings, so I'm far less inclined to call it a new arch.

It's a bigger fucking die, and the performance increase corresponds to the die size increase. And no, GV100 is 300W TDP.
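
Quick ratio check, a sketch using the publicly stated GP100 vs GV100 figures (610mm^2 / 15.3B transistors / ~10.6 TFLOPS FP32 vs 815mm^2 / 21.1B / ~15 TFLOPS); the compute gain does roughly track the extra silicon:

```python
# GP100 vs GV100 ratios from the publicly stated figures; the compute
# gain roughly tracks the extra silicon rather than a big arch leap.
gp100 = {"die_mm2": 610, "transistors_b": 15.3, "fp32_tflops": 10.6}
gv100 = {"die_mm2": 815, "transistors_b": 21.1, "fp32_tflops": 15.0}

for key in gp100:
    print(f"{key}: {gv100[key] / gp100[key]:.2f}x")
# die_mm2: 1.34x, transistors_b: 1.38x, fp32_tflops: 1.42x
```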

Except it's true, SLI is still as garbage as it was in 2005 and Crossfire has always been superior.

Yeah, that doesn't sound like an unsubstantiated, biased fanboy claim at all.

>SLI still can't work over PCI lanes
>SLI has inferior scaling

So substantiate your claims with evidence

Lets see those charts

Is anyone else impressed by how AMD has managed to rein in leaks of Vega when they've already started printing retail boxes?

Besides the leaks of the 6-7 month old engineering samples, there's been NOTHING about it.
Leaks of Polaris and Zen have been numerous.

Can you just stop posting already you tech illiterate newfag NVIDIA fanboy

King Pajeet and his Kali cultists are killing everyone trying to leak info about Vega.

We have leaks of the :C3 ES, which is from February; that's newer than the initial :C1 leaks we saw in Timespy/Firestrike, which I'm assuming are the oldest of the chips.

Let's go old school