When's the magical driver update that fixes everything supposed to arrive again?

Maybe six months, maybe a year, maybe when Navi launches. Maybe when I fucking feel like posting the drivers on github just to fuck with people.

Whenever 17.320 hits the public drivers.

RTG is one meme shy of being erased from the timeline... They'd better deliver on what they promised, and I'm not talking about AMD proper releasing a proper APU. I'm talking about their full-fledged GPUs and their inability, time and time again, to publish functional drivers and dev tools while promising the world over their competition.

I'm still waiting for nvidia async drivers -=\

There is no such thing, but I understand what your meme is
Still irrelevant bud

I want to see a pygmy fuck a hippo too, but some things just aren't possible.

Nvidia uses brute force

Nvidia has been using async since the GTX 480, dude.
The 480 and 580 had partial hardware schedulers.
The 680 through 1080 have software-based async.

they're waiting for 1070ti

>680-1080 have software based Async.
wrong

>Nvidia had been using Async since the GTX 480 dude.
No they haven't. It's not even a level of control exposed for DX11.

They don't need it. The reason AMD sees "large" improvements with async compute (when done right and with a sufficient workload, anyway) is that AMD is bad at distributing work to its shaders. In AMD's case the reduced latency between tasks means the shaders get fed more: while they're still busy on something, more work gets issued to them quickly because the memory ops and graphics ops were scheduled and serviced sooner. In theory that sounds trivial at best, pointless at worst, since you'd assume work is already handed out in a way that gets 100% resource efficiency. But AMD's shaders are normally underfed, which means giving them more to do is a win. Of course, things don't always work out perfectly: sometimes async compute doesn't do much, and sometimes it makes a big difference. The biggest difference comes with fixed hardware, ultra-low-level control, and predictability (console games, like The Tomorrow Children).

AMD's shaders right now are like the genius expert who's getting paid 6-figures to sit on his ass all day because everyone else is too slow at their own work and bad at giving him something to do.
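
Easier to show in CUDA than in a graphics API, but it's the same principle. A minimal sketch, nothing official, kernel and variable names made up: two independent workloads get their own streams, i.e. their own hardware queues, so the scheduler is free to interleave the second one into whatever the first one leaves idle. That's the "keep the underfed shaders busy" effect in a nutshell.

#include <cuda_runtime.h>

// Two trivial, independent workloads standing in for "graphics" and "compute".
__global__ void workloadA(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] = a[i] * 2.0f + 1.0f;
}

__global__ void workloadB(float *b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] = b[i] * b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    // Separate streams: work in different streams may overlap on the GPU.
    // Submitting both kernels to one stream would serialize them even if
    // neither can fill the machine by itself.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    workloadA<<<n / 256, 256, 0, s1>>>(a, n);
    workloadB<<<n / 256, 256, 0, s2>>>(b, n);

    cudaDeviceSynchronize();  // wait for both streams to drain
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}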

they added dx12 support to kepler, two years later

9/23/17

lol, this fucking shill again
same picture name in every single thread

Not going to dismiss this despite these shills insisting we give up on Vega.

I know it's shit, and always will be, based on power draw alone.

I'm ignoring shills that ever talked about the 1080 Ti and Vega together, because only an idiot would think an 8GB GPU can compete with an 11GB GPU.

So what I think is that Vega 56 is hugely successful regardless of what Sup Forums says.

But Vega 64 is a flop: the power draw is bad and the refinement and improvement just aren't there. We don't care that it doesn't beat the 1080 Ti, but it's not even close, and not at the target the tech community expected.

But because this is a new uarch, I'll give it until the end of December to fix it.

It's good, though, that Vega 56 at least did compete, because now we're forcing Nvidia to release the 1070 Ti.

I hope Navi recovers what Vega 64 lost.

t. GTX 1070 owner.

You're a fucking retard. How many times does the same thing need to be explained? AMD sucks at DX11. Nvidia not gaining performance in DX12 doesn't mean Nvidia's DX12 performance is bad.

devblogs.nvidia.com/parallelforall/inside-volta/

>Volta GV100 is the first GPU to support independent thread scheduling, which enables finer-grain synchronization and cooperation between parallel threads in a program.

THANK YOU BASED NVIDIA

can anyone explain this to me like I'm 12

No deadlocks.

>implying

What is this image trying to convey

Drivers which fix everything will arrive with NAVI

No deadlocks.
Threads in one warp can have independent scheduling.
>posting the official whitepaper is shilling
?
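
To make the "no deadlocks" point concrete, here's a toy CUDA sketch (my own example, not from the whitepaper; needs to be built for sm_70+ to get the new behavior): all 32 threads of one warp contend for a spinlock. Pre-Volta, the warp's single program counter can keep replaying the spinning losers and starve the thread holding the lock, so this can hang forever. With per-thread program counters the holder makes progress and releases.

#include <cstdio>
#include <cuda_runtime.h>

__device__ int lock = 0;     // 0 = free, 1 = held
__device__ int counter = 0;

__global__ void perThreadLock() {
    // Spin until this thread acquires the lock. With one PC per warp
    // (pre-Volta), the winner may never get scheduled to run its critical
    // section while its warp-mates keep spinning: livelock.
    while (atomicCAS(&lock, 0, 1) != 0) { /* spin */ }
    counter += 1;            // critical section, one thread at a time
    __threadfence();         // make the write visible before releasing
    atomicExch(&lock, 0);    // release
}

int main() {
    perThreadLock<<<1, 32>>>();  // a single warp, everyone contending
    cudaDeviceSynchronize();
    int host = 0;
    cudaMemcpyFromSymbol(&host, counter, sizeof(int));
    printf("counter = %d\n", host);  // expect 32 on Volta or newer
    return 0;
}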

>Threads in one warp can have independent scheduling.
isn't that the whole point of GCN?

does AMD really not have something comparable?

No, GCN allows interleaving of graphics and compute slots in one wavefront without the GPU shitting itself and causing a fuckton of cache flushes.
Not right now.
But for them the change is trivial.

So is it the same deal as on AMD's side? Do you have to specifically optimize for such a scheduler?

Yes, it's a CUDA 9 thing and it's irrelevant for gayms.

It's what, October for AIB cards?
That's when you'll have your drivers.

I want it to be good so bad, and I don't even own a GPU made in the last couple of years. Nvidia needs to be taken down a notch.

>Nvidia needs to be taken down a notch.
RV770 failed to do so.
Evergreen family failed to do so.
Tahiti failed to do so.
Hawaii failed to do so.
It's a mindshare-driven market and Radeon lost it during R600 times.

I suppose I know Nvidia will never be taken down a notch, but people in the know knew AMD had been the better choice at times. I just want Nvidia to at least stop their milking and jewing to an extent.

>I suppose I know Nvidia will never be taken down a notch
No, it will, but not in the consumer market.

I'm an AMD fanboy, but I don't get Vega nor the "poor volta" marketing shit.

I mean, wtf, nvidia just responded with numbers, one year in advance!

I don't think there will be any "magical driver" for not-yet-active features nobody cares about.

and what's worse is that this shit draws a fuckload of watts. Seriously, this is a fucking joke. It's like the Bulldozer of GPUs.

and I've always bought AMD and ATI, this is just disappointing.

and not to mention the mentally insane pricing policy, oh boy...

>features nobody cares about.
>no one cares about generalization of non-compute shader
t. never read documentation for any relevant API

AHAHAHAHAHAHAHAHA

>it is so relevant they didn't enable it

So they're just monkeys throwing shit at themselves. Makes sense, given pajeet Raja and his pajeet team.

What was I expecting from someone from a country like India, they bathe in their own shit, baka

AMD fags BTFO

t. retard

>because only an idiot would think an 8GB GPU can compete with an 11GB GPU.
Only an idiot would think video memory is in any way related to raw GPU performance

Well they fired rajesh poonjab so who knows when now

They did what?

keep up with news my friend

There's nothing about AMD doing any kind of layoffs.

"Hey! look! 128MB of ram! that's like TWICE what you find on a Geforce 4ti! i bet this card is a monster!"

tfw fell for the r9 390x 8gb vram meme

>Suggesting people discussing Vega on Sup Forums should actually read the white paper for the product they are mindlessly shitposting about is shilling
The absolute STATE of neo-Sup Forums

>NV30
How the fuck did they manage to make FP32 shaders THIS slow?

Still a decent card, so I don't understand how it would be a meme.

At least games are very playable on it.
The FX 5200 was made out of pure disappointment.
It was like getting a GeForce 4 that could boot more games but run them worse, because Nvidia crippled the T&L capabilities, the GeForce 4 MX being fast enough at T&L to do CAD as well as the expensive Quadro cards.

never, even Raja gave up.

The 390X is amazing, my friend, especially since you get double its value as a heater, and winter is coming. If your room is small enough, your PC can practically heat it to a cozy temp by itself.

Because it runs worse than an equivalently priced card, at least in Australia.

Also, the card grinds to a halt if you try to use anything that approaches 4GB of VRAM or more, so what's the point?

Skimming the internet, it seems the GeForce FX only has 16-bit ALUs, and when you try to use 32-bit precision, each op takes two cycles to perform.
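
If that's the case (I'm going off the same skim, not hardware docs), the arithmetic is simple: an op that needs two passes through the ALU issues at half rate, so peak FP32 throughput = peak FP16 throughput / 2. As a made-up illustration, a part with 4 such ALUs at 500 MHz would manage 2000 MOps/s in FP16 but only 1000 MOps/s the moment a shader asks for FP32.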

>Raja gave up
Dunno "primitive shaders" are strictly his input.

>release a GPU that performs the same as a Fury X clock for clock a year later than the competition
Why do people here shill for ATI/RTG when they can't even release a finished product after a year's delay? Vega 64 would be a shoo-in for worst product of the year if Intel hadn't released Kaby Lake X. Is it really just muh underdog?

>Is it really just muh underdog?

it's about competition, people don't like being gouged by monopolies

It's only slower than Fiji clock for clock because of the 4 tris/clock limit imposed by the legacy pipeline.
And you know it, yet you still shitpost.

AMD bet the farm on Zen. That left RTG with minimal resources to develop Vega, and so they don't have enough driver monkeys to get the drivers done in anything resembling a timely fashion.

Vega behaves like an OC'd Fury right now because it's still subject to GCN's 4 triangle per clock front-end geometry bottleneck. This is because Vega launched without the key uarch feature designed to overcome this limitation enabled yet. If primitive shaders can deliver even half of the claimed benefits, Vega's geometry performance is going to literally double when prim shaders hit the public drivers.
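
To put rough numbers on that (ballpark clock, not a measurement): at ~1.5 GHz, 4 tris/clock works out to ~6 billion triangles/s. If primitive shaders even just double the effective front end to 8 tris/clock, that's ~12 billion/s, which is what "geometry performance literally doubles" means in concrete terms.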

discuss what? that shill has posted the same pic a hundred times now, anything that there was to discuss has already been discussed

until the super duper mega giga epic performance uplifting drivers come out, I will keep calling vega SHIT

S H I T
H
I
T

>That left RTG with minimal resources to develop Vega
Why does this bogus claim keep getting parroted?

>posting a page from whitepaper is shilling
?

>That left RTG with minimal resources to develop Vega
What is this meme?
The changes Vega brings are too hueg for "minimal resources".

>whitepapers increase gpu performance
?

I don't know. The most problematic thing about Vega is that they couldn't make separate lines for compute and graphics cards, which would have been impossible with any amount of money, since a purely graphics card wouldn't sell as well without an Nvidia sticker.

It's obvious how understaffed RTG is for driver monkeys or they wouldn't have launched Vega FE with pre-alpha tier drivers and Vega RX with half finished drivers missing core uarch features.

>The most problematic thing about VEGA is that they couldn't make seperate lines for compute and graphics cards.
They did exactly that.
Vega20 is their HPC offering with 1/2 FP64 and ECC in LDS/GDS/caches.

When have they not been understaffed and underdelivering when it comes to releasing drivers?

They will.
Now steal me the 17.320 branch.

>with pre-alpha tier drivers
>he actually believes that a magical driver update will come and obliterate nvidia
lol, see you in 2 years, when vega manages to beat a 1080 by 5% because the gpu is still shit

hopefully someone who can actually program applies to work there

we'll just have to wait

The real question is: when is GPU mining going to stop being worth it?

...

>better performance in internal branches means it will never perform better in public ones
?

do you do it for free?

>reeeeeeeeeeee its shit because i say so! shill!

>draws more power than a fury x
>1080 tier performance

yeah it's definitely not shit lol

>Vega performance improves two years later after everyone has already moved on
>Vega beats 1080 but no one cares except AdoredTV
>AMD fanboys start screaming FineWine in Youtube comments section
>Nvidia continues to dominate in marketshare

>Nvidia continues to dominate in marketshare
That was still the case during Fermi so it's a moot point.

Read the white paper. Core uarch features well described in the white paper are not "magic."

>dude it doesn't matter if AMD fucks up the launch of a really hyped up GPU lmao

I don't remember AMD fucking up the launch of SSG.
Do you?

>moving the goalposts this hard
hahaha fuck, you're adorable
dude, it's ok to admit that vega is shit, I own an AMD cpu + gpu combo

>really hyped up GPU
Did anyone hype up RX Vega?
Stop moving the goalposts.

I don't see why people should support a company with an incompetent driver team just because of competition. If anything this just proves that we need a third GPU company, but of course this will never happen due to the patent nightmare.

>reddit spacing

>Did anyone hype up RX Vega?
umm yes, AMD themselves did
don't tell me you don't remember "poor volta"

you know what, you're a shill, that much is obvious, so you go on and shill some more on Sup Forums, I'll stop wasting my time on you

>Did anyone hype up RX Vega?

Support is one thing, but hoping a company does well is another

Yes actually, Sup Forums has been telling people to wait for RX Vega for over a year. Now they're already saying wait for Navi.

>Two years
The current public driver is based on the 17.300 branch. Primitive shaders are working in the 17.320 branch. That's not 2 years away. More like 2 months tops.

>don't tell me you don't remember "poor volta"
How is this in any way related to RX Vega my child?

Current public driver is 17.20 at best.
DSBR is *supposedly* working.

You wouldn't be able to see any benefits from DSBR until after the geometry bottleneck is fixed anyway.

>Did anyone hype up RX Vega?
are you for fucking real, dude?

Not completely true.

>Did anyone hype up RX Vega?

This is the face of the AMDrone.

I dunno, you can load something shader-heavy and hammer the GPU for a few minutes.
Too bad no one bothered with it, just like no one besides some Germans ever bothered with the B3D suite.
Modern """"""""""""""journalism"""""""""""""" my ass.

Vega is shit. Efficiency is not bad, but Rajaeet had to clock it up to 1500-1600 MHz in order to compete. For mobile offerings, Vega graphics and its full support of the DX12 feature set are certainly nice. I just hope they can get die size under control.

>Efficiency is not bad
"Efficiency" meme is straight up relative to ALU count.
>For Mobile offerings vega graphics and their full support of the DX12 feature set is certainly nice.
What the fuck does this even mean?
>I just hope they can get die size under control.
Vega10 is only 486mm^2 for a 4k ALU design.