>make a shitty architecture with garbage perf/W
>don't bother to optimise for the existing API (DX11)
>sell Sony and Microsoft chips based on your shitty architecture for almost non-existent profit
>make your own proprietary API to get decent performance out of your shitty architecture
>suddenly, Microsoft imitates your proprietary API, an API suited for GCN, barely discussing it with competitors
>enjoy lead in games until the next DirectX version due to an API suited for your cards only
That's AMD for you, guys. Fucking brilliant tactics they employed, I gotta say.
Juan Harris
>enjoy a slightly better price/performance in the mid-range gfx card department until the next DirectX version due to an API suited for your cards only
Fixed that for you.
Julian Rodriguez
POO IN GPU
Eli Nelson
>make your own proprietary API to get decent performance out of your shitty architecture
Huh? I'm fairly sure Mantle is very much open and not proprietary. Same goes for DX12 and Vulkan.
Also, it's not the shitty architecture that Mantle is designed to work around, it's the shitty drivers.
>suddenly, Microsoft imitates your proprietary API, an API suited for GCN, barely discussing it with competitors
It's not “suited for GCN” any more than it's suited for any other type of hardware. Rather, the inverse is true - traditional APIs (DirectX and OpenGL) are *not* suited for GCN.
AMD's losses were always due to bad driver overhead; the hardware is computationally vastly superior to Nvidia's. The problem is that for VIDYA GAEMS raw power isn't the only thing that matters, and most of it was being bottlenecked by driver stalling and other issues.
Mantle solves these issues.
>enjoy lead in games until the next DirectX version due to an API suited for your cards only
That's ridiculous. Vulkan is also better for Nvidia cards. It's just that the gains aren't as big, because Nvidia's architecture was already very well-optimized for OpenGL and DirectX, and their driver team is significantly better, so they never experienced the high overhead losses that AMD suffered from.
Vulkan essentially just exposes the hardware's performance directly, mostly eliminating driver overhead/inefficiency from the equation. Nvidia just had more efficient drivers to begin with. AMD responded by making a new API for which it's easier to write drivers, thus exposing the true power of their hardware.
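To make that concrete, here's a toy model of where the per-call overhead lives. None of this is real D3D or Vulkan code - every type and name is made up purely for illustration:

// Toy model of why an explicit API has less per-call overhead.
// All types here are invented; nothing is real D3D/Vulkan code.
#include <cstdio>
#include <vector>

struct Draw { int id; };

// "DX11-style": the driver does hidden bookkeeping on every single call.
struct ImplicitDriver {
    void validate(const Draw&)      { /* state validation */ }
    void track_hazards(const Draw&) { /* dependency tracking */ }
    void submit(const Draw& d) {
        validate(d);        // hidden per-call cost
        track_hazards(d);   // hidden per-call cost
    }
};

// "Vulkan/DX12-style": the app records commands up front; the driver just
// hands the pre-built buffer to the GPU in one cheap submit.
struct CommandBuffer { std::vector<Draw> cmds; };
struct ExplicitQueue {
    void submit(const CommandBuffer& cb) {
        std::printf("submitting %zu pre-recorded draws\n", cb.cmds.size());
    }
};

int main() {
    std::vector<Draw> frame{{1}, {2}, {3}};

    ImplicitDriver dx11ish;
    for (const Draw& d : frame) dx11ish.submit(d);    // bookkeeping x N

    CommandBuffer cb;
    for (const Draw& d : frame) cb.cmds.push_back(d); // the app does the work
    ExplicitQueue{}.submit(cb);                       // one cheap handoff
}

The implicit model pays hidden driver costs on every call; the explicit model moves that work into the application, up front, so there's almost nothing left for the driver to be slow at.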
I see nothing anti-competitive going on here, only failure to compete.
Isaiah Ramirez
AMD is so fucking retarded. Can't wait until they finally die.
Noah Allen
>Sup Forums technology
Carter Perry
>Sup Forums paid nvidia shills
Bentley Thomas
that's epic and all, but I've been wondering why AMD seems to be so far up Microsoft's ass lately? They seem to have gimped the PS4 Neo in favour of the new Xbox for no discernible reason, and after donating Mantle to Khronos they seem to be going miles out of their way to do everything they can to promote DX12 over Vulkan. it's really fucking weird.
Kevin Gutierrez
>Huh? I'm fairly sure Mantle is very much open and not proprietary. Same go for DX12 and Vulkan
Mantle never ran on Nvidia.
>Also, it's not the shitty architecture that Mantle is designed to work around, it's the shitty drivers.
Because AMD didn't bother to optimise their DX11 driver.
>It's not “suited for GCN” any more than it's suited for any other type of hardware. Rather, the inverse is true - traditional APIs (DirectX and OpenGL) are *not* suited for GCN.
Shitty architecture in that case then. AMD got lucky that MS basically copied their proprietary API. What if they hadn't?
>AMD's losses were always due to bad driver overhead, the hardware is computationally vastly superior to nvidia - the problem is that for VIDYA GAEMS raw power isn't the only thing that matters, and most of it was being bottlenecked by driver stalling and othe rissues.
Again, why not fix DX11? Oh right, because the architecture is shit and it can't be fixed.
>I see nothing anti-competitive going on here, only failure to compete.
I don't either, just pointing out their tactics and how they got lucky.
Chase Johnson
DELET
Colton Perry
>Mantle never ran on Nvidia.
Because Nvidia didn't want to write Mantle drivers. That doesn't mean the API is proprietary. Do you know the definition of proprietary?
>Created or manufactured exclusively by the owner of intellectual property rights, as with a patent or trade secret.
AMD never had any patent claims on Mantle. In fact, they released the spec and encouraged Nvidia to also write Mantle drivers.
So it's not proprietary.
>Because AMD didn't bother to optimise their DX11 driver.
“Driver optimization” is really a misleading term. I'll tell you what “driver optimization” looks like:
switch (game_being_played) {
case "game A":
case "game B":
case "game C":
...
case "game Z":
    // swap the game's own shaders for hand-tuned replacements
    shader = load_replacement(game_being_played);
    break;
}
You read that right. Every “driver optimization” is essentially just replacing the shaders used in video games with optimized versions. Turns out, Nvidia isn't as good at designing hardware as they are at taking game developers' shaders and rewriting them to be optimized for their hardware instead.
As an analogy, imagine if Intel CPUs detected the program you were running on them at runtime and replaced it with a version optimized for Intel (e.g. as produced by icc instead of gcc).
Also, just like icc-produced binaries are known to run significantly worse on AMD hardware than gcc-produced binaries, so too do shaders optimized for nvidia hardware run significantly worse on AMD hardware. Comparing nvidia vs AMD in GameWorks games is pretty much the equivalent of using a binary produced by ‘icc’ to benchmark Intel vs AMD CPUs: In other words, complete and utter bullshit.
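If you've never seen it, the vendor-dispatch pattern looks roughly like this. To be clear, this is a sketch of the pattern, not icc's actual code (assumes GCC/Clang on x86, where __get_cpuid comes from <cpuid.h>):

// Sketch of vendor-based dispatch. Illustration of the pattern only,
// not icc's actual implementation.
#include <cpuid.h>
#include <cstdio>
#include <cstring>

// Returns true if the CPU reports the "GenuineIntel" vendor string.
static bool is_genuine_intel() {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return false;
    char vendor[13];
    std::memcpy(vendor + 0, &ebx, 4);  // CPUID leaf 0 packs the vendor
    std::memcpy(vendor + 4, &edx, 4);  // string into EBX, EDX, ECX
    std::memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    return std::strcmp(vendor, "GenuineIntel") == 0;
}

int main() {
    if (is_genuine_intel())
        std::puts("taking the fast, vectorized code path");
    else
        std::puts("falling back to the slow generic path");
}

The dirty part is branching on the vendor string instead of on actual feature flags, so a non-Intel CPU that supports the exact same instruction sets still lands on the slow path.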
If you want to test GPU performance objectively, you have to use benchmarks that aren't designed to exploit one competitor's hardware more efficiently than others' - and then also use a benchmark that the drivers don't have special-cased bypasses for.
Brayden Lewis
>and then also use a benchmark that the drivers don't have special-cased bypasses for.
And it turns out that if you do this, AMD beats Nvidia in the price/performance department by a gigantic margin. The difference is almost 200% in raw tests for e.g. compute power (see LINPACK) or texturing throughput (see e.g. mpv).
tl;dr the entire industry is pretty much stacked against AMD, and Vulkan is their best way out of this conspiracy.
Hudson Carter
>Mantle never ran on Nvidia.
lol
Nope it never did, but I can tell you don't even know why it never did. Educate yourself on these topics before you post stupid shit like "make your own proprietary API".
Nathaniel Lee
>Mantle never ran on Nvidia.
because Nvidia never implemented it. how the fuck is that AMD's fault, you dumb twat
>Because AMD didn't bother to optimise their DX11 driver.
this is true. however, they appear to be rewriting their driver stack from the ground up. could be that the old code was just too shitty to work with.
>Shitty architecture in that case then. AMD got lucky that MS basically copied their proprietary API. What if they hadn't?
the shitty architecture is the one that gets destroyed when you use modern engines and modern specs, which in this case would be Pascal
from a pure marketing standpoint it does work to employ near-sighted tactics that rely on rendering 2-year-old games faster, because that's all the leddit kids who go on Sup Forums for kicks look at
the only AMD "tactic" you need to care about is the one that will phase this out in favor of modernity and superior standards, rather than focusing on making as much money off of men-children as possible
Blake Perez
There are a lot of GPU threads alive, pick one and keep your cancer in there.
Ryder Hill
when do you get your whistle and hall monitor sash?
Bentley Brooks
I would pay 20 San Francisco strippers to suck your dick for dropping truth bombs on a bunch of gaymers who don't understand anything but marketing.
Caleb Sanchez
Examples:
In the $330-ish price class (last gen) we had the GTX 970 and AMD R9 390.
Really, if you know how to optimize your shaders for your GPU you would be absolutely mental *not* to buy an AMD GPU.
The only thing nvidia has going for it is lots and lots of muscle in the game development department to sneak in their biased shaders and/or overrides - they're comparatively rubbish at designing hardware.
Elijah James
Too bad they will keep flooding Sup Forums with their shit anyways...
Nathaniel Adams
Cheers.
Justin Thompson
I heard they offered cooperation but Nvidia turned them down? whatever, I don't even know what you're really talking about haha
Brody Nguyen
b-b-b-but techpowerup said it gets more FPS in battlefield 3
Benjamin Bennett
>shitty proprietary API
>that improves performance on both your own and competitors' cards
FTFY
Elijah Barnes
Their only mistake is they never actually fixed their shitty drivers.
Joshua White
amd gave all their mantle code to khronos so it still lives on in vulkan.
Kayden Cook
Nvidia, fastest in DX12 and will run FL12_1 games
Poolaris doesn't support FL12_1 and will not run any future games, 0 FPS
Enjoy your SM2.0 cards in SM3.0 games all over again, aka it doesn't run at all
Easton Ortiz
lol, at the same time you talk about drivers using shader replacement (which is only a small aspect of driver optimization compared to compiler improvements in general), then you act like AMD can't do the same thing for GameWorks games
Charles Diaz
Enjoy your no async nvidiot
Ian Nguyen
Pure specs =/= architecture.
Hudson Sanchez
GTX 1080 and the new Titan are still better than anything AMD ever made, in DX12/Vulkan too.
Dylan Nelson
like if a game is made only of shaders
Juan Baker
>Muh $1200 vidya card
You do realize 99.999999% of people will never own either of those overpriced monsters, right? There's a reason eSports games are so popular, and it's not because you need a fucking Titan X to run them at decent FPS.
Nathan Cox
>FL12_1
What is that?
Landon Young
everything is rendered with shaders
shaders and resource management.
David Cooper
>Underhanded tactics are okay because the competitor can apply them too
Jonathan Jones
>implying a 1080 is $1200
Lincoln Ortiz
shader replacement is an underhanded tactic?
you seem confused
did you mean "underhanded tactics are ok because the competitor can work around them?"
William Price
Modern GPUs have essentially no fixed-function capabilities. Everything is done through shader invocations, including even generating geometry and textures in modern games. (see: tessellation, geometry shaders, dynamic texturing, etc.)
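For illustration, here's the classic minimal GLSL pair embedded as C++ string constants - nothing exotic, just to show that even a flat-coloured triangle goes through two programmable stages, because there's no fixed-function path left to fall back on:

// Even trivial rendering needs programmable vertex and fragment stages.
// These are the standard minimal "hello triangle" shaders.
#include <cstdio>

const char* vertex_src = R"(
#version 330 core
layout(location = 0) in vec3 pos;
void main() { gl_Position = vec4(pos, 1.0); }
)";

const char* fragment_src = R"(
#version 330 core
out vec4 color;
void main() { color = vec4(1.0, 0.5, 0.2, 1.0); }
)";

int main() {
    // In a real program these would go to glShaderSource/glCompileShader;
    // here we just show that both stages exist even for trivial output.
    std::printf("vertex stage:\n%s\nfragment stage:\n%s\n",
                vertex_src, fragment_src);
}

Replace either of those two programs with a hand-tuned version and you've done a "driver optimization".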
Eli Lewis
>Underhanded tactics are okay if used to work around underhanded tactics
Ryan Green
what is wrong with shader replacement?
Adam Hernandez
Something that Nvidia has less support for than Intel does with their Skylake iGPU.
Caleb Phillips
nothing so long as they are equal in visual quality
Ayden Wright
Because Nvidia saw no point in spending time and money writing drivers for an API made for their competitor's architecture?
Caleb Gomez
>the shitty architecture is the one that gets destroyed when you use modern engines and modern specs, which in this case would be Pascal
Oh really, in which games does Pascal get destroyed?
Liam Harris
it's strange to see so many AMD fanboys crying over gimpworks when AMD was trying to do the same gimping bullshit with TressFX (and now '''GPUOpen''') literally years ago in 2011-2013. even to this day Tomb Raider and Crysis 3 still gimp the fuck out of Nvidia GPUs.
Michael Richardson
He is sort of right.
DX11 does allow for "automatic async" if the driver is smart enough. I'm not sure about the multithreading part. Maybe he means that nvidia's method of multithreading their drivers doesn't work in dx12? I don't know.
Nicholas Gray
It's even more amusing to watch Sup Forums violently deny the implications of DX12 and Vulkan despite the mounting evidence that maybe Nvidia isn't so hot at it.
Liam Johnson
is this b8?
Christian Roberts
It's an underhanded tactic to use them for benchmark targets or programs, because it's misrepresenting the capabilities of your GPU.
It's impossible to replace every shader out there, so many games and applications will be unaffected by the “optimizations”.
If you're using games with “optimized replacement shaders” as the source of your benchmarks, then those benchmarks are biased. Similarly, if you're optimizing specifically for games and programs that you know benchmark websites use, you're making yourself seem better than you are at the cost of making actual real-world usage suffer.
Jack Johnson
Oh great! Another paid shill thread. Are there no mifs here or what?
Isaac Martin
wut? all I've seen is AMD shills spouting off about things they don't know about. currently NVIDIA is outperforming AMD across the board in every DX12 game (except for the gimped AMD-sponsored AotS)
Jaxson Taylor
>Underhanded tactics are okay if the competitor is using them too
So many fallacies ITT
Aaron Perry
>at the cost of making actual real-world usage suffer
How does making one program run faster make another slower?
Joseph Perez
Oh great! Another paid shill thread. Are there no mods here or what?
Noah Collins
then AMD should start by practicing what they preach and stop trying to gimp NVIDIA's performance.
Landon Garcia
>make a shitty architecture with garbage perf/W
>don't bother to optimise for the existing APIs (DX12/Vulkan)
>sell Sony chips based on your shitty architecture for almost non-existent profit
>make your own proprietary effects to get decent performance out of your shitty architecture (Nvidia GameWorks)
>enjoy lead in DirectX 11 games due to an API suited for your cards, and suck-ass performance in DX12
Jayden Reed
>He is sort of right.
No, he is not. EVERYTHING in that picture is complete bullshit. Automatic async is straight up gibberish.
> Maybe he means that nvidia's method of multithreading their drivers doesn't work in dx12? I don't know.
For Nvidia absolutely nothing changes - it's that GCN is constantly hammering thread 0 for work, and that tends to drag CPUs down, as modern CPUs aren't really fast enough to keep up with top-end GPUs when feeding said GPU commands over only a single thread (remember, the CPU has its own shit to work on too). Nvidia managed to get their driver multi-threaded under DX11 (IIRC since the so-called wonder driver for Kepler) but AMD never managed it. Now under DX12/Vulkan the GPU can keep requesting work from multiple threads so as to not cripple the CPU.
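The pattern is easy to sketch. This is a toy model only - the types are made up and it's not real D3D12/Vulkan code, just the threading shape of "record on N threads, submit once":

// Toy model of DX12/Vulkan-style multi-threaded command recording.
// Types are invented; this shows the threading pattern, not the real APIs.
#include <cstdio>
#include <thread>
#include <vector>

struct CommandList { std::vector<int> cmds; };

int main() {
    const int workers = 4;
    std::vector<CommandList> lists(workers);

    // Each worker records its own command list in parallel, no lock needed.
    std::vector<std::thread> threads;
    for (int t = 0; t < workers; ++t)
        threads.emplace_back([&lists, t] {
            for (int i = 0; i < 1000; ++i)
                lists[t].cmds.push_back(i);
        });
    for (auto& th : threads) th.join();

    // Under DX11, all of the recording above would instead funnel through
    // the driver on a single thread - the "hammering thread 0" problem.
    size_t total = 0;
    for (const auto& l : lists) total += l.cmds.size();
    std::printf("submitting %zu commands recorded on %d threads\n",
                total, workers);
}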
>stupidity
AMD by far has the advantage in DX12 titles - it's basically only Tomb Raider where AMD doesn't murder its equivalent Nvidia card.
Oliver Smith
>AMD by far has the advantage in DX12 titles - it's basically only Tomb Raider where AMD doesn't murder its equivalent Nvidia card.
[citation needed]
the 1060 is currently outperforming even overclocked 480s across the board, even in gimped AMD-sponsored games.
Cameron Ross
>make a shitty architecture with garbage perf/W
?
Ryder Cook
Yea, I just wanted to see if he knew what FL12_1 was. He just seems to be another fuck who saw that AMD does not support it and is just shit-talking with no knowledge.
Jaxon Carter
>>make a shitty architecture with garbage perf/W
AMD can't even make an architecture that beats NVIDIA's 28nm perf/w
Aiden Rogers
Haven't read so much bullshit in a long time, great work.
Sebastian Martin
No, in DX11 all dependencies are tracked. This means that even if the application doesn't explicitly say "this is safe to execute in parallel" (which it can't in DX11), the driver is able to determine whether it's safe.
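Conceptually the driver-side check looks something like this (hypothetical types and names - a sketch of the idea, not real driver code):

// Toy illustration of driver-side hazard detection: two batches can
// overlap only if neither writes a resource the other touches.
#include <cstdio>
#include <set>

struct Batch {
    std::set<int> reads;   // resource IDs read by this batch
    std::set<int> writes;  // resource IDs written by this batch
};

static bool touches(const std::set<int>& a, const std::set<int>& b) {
    for (int r : a)
        if (b.count(r)) return true;
    return false;
}

// Safe to run concurrently iff there is no write/read, read/write,
// or write/write overlap between the two batches.
static bool safe_to_overlap(const Batch& x, const Batch& y) {
    return !touches(x.writes, y.reads) &&
           !touches(y.writes, x.reads) &&
           !touches(x.writes, y.writes);
}

int main() {
    Batch shadow_pass{{1}, {2}};  // reads resource 1, writes resource 2
    Batch post_fx    {{3}, {4}};  // reads resource 3, writes resource 4
    std::printf("overlap safe: %s\n",
                safe_to_overlap(shadow_pass, post_fx) ? "yes" : "no");
}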
Jack King
All Nv architecture is shit. No ACEs, no scheduler. When they put a scheduler back in, watch the power draw go up and the housefire meme come back into effect. Don't you know that's why they stripped it out in the first place?!
Jordan Hernandez
You know it's true though, AMD got really lucky. What if Nvidia had decided that they could get more performance out of their cards than with DX11 and developed their own API, which Microsoft later imitated?
Landon Adams
>No Scheduler
You can just say "I'm retarded" next time.
Robert Jenkins
They did not gimp the Neo. That was all on Sony for under-designing the APU SoC. Sony had options, but they refused to use them.
Angel Lee
The 1060 and 480 aren't the only GPUs AMD and Nvidia make. Plus, the reference 480 has roughly a 12% lead over the reference 1060 in DX12.
Robert Johnson
Hitman runs 20% better on AMD than Nvidia and 1060 still has better perf/W.
Justin Diaz
Serious question!!!
What happened to G-Sync? They had like 5 monitors for 2016
Xavier Cox
Monitor manufacturers outside of Asus don't want to pay the Nvidia tax.
Chase Cooper
Shit's fucking expensive due to Nvidia's licensing fees, and no one bought it. Like $700 minimum for a damn monitor, and Sup Forums thinks Nvidia wouldn't be a bunch of price-gouging Jews if AMD was gone. They'd make the price of a good GPU equivalent to buying a new car if they could.
Angel Rodriguez
AMD shillbot
Adam Lopez
> the hardware is computationally vastly superior to nvidia
and that's before taking into account AMD's shitty optimization
Kevin Carter
Nvidya outjewed themselves.
Brandon Diaz
...
Camden Butler
and poor utilization, mainly due to much lower geometry performance
Nathan Perez
it's easy to cherry pick benches that look even worse for amd
Logan Miller
Cause Nv has fewer modules to power up on their cards. When AMD makes a new arch they keep some of the old modules and add on new modules. When Nv makes a new arch they have all new modules. This means Nv has less power draw and can be more efficient. For AMD, since GCN 2 all the way to GCN 4 shares some of the same modules, it can be supported for a long time. That's why AMD's old cards still run well. That's why AMD's 7 series can support the new Vulkan API. Nv and AMD have different philosophies in GPU design.
Ayden Garcia
Fury Nano, 28nm, 8.2 TFLOPS, 175W. Superior to the 1070. Stop posting, kid.
Carter Wright
Round and round we go.
Stop posting.
Gavin Green
no one can play a bench. they're a useless way to rate a card.
David Robinson
By the emperor you are right.
Jaxon Jones
More like 7.2 TFLOPS at 185W, meanwhile the 1070 does 6.9 TFLOPS (because the actual boost clock is higher) at 145W.
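(For reference, peak FP32 is just shader count × 2 ops/clock × clock speed, so the number hinges entirely on which clock you assume. Nano: 4096 × 2 × ~0.88 GHz ≈ 7.2 TFLOPS at its typical sustained clock, or 8.2 TFLOPS at the rated 1000 MHz. 1070: 1920 × 2 × ~1.8 GHz ≈ 6.9 TFLOPS at its real-world boost, versus ~6.5 TFLOPS at the advertised 1683 MHz boost.)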
Lincoln Jackson
>Nightmare Quality
But that's the difficulty.
Jace Rodriguez
As well as the top preset for graphics settings. In fact, you can't even select it on 4GB cards without a workaround. Interestingly, it does show that 4GB isn't holding the Fury X back.
Joshua Ross
While being 16nm vs 28nm.
Aaron Walker
no, they would drop the prices down A LOT so new companies have no chance to succeed
Camden Foster
Don't forget the R9 Nano was made of the best binned Fiji chips.
Luke Allen
What the fuck, I don't think I've seen that option and I have a 390X. Is it available on Vulkan?
Liam Howard
With the amount of Jewing they did with G-Sync and the whole Founders Edition bullshit? Nah.
Evan Nguyen
That changes nothing - the Nano released in that state.
It's available on both APIs.
Mason Hall
TressFX was open source and so is GPUOpen. If you can't understand why that matters then you're a fucking retard.
Bentley Flores
>TressFX was open source and so is GPUopen
open to read, not open to modify and redistribute, which is exactly the same as the gimpworks SDK.
Joseph Wilson
it doesn't really. source code doesn't show bottlenecks. it's much better to run the application through a debugger/profiler.
Nathaniel Ramirez
It changes a lot, or do you think perf/W decreased with the 480 over 28nm chips?
Leo Williams
Their arch still has shit clock-to-power scaling. 7% overclock for 18% extra power usage. Best binned or not. You'd probably see near-Nano power usage if you undervolted and underclocked a Fury X to match it.
Adrian Cox
>open to read
Which is why Nvidia was able to fix their TressFX performance just like AMD did with their gimpworks performance. Oh wait.
Owen Rivera
lol
Gabriel Thompson
nvidia fixed their TressFX performance long before the code was open