Does AMD FineWine actually exist or is it just a bunch of shills hyping it?

It's not an actual technology, but AMD cards do tend to age better than Nvidia cards, if that's what you're asking

I don't know about AMD, but the running meme that Nvidia kills their old series through new drivers has been debunked by several reviews.

You can look at it either way.
One could also say that each time, AMD fails to extract the maximum out of their hardware at launch and needs time to do it.
Meaning you don't get the full power available from day one.

Basically yes, especially since they seem more committed to keeping Crossfire alive than nvidia is to keeping SLI

>780ti
>980
>both losing to a 290x with 50mhz core boost (aka 390x)
>both Nvidia cards beat every single 290x when they were released

Make up your own mind.

>AMD cards tend to age better

>AMD releases card before it is "ripe"
>Comes into age 2-3 years later when they finally release drivers that work
>"AMD cards age better!"

no, but technically yes. since amd is a fan of re-branding, optimizations they do for "newer cards" often get passed down by default since they're just re-branded.

for example, the 390(x) tessellation optimizations amd did in the drivers also benefited the 290(x) series, since the 390 series were simply re-brands of the 290 series.

nvidia used to be a fan of re-brands as well, but lately hasn't been the re-brand whore it used to be. so a lot of the work nvidia does for one series won't be passed down to a previous series, leaving older cards with few optimizations, since nvidia now has to go out of its way to purposely optimize previous, older generations.

so all you can hope for is nvidia not gimping previous generations and at least keeping their performance the same as it was prior to the launch of a new title.

It exists because of how AMD builds its cards versus how Nvidia does.

Nvidia likes to develop for the immediate present.

Their cards have tiny bus widths, no hardware schedulers, and little to no compute. They also use tile-based rendering, similar to how a PS4 Pro or a mobile GPU renders graphics. In exchange they cram in a shit ton of cores or make smaller chips. This gives them a lot of thermal headroom and lets them hit such crazy clocks, because the hardware is streamlined and focused on efficiency.

As games become more technologically demanding, what was great at the time of release starts to lose. AMD, meanwhile, gives you larger memory buses and hardware-based task schedulers, and they helped develop the new APIs, so they push for more compute; hardware schedulers offer more performance than software-based scheduling once games actually lean on compute.
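
rough numbers to make the bus width point concrete (stock reference specs from memory, so double check them): bandwidth = (bus width / 8) x effective memory rate. 290X: (512 bit / 8) x 5 GT/s = 320 GB/s. GTX 970: (256 bit / 8) x 7 GT/s = 224 GB/s. the amd card simply has more raw bandwidth in reserve as games get heavier.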

So AMD builds their cards to last, but they're harder to develop for when no one takes advantage of the hardware. AMD drivers start off less optimized, and get optimized further over time.

That's what "finewine" is really all about. Do you want to update your GPU every year with forced obsolescence? Or do you want to hang on to it for a bit longer?

amd gets a lot of shit for re-branding but people need to realize there is a huge benefit to this, and it's the optimization of previous generations. it makes it A LOT easier.

also noteworthy is how amd has been using a somewhat standard universal architecture over the years: gcn. yes, each generation brings improvements and modifications, but the base architecture is the same. that makes it easier for amd to maintain and optimize previous generations of cards. a tweak for one generation can easily benefit all previous generations if it's a tweak to the base of gcn.

> tile based rendering
this isn't a negative. it's a plus. so much of a plus that amd adopted it for vega.

Nvidia has been doing the same thing, sort of, since Kepler, all the way up to Volta.

They change a few things, but at its heart it's still Kepler.

I never said it was. It helps keep power consumption down with no discernible visual or performance hit.

I was just explaining Nvidia's designs and how they do them.
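
for anyone wondering what tile-based rendering actually means: the gpu sorts triangles into small screen tiles first, then shades one tile at a time out of on-chip memory, so dram only gets touched once per tile. here's a toy cpu-side sketch of the binning step (the tile size and names are made up for illustration, this is obviously not nvidia's actual rasterizer):

    #include <algorithm>
    #include <vector>

    // screen-space bounding box of one triangle, in pixels
    struct Tri { float minX, minY, maxX, maxY; };

    constexpr int TILE = 16;  // 16x16-pixel tiles, a typical size

    // bin triangle indices into per-tile lists; a tiler then shades each
    // tile entirely in on-chip memory and writes it out to dram once
    std::vector<std::vector<int>> binTriangles(const std::vector<Tri>& tris,
                                               int width, int height) {
        int tilesX = (width  + TILE - 1) / TILE;
        int tilesY = (height + TILE - 1) / TILE;
        std::vector<std::vector<int>> bins(tilesX * tilesY);
        for (int i = 0; i < (int)tris.size(); ++i) {
            // clamp the triangle's bounds to the screen, then drop its
            // index into every tile its bounding box overlaps
            int x0 = std::max(0, (int)tris[i].minX / TILE);
            int y0 = std::max(0, (int)tris[i].minY / TILE);
            int x1 = std::min(tilesX - 1, (int)tris[i].maxX / TILE);
            int y1 = std::min(tilesY - 1, (int)tris[i].maxY / TILE);
            for (int ty = y0; ty <= y1; ++ty)
                for (int tx = x0; tx <= x1; ++tx)
                    bins[ty * tilesX + tx].push_back(i);
        }
        return bins;
    }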

>Release before ripe
>Still competitive with green ale
>Ages better than green ale.

the way nvidia has designed its architecture since kepler actually requires more work on nvidia's part. a huge example is removing the main hardware-based scheduler and moving it over to software. each generation since kepler, nvidia has moved more and more off the hardware and over to software. this requires far more work on nvidia's part, and it also means a lot of optimizations for one generation of hardware are generation-specific. there is a reason why nvidia releases "game ready" drivers. they sorta have to.

amd has gotten a lot of crap over the years for the "lack" of optimized dx11 drivers when in reality amd can only do so much. gcn's architecture is very hardware-based and better suited to apis that can access its hardware directly. amd doesn't have much, compared to nvidia, to tweak for dx11. it's why amd receives such a huge boost from dx12- and vulkan-type apis, as those apis favor direct hardware access over software. kepler and above are very software-based and really don't get much of a boost from low-level apis, which is why nvidia either gains very little or actually suffers regressions. that has been noted numerous times, especially with maxwell, so much so that nvidia disabled things like async compute completely on maxwell-based cards.

now there are benefits. a hardware-based scheduler can't really be tweaked after it's built, but software can. it's a lot of work, but at least with software you can tweak it on the fly. the downside is that it's card-generation-specific. there is a performance impact, but with multi-core cpus being the norm, that impact is negligible at best. people praise nvidia for their "multi-threaded!" drivers, but nvidia HAD to do it once they adopted software-based scheduling with kepler. they needed to make sure all the performance-hurting areas of their drivers were eliminated as much as possible.
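
to picture what "scheduling in software" means: the driver keeps its own cpu-side queue of gpu work and feeds it from worker threads, so the ordering logic is driver code nvidia can patch at will instead of fixed silicon. a toy sketch of that shape (the class and names are mine, nothing like nvidia's real driver internals, just the concept, and it shows where the cpu overhead comes from):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // toy "software scheduler": worker threads pull gpu work items off a
    // cpu-side queue. the ordering logic lives in updatable driver code
    // (and burns cpu cycles) instead of in fixed-function hardware.
    class SoftScheduler {
        std::queue<std::function<void()>> q;
        std::mutex m;
        std::condition_variable cv;
        std::vector<std::thread> workers;
        bool done = false;
    public:
        explicit SoftScheduler(unsigned n) {
            for (unsigned i = 0; i < n; ++i)
                workers.emplace_back([this] {
                    for (;;) {
                        std::function<void()> job;
                        {
                            std::unique_lock<std::mutex> lk(m);
                            cv.wait(lk, [this] { return done || !q.empty(); });
                            if (done && q.empty()) return;
                            job = std::move(q.front());
                            q.pop();
                        }
                        job();  // stand-in for "submit this batch to the gpu"
                    }
                });
        }
        void submit(std::function<void()> job) {
            { std::lock_guard<std::mutex> lk(m); q.push(std::move(job)); }
            cv.notify_one();
        }
        ~SoftScheduler() {
            { std::lock_guard<std::mutex> lk(m); done = true; }
            cv.notify_all();
            for (auto& w : workers) w.join();
        }
    };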

> One could also say that each time, AMD fails to extract the maximum out of their hardware at launch and needs time to do it.
I'm really curious whether these improvements come only for games, or whether other software using the gpu also benefits. Has anybody tested, say, what cuda/opencl performance was on release day versus a few years later? Any links with this information appreciated.

anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3
like the anandtech article mentions, nvidia was able to get away with moving the hardware scheduler off the gpu and into software because the hardware scheduler wasn't being fully utilized. for dx11, dx10, dx9, etc., a hardware scheduler was overkill for what was offered at the time. those apis were not complex enough to benefit from all the power a hardware scheduler has to offer. dx11 and its predecessors were not parallel-type apis like dx12 & vulkan are. things like asynchronous compute weren't really possible with dx11 and below, so you didn't need to worry about running multiple things in parallel, or a whole lot of different things at the same time, such as compute + graphics. things like that are very complex and need a hardware scheduler to get the most performance out of them. nvidia also didn't think an api would ever come out that would provide more low-level hardware access.
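
you can actually see those parallel queues from the application side in the new apis. in vulkan, for instance, you ask the device for a queue family that supports compute but not graphics; on gcn such queues are serviced by the hardware schedulers (the ACEs) and can run alongside the graphics queue. a rough sketch (the function name is mine, error handling omitted):

    #include <vulkan/vulkan.h>
    #include <vector>

    // find a queue family that supports compute but not graphics -- the
    // classic "async compute" queue that can run alongside graphics work
    int findDedicatedComputeFamily(VkPhysicalDevice dev) {
        uint32_t count = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(dev, &count, nullptr);
        std::vector<VkQueueFamilyProperties> families(count);
        vkGetPhysicalDeviceQueueFamilyProperties(dev, &count, families.data());
        for (uint32_t i = 0; i < count; ++i) {
            bool compute  = (families[i].queueFlags & VK_QUEUE_COMPUTE_BIT) != 0;
            bool graphics = (families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
            if (compute && !graphics) return (int)i;  // dedicated compute family
        }
        return -1;  // none exposed; compute still runs on the graphics queue
    }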

nvidia downplayed compute and didn't envision a future api ever including such features. they really thought apis would continue like dx11, and they didn't think developers would want more access to the hardware. techreport.com/review/26239/a-closer-look-at-directx-12/3

normally the problem with a software scheduler is overhead, but since dual and quad cores have been the norm since 2007, nvidia was able to get around this by making their drivers multithreaded for dx11.
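
the app-visible half of that dx11 multithreading is deferred contexts: worker threads record command lists and the main thread replays them. nvidia's driver-internal threading sits underneath this and isn't public, so take this sketch as the part you can see, not their secret sauce (helper name is mine, error checks dropped):

    #include <d3d11.h>

    // record work on a worker thread via a deferred context, then replay
    // it on the immediate context from the main thread
    void recordAndReplay(ID3D11Device* device, ID3D11DeviceContext* immediate) {
        ID3D11DeviceContext* deferred = nullptr;
        device->CreateDeferredContext(0, &deferred);

        // ... the worker thread issues state changes / draws on `deferred` ...

        ID3D11CommandList* cmdList = nullptr;
        deferred->FinishCommandList(FALSE, &cmdList);   // bake the recording
        immediate->ExecuteCommandList(cmdList, TRUE);   // replay on main thread
        cmdList->Release();
        deferred->Release();
    }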

a software-based scheduler was more than enough for dx11, and it also gave them the ability to modify the scheduler on the fly, unlike a hardware scheduler, which once built couldn't be modified. you could only work around it. if you wanted to release an updated version, you had to release a whole new card.

dx12 fixes the overhead issue for amd. their drivers for dx12 are multithreaded (they have to be for dx12), so amd no longer faces higher overhead and nvidia no longer has this as an edge over amd. driver overhead between the two in dx12 is very similar; it's very low. but this time around amd does have an edge, with hardware better suited to the parallel nature of dx12 & vulkan. with dx12 a hardware scheduler is better suited.
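
and in dx12 the multithreading is baked into the api itself: every thread records its own command list and the whole batch goes to the gpu in one submit, so the driver has far less to do per draw. a rough sketch (function name is mine; device/queue/allocator setup and fencing omitted):

    #include <d3d12.h>
    #include <thread>
    #include <vector>

    // each thread records its share of the frame into its own command
    // list; one ExecuteCommandLists call then submits everything at once
    void recordFrameInParallel(ID3D12CommandQueue* queue,
                               std::vector<ID3D12CommandAllocator*>& allocs,
                               std::vector<ID3D12GraphicsCommandList*>& lists) {
        std::vector<std::thread> threads;
        for (size_t i = 0; i < lists.size(); ++i)
            threads.emplace_back([&, i] {
                allocs[i]->Reset();
                lists[i]->Reset(allocs[i], nullptr);  // no initial pso in this sketch
                // ... thread i records its draws here ...
                lists[i]->Close();
            });
        for (auto& t : threads) t.join();

        std::vector<ID3D12CommandList*> raw(lists.begin(), lists.end());
        queue->ExecuteCommandLists((UINT)raw.size(), raw.data());
    }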

a lot of games share the same engine. for example, with games utilizing unreal engine 3, most will perform similarly to one another, unless the developer is absolutely inept. so if a new game comes out using, say, unreal engine 3, and all nvidia cards since fermi already had optimizations for the unreal 3 engine, they're going to perform pretty well and at the level you would expect. but with newer engines, that's when you can start seeing oddities. especially with indie games that use unique engines, which not too many reviewers actually bench with. they mostly bench with an armada of the same games they have been benching with for the last one to three years, predominantly AAA titles that use common engines that were around at the time of previous-generation nvidia cards.

nvidia actually stated it themselves: when a new generation comes out, the previous generation drops down to a secondary tier. when another generation comes out, the generation two back moves down to legacy.

you can see the legacy cards here:
nvidia.com/page/legacy.html

both the previous and the legacy cards are no longer first priority for optimizations. each tier down they go, the lower in priority they become for bug fixes and optimizations. the previous generation receives equal priority to the new generation for critical bugs but comes second for performance optimizations. they still receive optimizations obviously, just at a slower rate. legacy cards are pretty much on life support.

so no, there is no evidence of nvidia purposely gimping their cards, but there is clear evidence of nvidia not providing many performance optimizations for previous cards once a new generation comes out.

you can clearly see this by reading nvidia patch notes. most of the drivers this past year, when nvidia released "performance optimized drivers", have been heavily targeted at pascal, not maxwell.

fuck off Steve.

m-muh finewine

it's not amd finewine, it's kepler being shit.
maxwell is holding up fine: the 980ti is still faster than the fury x, the 970 is still faster than the rx 470, and the 980 is still faster than the 290x.

Can you imagine the shitstorm if the Rx480 had at least one cherry picked benchmark beating the 980Ti on its launch day?

We've got a couple today, with various levels of shilling to be mindful of, but it won't matter, since it's on a rebrand 6-12 months too late to make a lasting impression on our collective short attention span.

finewine is a myth. most of the 'improvement' is from rebrands being re-benched with overclocks and amd sponsored games. you would see a similar effect on nvidia's side if you only benched with max oced pascal cards and gaymeworks games.

taking marketing from amd/nvidia at face value and brand loyalty overall is a cancer.

it's not a myth. It's a consequence of a small and underfunded driver development team.

...

>GTX780 barely runs games at acceptable frames nowadays
>R9 290x is still relevant however

>not a single Kepler card

>GTX780 barely runs games at acceptable frames nowadays

>tfw my GTX 670 can run games just fine on Medium/High

23.99 fps at 720p is not acceptable

nice shilling you got there, here's an updated version from the same source which happens to be exactly the case in point.

>old as fuck 7970
>no keklers

I ALMOST bought a 770, and the mere fact I even considered it sends chills down my spine.

FineWine is a meme. AMD just decided to use it for marketing purposes because they're still on a version of Graphics Core Next.

Fine wine is just the fact that AMD GPUs have more transistors than equivalently priced Nvidia GPUs.

Nvidia spends a lot of money on driver development for the first wave of reviews, and little support for cards >1 year old.

AMD's architecture is fairly consistent back to GCN 1.0 7900 series. So generally speaking driver improvements will affect all those cards.

AMD's driver team was generally weaker than Nvidia's for the three gaymes reviewers post benchmarks of during launch week. But a good graphics card isn't about getting the best scores with beta drivers so someone can post a review. Unless you're a marketing whore who pays shills to sow dissent among your consumers.

Generally speaking, marketing is the better option for sales, which is why Nvidia has done better than AMD for the last decade. Besides having better drivers on launch day and feeding reviewers money, they always have the "best graphics card", a super expensive large-die card, which makes people think that all their cards are better. Which is wrong. Nvidia does lots of other shit too.

>R9 290x is still relevant however
I have an XFX R9 290 with a lifetime warranty, so I hope the card runs things well enough to warrant using it until it dies.

If you stick with 1080p resolution you'll be fine for a long time, user.

Better idea: use it until Vega, then """"""kill"""""" it and initiate the lifetime warranty for a free Vega. Oh, and wait a bit first though, so they don't get sneaky with you and convince you to accept one of their 580s.

290X is relevant, but much more so if it's the 8GB model.
>t. in my machine I have a 290X and a 260X as a secondary monitor driver

the 8GB model looks beastly as fuck. I wonder if I can still replace my GTX970 with it though.

Yes. You can get them used relatively cheap afaik, and mine is holding up well. The 8GB (+2GB on the 260X) of vram allows me to run 3x 1920x1200p60 in tandem and play games at full framerate.
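
for what it's worth, the displays themselves barely dent the vram: 3 x 1920 x 1200 x 4 bytes is about 26 MB per set of framebuffers, so even triple buffering all three screens stays under 100 MB. the 8GB matters for textures and render targets, not for just driving the monitors.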

it would be useless to do that unless you're running into vram limits. an OCed 970 is much faster than the 290x.

>an OCed 970 is much faster than the 290x.
Is it? would probably like a word with you.

I bought a 770 2GB. JUST fucking kill me right now.

F

I'm sorry.

Learn to read, shill.

Fuck. I am so tired of waiting for Vega. YOU DAMNED LOT, GIVE ME A GOOD RADEON.

Their king is currently constipated.

>I got nothing, I was caught lying
>I know! I'll call him a shill!
>That'll learn him!
You sure showed me user.

See pic related. Hawaii (the R9 290X, rebranded as the 390X) was released in 2013. The GTX 970 was released in 2014.

forgot pic

Y-you can't see past 3.5gb anyway.

Not sure how I'm lying; the 970 pretty definitively beats the 290x when you compare them apples to apples (i.e. stock vs stock, OC vs OC). You haven't even cited any evidence that says otherwise.

970 btfo lmao

learn to breathe with your mouth closed.