Intel to use Radeon for its next-gen iGPUs

Intel to use Radeon for its next-gen iGPUs.
tweaktown.com/news/55341/amd-radeon-gpu-tech-power-intels-next-gen-igpus/index.html

Other urls found in this thread:

investor.nvidia.com/secfiling.cfm?filingid=1193125-11-5134&cik=1045810
bloomberg.com/news/articles/2016-04-06/nvidia-demands-qualcomm-pay-up-after-demise-of-352-million-unit
bloomberg.com/news/articles/2016-05-02/nvidia-settles-with-samsung-over-graphics-processing-chips
gpuopen.com/gcn-shader-extensions-for-direct3d-and-vulkan/
hardwarecanucks.com/forum/hardware-canucks-reviews/73945-gtx-1060-vs-rx-480-updated-review-23.html

There were rumours about this some months ago.
The IP licence should do well for AMD, whether that's simply income from the licence, or some cross-licensed CPU tech from Intel, which could go some way toward Zen+.

Will be interesting either way.

Knowing Intel's market tactics, and AMD's awful negotiating skills...

AMD are probably going to end up paying Intel for each processor they make, instead of Intel paying AMD a licence fee.

If Intel were happy with what they were getting out of the nVidia deal, they would have stayed where they were. Instead Intel are now leasing IP from AMD. That's a strong statement in itself depending on how deeply you want to read into it.

An IP licence isn't really something Intel can bamboozle AMD on. It's just a flat deal: we pay you money, you let us use your things. AMD already covered the costs when they developed the IP, so AMD really aren't going to lose out on this. Whatever they get from the deal is more than they would otherwise get without it in place. It's a good thing for AMD and covers some operating costs, especially if it's anywhere near the $1.5b that Intel paid out to nVidia over the life of their deal. Depending on specifics, AMD could be getting more than that out of Intel.

apparently it's just a patent licensing deal.

The concept of an iGPU is covered by patents, and both Nvidia and AMD hold patents a company needs in order to make iGPUs. Intel used to license from Nvidia, but Nvidia apparently hiked prices, so they want to jump ship to AMD.

I don't think this means Radeon GPUs on Intel CPUs.

AYYYYMD will finally get drivers

>but nvidia hiked prices

The nVidia licence expired this year. Whether nVidia hiked what they wanted for a new deal, or Intel simply decided that AMD knows how to do iGPUs better than nVidia does, and by a wide margin at that, we don't know.

The deal Intel held with nVidia was for $1.5b, paid in annual installments from 2011 to 2016.

>investor.nvidia.com/secfiling.cfm?filingid=1193125-11-5134&cik=1045810

>Under the patent cross license agreement, Intel has granted to NVIDIA and its qualified subsidiaries, and NVIDIA has granted to Intel and Intel’s qualified subsidiaries, a non-exclusive, non-transferable, worldwide license, without the right to sublicense to all patents that are either owned or controlled by the parties at any time that have a first filing date on or before March 31, 2017, to make, have made (subject to certain limitations), use, sell, offer to sell, import and otherwise dispose of certain semiconductor- and electronic-related products anywhere in the world. NVIDIA’s rights to Intel’s patents have certain specified limitations, including but not limited to, NVIDIA is not licensed to: (1) certain microprocessors, defined in the agreement as “Intel Processors” or “Intel Compatible Processors;” (2) certain chipsets that connect to Intel Processors; and (3) certain flash memory products. Subject to the terms and conditions of the patent cross license agreement, Intel will pay NVIDIA licensing fees which in the aggregate will amount to $1.5 billion, payable in annual installments, as follows: a $300 million payment on each of January 18, 2011, January 13, 2012 and January 15, 2013 and a $200 million payment on each of January 15, 2014, 2015 and 2016.
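Quick sanity check on the installment schedule above (this is just re-running the filing's arithmetic in a throwaway Python snippet, not anything from the filing itself):

```python
# Installments from the Intel/NVIDIA cross-license quoted above, in millions of USD.
installments = [300, 300, 300, 200, 200, 200]  # Jan 2011-2013 at $300M, Jan 2014-2016 at $200M

total = sum(installments)
print(f"Total: ${total}M across {len(installments)} annual payments "
      f"(~${total // len(installments)}M per year on average)")
# -> Total: $1500M across 6 annual payments (~$250M per year on average)
```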

NVIDIA IS FINISHED AND BANKRUPT

Intel is not using GCN, or any other AMD arch, for their IGPs.
This licensing deal covers IP fundamental to what a GPU itself is; it is not specific to any architecture. It's just legal maneuvering through patent law.

Intel is essentially licensing IP that says: a GPU does X, Y, and Z.
They were previously licensing similar and equally worthless IP from Nvidia.

That sounds like patent trolling.

Do you just spout words and hope they make sense to someone who then validates your shite?

Patent trolling is claiming a patent infringement, then burying the company in paperwork and legal shit.

This is a simple license deal. What is so hard to understand about that?
This is equivalent to me knowing how to complete a level in Mario. You give me a Snickers bar, I tell you how I do that level, and then you develop your own way from my way or just flat-out use my way.

So, can we assume... Intel will throw weight behind DX12, Freesync will have Intel's backing, Intel might use the GPU as an OpenCL co-processor, etc?

Freesync always had Intel's backing.

As for the OpenCL co-processor, that could well happen.
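As a rough sketch of what "OpenCL co-processor" looks like from an application's side (this assumes the pyopencl package and a working OpenCL runtime, and is purely illustrative, not anything announced in the deal): the iGPU simply shows up as another device you can enumerate and check for shared host memory.

```python
import pyopencl as cl

# List every GPU the OpenCL runtime exposes; an iGPU shows up like any other device,
# and HOST_UNIFIED_MEMORY tells you whether it shares physical memory with the CPU.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        if dev.type & cl.device_type.GPU:
            unified = dev.get_info(cl.device_info.HOST_UNIFIED_MEMORY)
            print(f"{platform.name}: {dev.name} (host-unified memory: {bool(unified)})")
```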

Simple licensing deal to allow Intel to keep making their iGPUs, since their deal with Nvidia expired this year. The rumor is Nvidia wanted more money and AMD wanted pretty much what Intel was paying before. If it's the same deal as before, it's $1.5 billion paid in annual installments, which for AMD is actually going to be huge: roughly $250 million of essentially free money every year. They could use it to pay off their debt, cover costs, or pad numbers. All good for my AMD stock huehuehue.

Nvidiots BTFO!

I'm referring to:
>IP fundamental to what a GPU itself is, it is not specific to any architecture
As patent trolling.

If true, that means AMD and nVidia don't provide any smart technical solutions.
Just that they hold a very vague non-technical "patent" on any kind of GPU.

ie: they came up with the idea that graphics could be done on dedicated hardware and now nobody else can make graphics hardware without paying them.

I seriously doubt this is the case here though.

Intel doesn't need to back DX12, honestly. DX12 is objectively leagues beyond DX11; whether or not you like it or agree, DX12 will completely phase out DX11 as soon as it's convenient for API clients to build with it in mind.
Intel will definitely give DX12 a boost in popularity, but Intel has always been the first to adopt new tech and the last to make it usable and client-friendly; they're basically the dickwaver of tech companies.
As far as hardware goes, Intel has been pushing for high-quality iGPUs lately with the Iris-rivals-a-750 stuff, so I'm not surprised they're going with AMD as soon as they could.

>Just that they hold a very vague non-technical "patent" on any kind of GPU
That's what a patent is.

You realise Apple hold the patent on "A rectangle with rounded corners" right?

Apple uses them so yes

>AMD APUs have shit processor but great graphics
>Intel mobile processors have shitty graphics
>Combine them and make an APU with great graphics and processor
It would be good, but I doubt that's the case. They probably just want to borrow some ideas from GCN or prevent patent trolling.

Nvidia was patent trolling Samsung and Qualcomm over integrated GPUs, so yes, Intel wants some protection against that, so they struck a deal with AMD:
bloomberg.com/news/articles/2016-04-06/nvidia-demands-qualcomm-pay-up-after-demise-of-352-million-unit
bloomberg.com/news/articles/2016-05-02/nvidia-settles-with-samsung-over-graphics-processing-chips

>iGPU
Who gives a fuck?

>I seriously doubt this is the case here though
This is exactly the case here. Seriously, the industry is a minefield of patents. Intel and Nvidia have an INSANE amount of bad blood between them, so I can imagine Nvidia refusing to renew the license purely because Intel doesn't have anything to leverage anymore.

Remember the whole chipset bullshit that they had? There was also Project Denver, the in-house CPU design Nvidia had for Tegra, which was supposed to support x86 using some really cool tech. I'm sure they've had many other court cases as well.

Seriously. Fucking poorfags.

>What are laptops?

AMD ones are pretty fucking 1337. I have an A10-7870K and get 40-50 FPS in BF4 in low 1080p settings (except textures set to ultra). Don't plan on buying a dedicated graphics card for a while.

Oh yes, buy a $400 graphics card instead go... I mean guys.

It's not even about poorfags; it's just that iGPUs are already way better than they need to be in the first place.
For any actual gaming or work that requires a decent GPU you will need a dedicated one, iGPUs will never catch up to even low mid tier dedicated GPUs.

>are pretty fucking 1337
you need to be 18 to post here friend

see

see
learn to read the entire thread, my underage friend

>iGPUs will never catch up to even low mid tier dedicated GPUs.
PS4 Pro's APU is pretty beefy though; its performance is between an RX 470 and RX 480. Raven Ridge with HBM2 and Vega cores will further advance integrated GPUs.

Oh sorry, didn't even see you were replying to him. Anyway, I would settle for those frame rates at 1080p. Most of what I play now is indie games or RTS vidya anyway.

>>iGPUs will never catch up to even low mid tier dedicated GPUs.
>PS4 Pro's APU is pretty beefy though; its performance is between an RX 470 and RX 480.
lelno. More like an Rx 460. Rx 480 is like 3X better performance.


It was known for a year now: their contract with Nvidia ended and they decided not to continue because Nvidia sucks at inter-corporate relations.

Nintendo is in for a hell of a ride; they're about the last company that hadn't worked with them yet.

>YFW Intel will join the GPU war
>TFW Intel, AMD and Nvidia

But at low settings you can pretty much already get the same, or at least very similar, performance with the current Intel iGPU.
The main issue with iGPUs, however, even if you only do light gaming, is that while a good processor will last you at least 5 years, the iGPU won't.
I mean, sure, there's no harm at all in making them more powerful, but they will keep getting outdated at such a pace that if you want to game at all you'll need to buy a graphics card or a brand-new processor every 2 years.
>he is actually comparing the PS4 to a computer
shake my head to be honest familia

>Intel is essentially licensing IP that says : A GPU does X,Y, and Z.
for 1.5b? woah jews gotta jew

PS4 Pro APU: 4.2 TFLOPs
RX 480: 5.8 TFLOPs
RX 470: 4.9 TFLOPs
RX 460: 2.2 TFLOPs

True, not quite a 470, but definitely much closer to it than to a 460.
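All of those TFLOP numbers fall out of the same formula: shaders × clock × 2 ops (one FMA) per cycle. A quick sanity check, using commonly reported boost clocks (treat the exact clocks as assumptions on my part):

```python
def gcn_peak_tflops(compute_units, clock_mhz, alus_per_cu=64):
    """Peak FP32 throughput: ALUs * clock * 2 ops (one fused multiply-add) per cycle."""
    return compute_units * alus_per_cu * clock_mhz * 1e6 * 2 / 1e12

for name, cus, mhz in [("PS4 Pro", 36, 911), ("RX 480", 36, 1266),
                       ("RX 470", 32, 1206), ("RX 460", 14, 1200)]:
    print(f"{name}: {gcn_peak_tflops(cus, mhz):.1f} TFLOPs")
# -> PS4 Pro: 4.2, RX 480: 5.8, RX 470: 4.9, RX 460: 2.2
```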

The PS4 is a computer; it's just not IBM PC compatible.

Stupid question: if AMD opens up their drivers for Linux, would it be possible to emulate the PS4 with a modified version on AMD cards?
GCN = GCN?

NVidiots on suicide watch!

that is a stupid question

Lately they've been getting good enough to give many fucks about. Nvidia replaced their 750 with a new card, but Intel recently replaced the 750 with a CPU.
AMD APUs can play new games at decent settings and get 30-60 fps; try that on any normal Intel iGPU.
If this can replace a $100-200 GPU for basically free, it's worth a little hype.

But PS4 games run on OpenGL and the GCN architecture with an x86 processor.
The only difference is software.

Intel's Iris Pro is already beating low-to-mid tier GPUs.
I hope a little AMD APU tech in Intel can bring it up even more.
The PS4 is nothing close to a 480, but don't forget the 480 is AMD's $250 flagship GPU.

I don't think PS4 games use GCN bytecode directly much (there can be some intrinsics, like gpuopen.com/gcn-shader-extensions-for-direct3d-and-vulkan/ ), but they don't use OpenGL; the PS4 has low-level APIs like GNM and GNMX and its own shading language, PSSL.

Freesync and OpenCL already have Intel's backing.

I'm not sure where Qualcomm and ARM stand on Freesync though.

DX11

On release (July 2016), the GTX 1060 was around 12% and 8% better than the RX 480 at 1080p and 1440p. Now (Dec 2016), the GTX 1060 is 2% and 0% better than the RX 480 at 1080p and 1440p.

DX12

On release (July 2016), the GTX 1060 was around 3% and 4% worse than the RX 480 at 1080p and 1440p. Now (Dec 2016), the GTX 1060 is 6% and 6% worse than the RX 480 at 1080p and 1440p.

hardwarecanucks.com/forum/hardware-canucks-reviews/73945-gtx-1060-vs-rx-480-updated-review-23.html

Nvidia BTFO

Raven Ridge is 12 CU; it'd have to be clocked pretty high to come anywhere near the RX 460 in practical performance.

768 ALUs x 1400 MHz x 2 ops per clock would give it 2.15 TFLOPs.
I wouldn't count on Raven Ridge having the same number of ROPs and TMUs either, so real-world performance is going to be lower unless the arch is radically different.

AMD isn't trying to pursue serious performance with their consumer APUs. They're primarily aiming for a design that is economical to produce, and can scale down to mobile TDP ranges. If they really wanted to create a 95w desktop chip with considerable IGP performance they could feature 20+ CU and keep the clocks low.

The PS4 has its own high-level and low-level graphics APIs. It's not OpenGL.
The Xbone and PS4 both use ARM TrustZone processors as their DRM manager, and no one has come close to breaking them. It will be a very long time before you see anything close to emulation for these systems.

He said PS4 Pro, which is in fact 36 CU. It's just older-gen graphics IP, and not clocked as high.

The PS4 uses a proprietary API called GNM.

It's roughly similar to Vulkan, DX12, and Metal, I think.

AMD cards come with free performance upgrades :DD

>OpenCL co-processor

I shall refer Nvidiots to this when the 1080 Ti inevitably comes out and beats Vega.

How does anything in my post relate to that?
The 1080ti probably will maintain the performance edge though.

Probably because "the best GPU" for nearly $1000 from Nvidia will be argued against "the best-value GPU" from AMD, and people will simultaneously claim that nobody wants a richfag card and that nobody wants a poorfag card.

>I wouldn't count on Raven Ridge haven't the same number of ROPs and TMUs either, so real world performance is going to be lower unless the arch is radically different.
Vega is meant to be a pretty big step away from GCN afaik.

Neither Vega nor Polaris is the name of an architecture generation. They're marketing names for groups of dies.
Polaris and Vega are both 4th generation GCN.
Any arch updates would still be in the 4th gen spec. The biggest change we could see in either Vega part would be a new ROP design.

>The biggest change we could see in either Vega part would be a new ROP design.

Isn't 2x rate fp16 already essentially confirmed for Vega?
Also, what's even a shortcoming with the existing ROP design (apart from the fact that AMD has been skimping on them somewhat as of late)?

This isn't real... This can't be real. FUCK YOU AMD, FUCKING SHIT CANCER

Polaris is in the Graphics IP v8.0 family, like Fiji; Vega is IP v9.0.

That explains the similar perf/watt of Polaris and Fiji

The way AMD's arch connects the ROPs together with a crossbar is complex, and complexity tends to be a huge limiting factor. If the number of ROPs is going to be limited, then you need to make sure they can push enough pixels per clock to not bottleneck the whole GPU. Unfortunately, this isn't what AMD did. We saw Fiji be incredibly limited compared to Hawaii because they both had the same 64 ROPs. In some metrics, in fact, the Fury X was only barely ahead of the R9 390X.

If the big Vega card had 64 ROPs without pushing more pixels per clock it would be another blunder. Nvidia will be safely topping benches.
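To put rough numbers on that Fiji/Hawaii comparison (the ~1050 MHz figures are the stock boost clocks, so treat them as approximate): peak pixel fillrate is just ROPs × clock, which is why Fiji's much larger shader array barely moved ROP-bound results.

```python
def peak_fillrate_gpix(rops, clock_mhz):
    """Peak pixel fillrate: one pixel per ROP per clock."""
    return rops * clock_mhz * 1e6 / 1e9

# Both chips have 64 ROPs at roughly the same ~1050 MHz boost clock.
print(f"Fury X (Fiji):    {peak_fillrate_gpix(64, 1050):.1f} Gpixel/s")
print(f"R9 390X (Hawaii): {peak_fillrate_gpix(64, 1050):.1f} Gpixel/s")
# -> identical ~67.2 Gpixel/s, despite Fiji having ~45% more shaders
```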

Fiji had several bottlenecks, including, surprisingly enough, realizable external bandwidth.

Is it feasible in contemporary architectures to have ROP counts not cleanly divide memory interface widths, given the tight bindings between L2 slices and the ROPs?

It's hard to see the 2048b Vega going with either 128 or 96 ROPs, just like we didn't see 64/48 on Polaris.

That touches on a lot of absurdly complex BEOL circuitry that I know nothing about. I'm not sure how that would be routed.

Come on dude, I might own an nvidia graphics card but I damn well don't own nvidia stock.

Was Fiji's problem really something like ROP/crossbar routing or just that the unit allocations weren't evenly cut when AMD had to back off from 20nm?

Its problem was that it was made by AMD, and AMD only makes garbage.

You can probably collect more shekels posting elsewhere, Abraham Shillingstein.
We're already at the saturation point for Nvidiots here on Sup Forums.

Pixel fillrate is entirely an ROP limitation.
Tessellation performance comes from the front-end command/geometry logic. Though AMD has never been strong here, it's really only Nvidia pushing tessellation.
The reason why Fiji has been demonstrated to not have the bandwidth that the memory should be providing escapes me. The 4 packages of HBM clocked at 500 MHz (1,000 MHz effective) should be providing 512 GB/s. In a bunch of synthetic benches it's showing much lower figures, pretty much 25% lower.
There is likely some nuance of the memory controller at fault.

GDDR5-based GPUs can usually get about 75% of theoretical bandwidth.
The problem with Fiji was that it was only getting ~65% of its theoretical limit, for who knows what reason.
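The 512 GB/s theoretical figure and the realized numbers being thrown around here are easy to reconstruct (the 75%/65% efficiency figures are the ones quoted in this thread, not my own measurements):

```python
# Fiji's HBM1: 4 stacks, each with a 1024-bit interface at 500 MHz DDR = 1 Gbps per pin.
stacks, bits_per_stack, gbps_per_pin = 4, 1024, 1.0
theoretical = stacks * bits_per_stack * gbps_per_pin / 8   # bits -> bytes
print(f"Theoretical: {theoretical:.0f} GB/s")               # 512 GB/s

for label, eff in [("at a GDDR5-typical ~75% efficiency", 0.75),
                   ("at the ~65% Fiji shows in synthetic benches", 0.65)]:
    print(f"{label}: ~{theoretical * eff:.0f} GB/s realized")
# -> ~384 GB/s vs ~333 GB/s, the latter matching the AnandTech comment below
```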

Memory bandwidth utilization is also an ROP limitation.

There is this comment from the AnandTech forums:
>The Memory Controller: While Hawaii gained a 512-bit bus the memory controller shrunk in size compared to the one in Tahiti. This limited the memory speeds it could handle relative to Tahiti. Fiji pretty much retains these components from Hawaii. This is why Fiji only really taps into 333-387GB/s of its 512GB/s (32-54% loss) and Hawaii only 263GB/s of its 320GB/s (22% loss). So if we take 384GB/s for Grenada and subtract 22% we end up with 300GB/s or an 11-29% advantage for Fiji over Grenada in terms of actual memory bandwidth.

Makes sense? No idea.

Maybe OpenBSD will make AMD cards work finally.

That makes no fucking sense.
Tahiti had a 384b bus over 6 controller segments while Fiji was 4096b over 4.
It wasn't like they could just swap out the physical-layer transceivers for slower, wider sets and leave everything else somehow the same despite cutting their controller blocks by a third.

Intel isn't going to increase CPU performance on the von Neumann architecture without involving the GPU or using exotic materials; they know this and everyone knows this.

This is probably to make Intel (AMD?) GPUs NUMA-friendly so the GPU can be put to good use alongside the CPU. Intel has no decent framework for this, but AMD has one, though it lacks widespread support.

Might be good.

That's retarded logic.

Intel are using AMD's IP to improve their own CPUs, right?

... Which AMD also produces and competes with Intel on ...

So if Intel manages to sway over more sales from AMD to themselves than they're paying in IP costs, then obviously AMD are going to 'lose out' as you put it.

(That said I doubt this is real)

The PowerVR route? They're rich, so it's worth a try.

no, it's purely legal CYA in a war where stupidly broad things can be granted multi-decade monopolies.

> I hereby propose the incredibly novel idea of having a processor with external memory do matrix transforms on data to make pretty pictures, with these entirely non-obvious optimizations...

>Is blatantly retarded
>Calls others retarded
Sup Forums never changes.

>Intel are using AMD's IP to improve their own CPUs, right?
Wrong.
iGPUs are the focus here.

In the deal Intel had with nVidia, nVidia got access to Intel's patents (excluding x86 processors, certain chipsets, and flash), and Intel got access to nVidia's GPU patents. And Intel also paid nVidia $1.5b.

I believe Nvidia actually got raped by Samsung after Samsung countersued them over Graphics RAM patents.

They already do, faggot. For more than a year now.