Raja Koduri leaves AMD for Intel

amd shills BTFO

>Shares of AMD fell 5 percent following the report. Nvidia and Intel stock fell 1.8 percent and 0.9 percent, respectively.

turns out kyle bennett was right after all, fucking shills

cnbc.com/2017/11/09/amd-nvidia-shares-drop-after-intel-hires-amds-ex-graphics-head.html

This is good for Ayymd.
Maybe now they will start hiring white people or at worst chinks.

Let jewtel burn with their poo in loos

When is the hiring pajeets meme going to end?

>spend years talking about how much this street shitter sucks at his job
>now have to pretend you're excited about his appointment

did somebody call?

hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility

> Let’s start with the tension. Koduri was able to wrestle control of the graphics division away during AMD’s last leadership transition after threatening to leave the ship and take a role at Intel, something he's not shy about telling his AMD colleagues. Lisa Su caved and Koduri got the job.

> Where the plot thickens is when you look at Koduri’s unwavering ambition. Koduri’s ultimate goal is to separate the Radeon Technologies Group from its corporate parent at all costs with the delusion that RTG will be more competitive with NVIDIA and become a possible acquisition target for Koduri and his band of mutineers to cash in when it's sold. While Koduri is known to have a strong desire to do this by forging a new relationship with Apple on custom parts (no surprise there) for Macbooks, the real focus is on trying to become the GPU technology supplier of choice to none other than Intel. While this was speculated some time ago I can tell you with certainty that a deal is in the works with Intel, and Koduri and his team of marauders are working overtime to get the deal pulled into port ASAP. The Polaris 10/11 launch, and all of its problems, are set to become a future problem of Intel’s in what RTG believes will be a lucrative agreement that will allow Koduri and his men to slash the lines from Lisa Su and the rest of AMD.

Kyle was right all along, but AYYMDPOORFAGS can't accept the bitter truth

But he failed. He lost the power struggle with Mommy Su and had to leave. Not sure what your angle is here. Why would AMD fans be mad about losing the guy in charge of their graphics division who's done nothing but pump out flop products for years?

All you wrote there was that the poo tried to push AMD deeper into the loo, but jensen in drag didn't let him, hence saving AMD from total doom as apple's mobile chip division.

Literally where's the problem in this?

>a poo in loo leaving AMD for intel is actually le BTFO for AMD
shilltel shills will delid this.

Intel will be using raja poo as its thermal paste now

Kyle, are you still trying to make your shithole relevant?

>jensen in drag
kek

Good riddance. Under his watch RTG pooed their competitive edge down the loo, releasing a series of increasingly uncompetitive products. Who cares if he designed the RV770? It may have had amazing perf/w, but it was still slower than the abomination that was the GT200, and AMD still lost more market share. Raja was chief engineer up until 2009 and returned in late 2013. The graph tells the rest of the story.

He was working on a limited budget with AMD

Now he has close to infinite shekels with Intel

I don't think he'll upstage le jacket man though

>it may have had amazing perf/w
Isn't that the most important thing™?
>but it still slower than the abomination that was the GT200
Small die.

>I don't think he'll upstage le jacket man though

I don't think you understand the sort of money Intel is willing to throw at him to make a working gpu, given what Nvidia has been doing to Intel in the datacentre for the last few years. Given he is unlikely to be designing gaymen focused chips, one of the biggest hurdles of consumer gpus - the driver stack - won't be such an issue for Intel.

Is it even legal just jumping straight over to a direct competitor?

>Nvidia has been doing to Intel in the datacentre for the last few years
Nothing?
They don't make CPUs worth mentioning.

It's called "poaching" and yes, it is legal.

>Isn't that the most important thing™?

Only when Nvidia says so. See: it didn't matter when the 290x murdered the titan with very similar power draw but suddenly mattered when the 390x went toe-to-toe with the 980.

Why wouldn't it be legal if there is no contract saying that?

>They don't make CPUs worth mentioning.

GPUs have been carving up Intel's monopoly for a while now - why do you think Intel keeps pushing AVX2 (and tries to push AVX512)? It is to slow the encroachment of gpus, which (for the most part) are flat-out better than cpus when it's ALL VECTORS ALL THE TIME.
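
To put the ALL VECTORS claim in code, here's a minimal SAXPY sketch (hypothetical names, assumes n is a multiple of 8 and an FMA-capable CPU) - AVX2 buys you 8 float lanes per core, while the CUDA kernel runs the same one-liner across thousands of threads:

#include <immintrin.h>

// CPU: AVX2 + FMA chews through 8 floats per instruction
void saxpy_avx2(int n, float a, const float* x, float* y) {
    __m256 va = _mm256_set1_ps(a);
    for (int i = 0; i < n; i += 8) { // n assumed to be a multiple of 8
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        _mm256_storeu_ps(y + i, _mm256_fmadd_ps(va, vx, vy)); // y = a*x + y
    }
}

// GPU: same math, one element per thread, tens of thousands in flight
__global__ void saxpy_cuda(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}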

>GPUs have been carving up Intel's monopoly for a while now
They did fuckall. It's a fucking coprocessor ffs.
>why do you think Intel keeps pushing AVX2 (and tries to push AVX512)
The same as TSX: to get into some lucrative niches claimed by RISCs.

I don't think you have any idea of how far ahead nvidia is compared to both AMD and Intel in terms of GPU computing either, anon.

Just throwing money at it won't instantly make up for years of research, just like throwing moar cores into something doesn't automatically make it faster, aka Amdahl's Law.

If Intel manages to get close to current AMD performance that will already be a huge win, and it will take at least half a decade, because whatever tech is on the market today was being developed in the labs for the past 10 years.
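
For reference, Amdahl's Law: with parallel fraction p and n cores, speedup = 1 / ((1 - p) + p / n). A throwaway sketch (names made up) showing why moar cores alone don't save you:

#include <cstdio>

// the serial fraction (1 - p) caps the speedup no matter how many cores you add
double amdahl(double p, double n) { // p = parallel fraction, n = cores
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // 90% parallel code on 1024 cores: ~9.9x, nowhere near 1024x
    printf("%.1fx\n", amdahl(0.9, 1024));
    return 0;
}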

>I don't think you have any idea of how far ahead nvidia is compared to both AMD and Intel in terms of GPU computing either, anon.
GPU computing is fucking niche.
Everything with an even somewhat large dataset (i.e. everything but meme learning) gives zero shits about GPUs.

x86 is an increasingly irrelevant market and Intel is flailing desperately since anything they do outside of x86 chips fails miserably.

Does this mean that we will have epic perf/$$$ GPUs like the epic 7xxx series back?

>x86 is an increasingly irrelevant market
Right after Intel posts record datacenter revenues and half the market is jizzing at EBYN and trying to lick Lisa's feet.
>Intel is flailing desperately since anything they do outside of x86 chips fails miserably.
They are trying to diversify their portfolio because their main rival is back.

>GPU computing is fucking niche.
That is only because GPUs are hanging off a slow-ass bus and have very little memory. It takes like 1.5 seconds just to fill the buffer of a Tesla with 24GB of RAM. That is essentially lost time.

However - if a GPU could achieve full data locality (think Radeon SSG), or a GPU arithmetic block were included in the CPU and connected to system RAM, then GPU computing would become a much more interesting option.

I think that both AMD and Intel are hinting at making GPU assembly part of the CPU's native instruction set.

Along with native support for vector types and the ability to choose the context in which a function runs, things might get interesting.

Imagine:

void foo() // defaults to CPU
{
}

gpu void bar() // runs on the GPU
{
    foo(); // but switches to the CPU to run this function
    // shitload of arithmetic
}


No copying, no buffers, uniform memory architecture. CPU + GPU on a single die with a daisy chain of caches: on-die -> HBM -> system.
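
The closest thing you can actually run today is CUDA managed memory, which fakes that single address space over PCIe (a rough sketch, not the single-die setup above - the driver still migrates pages behind your back):

#include <cstdio>

// one pointer visible to both CPU and GPU, no explicit copies
__global__ void scale(float* data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k; // the "gpu" half of the work
}

int main() {
    const int n = 1 << 20;
    float* data;
    cudaMallocManaged(&data, n * sizeof(float)); // single shared allocation

    for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes it...
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // ...GPU crunches it
    cudaDeviceSynchronize();                        // wait before the CPU reads

    printf("%f\n", data[0]); // 2.000000, and not a memcpy in sight
    cudaFree(data);
    return 0;
}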

You've just described HSA.
That's a new programming model.
People hate new programming models.
Besides, system memory is magnificent levels of slow for GPUs.

>Besides, system memory is magnificent levels of slow for GPUs.

That is why you'd have a block of HBM for the CPU/GPU as a cache level, making it essentially an L3 cache. 4-8 gigs of it would even let you build systems with no system memory at all, essentially turning the CPU/GPU/HBM unit into a SoC.

So you want a big APU with very limited memory capacity?
Besides, HBM can't work as L3 because of DRAM-level latencies.
GPUs will never be useful for anything but simplistic number crunching with very limited datasets.
Meme learning and some HPC niches, that's about it.

So basically the ENTIRE POINT of HBCC on Vega? That, when bolted to a Radeon SSG chip, would be an absolute monster, but as a previous anon said, HSA is currently dead because shit be hard yo.

SSG is strictly about latency, not bandwidth, useful for video editing only.
Most code running on GPUs needs bandwidth, a lot of it.