>>61525126

Brian JUSTnich is a terrible CEO. He and the beancounters running the show are slowly driving the company into the ground.

>AMDtards shilling for rypoo again

They're fighting Apple, AMD, Qualcomm, Samsung, Global Foundries, and IBM, and they've enjoyed virtually no competition for the last decade so they haven't put in the R&D to stay ahead.

This is what retarded fanboy shitposters don't fucking get. It's not even about AMD anymore. Intel is about to get fucked from all sides for their complacency and over-reliance on their fabs.

This.
Does anyone have a translation of that CPC Hardware article?

>They have internal sources at Intel and the climate there is apparently awful right now; employees are very discouraged and some feel like "they have nothing left to lose". They are making big profits now by totally screwing up the future. According to the magazine, Intel is "probably in the most delicate situation it has had to face to this day". The CEO is cutting costs so much that he's managing engineers like "supermarket cashiers"; he doesn't care about taking the time to train them.

>Krzanich is very impatient and eager and keeps changing his mind about projects. If a new architecture isn't created in like a couple weeks, he gives up and cancels the project... he keeps sending contradictory instructions to the teams.

>"Fab Hell": Intel is likely going to have a 6 month delay on 10nm. Worse, even Cannon Lake is not expected to feature any significant architectural improvement. Basically Intel was just hoping AMD would keep not competing with them.

>In the picture before the last one I posted, it says that Kaby Lake has exactly the same IPC as Skylake. No improvement in perf/watt, but there seems to be more room for overclocking.

imgur.com/a/qo9pH

Only if you believe the marketing slides that say it's "7nm". Besides, after FinFET, measuring things like gate length is pointless. The nanometer moniker for feature sizes has been irrelevant for some time and needs to be replaced, but the marketing has been so effective I doubt it ever will be. These foundries will just keep making up numbers, like GloFo's infamous 20nm rebranded to 16/14nm.

In reality GF's 7nm should match intel's 10nm density.

120% spot on.

BEOL is one thing. But FEOL is also important, and Intel's 10nm is far behind FEOL-wise.
At least Sunnyvale is not that far off from Santa Clara.

Intel is hopefully finished and bankrupt

>At least Sunnyvale is not that far off from Santa Clara.
Sick burn

The hits just keep coming. A departure of this scale right before they develop a new architecture tells me there's something seriously screwy going on behind the scenes.

techpowerup.com/235385/francois-piednoel-quits-intel

Fucking wizards predicting what would happen 8 months later

>the dude that participated in the R&D cycle of every non-BIBELINES Intel uarch since Katmai
Boy.
BIBELINES INCOMING.

Step 1. Remove "unnecessary" parts of x86
Step 2. Replace with BIPELINES
Step 3. ???
Step 4. Housefires! And profit, hopefully.

These wizards were the first to get BOTH K8 and Zen ES.

Karma Exists and you get what you deserve eventually, that's all that's happening here.

Must have some nice connections in AMD.

And this is the first time I've seen such a detailed Kaby Lake frontend; these guys actually give a shit about technology.

>512KB L2
>victim L3
>Kaby Lake
?

I'm using an i5 3570K

That's Zen, look at the 2 AGUs and the 512KB L2

helps when you piss off ibm and they decide to push their money into new nodes...

By the way, looks like IBM is going to adopt HBM too.
>Basically Intel was just hoping AMD would keep not competing with them.
Then there's no hope left.

5 decode/cycle
1 more AGU
another FPU port
Increase FP register sizes to Skylake levels (184 IIRC)
Increase cachelines from 32bit to 64bit
Lower L2/L3 cycles by 5-10%

Here I see a rather simple way to get another 10-15% IPC without even touching the uOP cache
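Back-of-the-envelope only: the per-change numbers below are completely made up (real gains depend entirely on the workload), but they show how a handful of small tweaks like those could compound into the 10-15% range.

# Made-up per-change IPC contributions, purely to show how small gains compound.
changes = {
    "5-wide decode":     0.03,
    "3rd AGU":           0.02,
    "extra FPU port":    0.02,
    "bigger FP PRF":     0.015,
    "wider cache paths": 0.02,
    "faster L2/L3":      0.025,
}
total = 1.0
for change, gain in changes.items():
    total *= 1.0 + gain
print(f"compound IPC gain: ~{(total - 1.0) * 100:.1f}%")   # ~13.7%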

It's really amazing how AMD managed to do so much with so little

This, and let's not forget the insane efficiency of Zen.
I wonder if they can significantly lower the uncore power draw.

>bit
byte*

I hope AMD goes nuclear with Zen 2; technically they could build the Zeppelin die at somewhere around 115-125mm^2 on 7nm.
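Napkin math behind that number (assumptions: Zeppelin is the commonly cited ~213mm^2 on 14nm, and the 7nm density factor is a guess, since logic, SRAM and I/O all shrink differently):

# Rough die-size estimate for a straight Zeppelin shrink; scaling factors are guesses.
zeppelin_14nm = 213.0                      # mm^2, commonly cited Zeppelin die size
for scaling in (1.7, 1.85, 2.0):           # assumed overall density improvement
    print(f"{scaling}x density -> ~{zeppelin_14nm / scaling:.0f} mm^2")
# prints ~125, ~115 and ~107 mm^2 -- right around the 115-125mm^2 guess above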

But what if Zen 2 is the same size as Zeppelin?
What if it's actually bigger?
If it's bigger, I think Intel will start considering something more drastic than suicide

>considering something more drastic than suicide
Suicide with bibelines?

Dark alley Skylake-X meetups where they call upon Satan to liberate their souls from this suffering

Featuring the new and improved Skylake-Xv2 22 core

The L2 was probably a desperate measure to improve performance.
In theory doubling the L2 could improve performance, but as a trade-off they killed the inclusive L3, and the mesh... well, in theory it could make things better at big core counts, but the results aren't exactly better.
Perf improvement over Broadwell is almost zero, and cutting prices will only accelerate the downfall.
Unfortunately Intel isn't bringing anything new, and everything new is a flop: Optane, Knights Landing, and so on.

The worst part is that the much-fabled 10nm probably won't improve performance either.

Are laptops going to have Snapdragons soon as their processors?

hopefully.

tired of intel's jewry.

They're gonna hold the retailers hostage, threatening to fire up a 7900X doing automatic OC with a passive cooling solution

No, as long as Zen exists.
Unless you're talking Chromebooks n shiet.

That's already happening. pcworld.com/article/3191401/computers/qualcomm-first-windows-10-arm-pc-coming-in-the-fourth-quarter.html

Zens coming to laptops? Intel has been the only option for laptops for far too long.

>Atom-tier performance
But why.
Even 2/4 Zens would be better.

>Zens coming to laptops?
Yes.
That's not surprising considering how good Zen is at lower clocks/lower voltages.
And let's not forget the iGPU that actually has working 3D drivers and is better all-around.

Energy efficiency I think.

I wonder if AMD will ever release pic related on the market or is just making these for CERN [citation needed]

It's their super speshul sikrit sauce to bully the jew outta HPC market.
And it's not coming any time soon.
Then why wouldn't you use fucking Atoms?

>”Please Intel, drive AMD into the ground so you can fuck me in the ass even harder”

I don't know, ARM doing some things better?

Being a botnet?

They've done more upgrades to Bulldozer than that over the years with barely any R&D spent on the construction cores; I think they can do even better with Zen

No idea

Any news on why he left?

>32 core Zen based APU
Please stop, the future is coming too quickly. It's getting hard to handle at this point.

No, only speculation.

I'm not sure they would ever sell that to consumers.

>The L2 was probably a desperate measure to improve performance.
>In theory doubling the L2 could improve performance, but as a trade-off they killed the inclusive L3, and the mesh... well, in theory it could make things better at big core counts, but the results aren't exactly better.

The basic chain of logic with the former and current designs is:

> crossbars don't scale to large core counts -> use ring buses
> ring buses have longer latencies -> use inclusive L3 to keep ring traversals lower
> shared inclusive L3 causes L1/L2 evictions from foreign cores' L3 activities -> add a bunch of Cache Allocation Technology etc. bandaids
> even bidirectional single rings don't scale well beyond ~10 cores -> add overlapping rings
> overlapping rings use too much die space -> keep rings separate and use a buffered bridge between them
> inter-ring buffers bottleneck everything, and have too much latency between some cores -> use meshes! lower latencies = non-inclusive L3 possible = L2 can now be made bigger = SMT can suck a little less and AVX might do a little better on slightly larger working sets
> (you are now here)

The big ass mesh of distributed L3 is Intel's attempt to keep Xeon as the Jack of All Trades, since some things actually do benefit from having ~10-30 MB working set sizes that can stay resident in cache. However, chip-wide shared L3 just doesn't scale anymore, and Intel will be pressured to go with a clustered L3 design similar to Zen.
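To make the "inclusive L3 evicts other cores' stuff" step concrete, here's a toy model (Python, made-up sizes, FIFO eviction; real caches are set-associative with proper replacement policies and coherence states): with an inclusive L3, kicking a line out of L3 forces a back-invalidation in whichever core's private L1/L2 still holds it, so one core streaming through memory can silently blow away another core's hot data. A non-inclusive/victim L3 doesn't have that constraint, which is part of why the mesh chips can afford the bigger private L2.

# Toy model of why a shared *inclusive* L3 hurts other cores' private caches.
class Core:
    def __init__(self, name):
        self.name = name
        self.private = set()   # stands in for this core's L1+L2 lines

class InclusiveL3:
    """Every line in any core's private cache must also sit in L3, so evicting
    a line from L3 must back-invalidate it from the owning core's L1/L2."""
    def __init__(self, capacity, cores):
        self.capacity = capacity
        self.lines = []        # FIFO eviction, purely for illustration
        self.cores = cores

    def access(self, core, line):
        core.private.add(line)
        if line in self.lines:
            return
        if len(self.lines) >= self.capacity:
            victim = self.lines.pop(0)
            for c in self.cores:               # back-invalidation
                if victim in c.private:
                    c.private.discard(victim)
                    print(f"{c.name} lost hot line {victim} because of L3 pressure")
        self.lines.append(line)

a, b = Core("core0"), Core("core1")
l3 = InclusiveL3(capacity=4, cores=[a, b])
for x in ("a0", "a1"):
    l3.access(a, x)            # core0 warms up its working set
for x in ("b0", "b1", "b2", "b3", "b4"):
    l3.access(b, x)            # core1 streams through data and evicts core0's lines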

Do we have word on what AMD is using as GPUs for laptop Zen yet?

Shalom

It's an 11 CU Vega iGPU.

RIP shittel "HD" graphics

No shit.
It's 768 Vega shaders, which is fucking FORMIDABLE because Vega introduced neato things like TBR.

Just a general question, but couldn't Google's DeepMind AI create a much better CPU if we pumped all our current knowledge of CPUs and their functions into it and let it run endless trial-and-error designs?

No.

>implying that Meme Learning can beat Jim "Contract Killer" Keller in CPU design

Why not?

Zen's improved branch prediction can learn and improve performance over time.

That is just marketing bullshit.
Branch prediction is just that, branch prediction; it isn't like there's machine learning doing AI and keeping up tables for loads.

They used ML to train the branch prediction before they put it into the CPUs; it isn't learning while you're using it. Just the learning by itself means that if they re-release the same chips they can get a performance improvement.
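For what it's worth, the "neural net" predictor AMD advertises is generally understood to be a perceptron-style predictor (the Jiménez & Lin scheme), and those do adjust their weights at runtime, per branch; it's just table updates in hardware, not DeepMind-style AI. Nothing below is AMD's actual implementation, just a minimal sketch of the textbook version with made-up table sizes:

# Minimal perceptron branch predictor sketch (Jimenez & Lin style). Illustrative only.
HISTORY_LEN = 16          # bits of global branch history
TABLE_SIZE  = 1024        # number of perceptrons
THRESHOLD   = int(1.93 * HISTORY_LEN + 14)   # training threshold from the paper

weights = [[0] * (HISTORY_LEN + 1) for _ in range(TABLE_SIZE)]  # bias + one weight per history bit
history = [1] * HISTORY_LEN   # +1 = taken, -1 = not taken

def predict(pc):
    w = weights[pc % TABLE_SIZE]
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y, y >= 0          # predict taken if the dot product is non-negative

def train(pc, y, taken):
    w = weights[pc % TABLE_SIZE]
    t = 1 if taken else -1
    # update only on a misprediction or a low-confidence (small |y|) prediction
    if (y >= 0) != taken or abs(y) <= THRESHOLD:
        w[0] += t
        for i, hi in enumerate(history):
            w[i + 1] += t * hi
    history.pop(0)
    history.append(t)

# usage per branch: y, guess = predict(pc); resolve the branch; train(pc, y, taken)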

I started hating Intel when they cut their QA budget and this happened:

> In August 2014, Intel announced a bug in the TSX implementation on current steppings of Haswell, Haswell-E, Haswell-EP and early Broadwell CPUs, which resulted in disabling the TSX feature on affected CPUs via a microcode update.

At least with AMD all CPUs support all instruction set extensions; with Intel you have to pay a premium for them, and if they're bugged they just get disabled, no money back.
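If you want to check whether your chip still exposes TSX after one of those microcode updates, on Linux the rtm/hle flags simply disappear from /proc/cpuinfo once it's fused off. Rough sketch (flag names are the standard kernel ones):

# Check whether the running CPU still advertises TSX (Linux only).
# 'rtm' and 'hle' vanish from /proc/cpuinfo once the errata microcode disables TSX.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break
print("TSX (RTM):", "rtm" in flags)
print("TSX (HLE):", "hle" in flags)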

Same with the recent Kaby Lake and Skylake hyperthreading bug: you get a microcode update that lowers performance just because the kikes had to save money and didn't test it properly.

Ryzen has an L1 bug that causes segfaults.

Source on the lowered performance?

>so they haven't put in the R&D to stay ahead.
I'm pretty sure Intel is still the company that spends the most money on R&D. Some years ago that was the case at least; it was first in terms of R&D budget, and first by far.

...

this

So absolutely nothing, then?

>give a shit
Swearing detracts from your commentary.