Is Broadwell the best Intel architecture?

Why did Intel discontinue the L4 cache?

>Why did Intel discontinue the L4 cache?
Don't ask questions, just consume product.

Doesn't matter, Zen seems to win.

What was the deal with Broadwell? Was it just for laptops?

There were only two desktop Broadwell CPUs (the i5-5675C and i7-5775C), and they had 128 MB of L4 cache on a separate die.

Wasn't that for the IGP? Are they actually faster than Skylake?

That little extra 128 MB of cache makes a world of difference; a highly clocked 7700K can't even compete with a stock Broadwell-C.
If only Intel weren't so greedy, every CPU would have extra cache.

Shit's extremely expensive to make.

How come GPUs have so many cores, but CPUs don't?

It's extraordinarily easy to parallelize drawing triangles.

Because the cost per mm² isn't feasible; the L4 is practically an extra 150 mm² for 10% more performance.

That's unacceptable, even if the L4 isn't pure logic.

Doubt it makes much difference given the margins. I can absolutely see Intel bringing L4 back for Coffee Lake/Cannon Lake after they get BTFO by Ryzen.

The large 128 MB eDRAM dies are expensive and only improve performance in memory-limited applications. There are some Skylake SKUs with L4 cache, but they're BGA-only at the moment.

See: ark.intel.com/products/93339

I'm glad I bought the 4790K as soon as it came out. Still going strong after all this time.

Maybe it has something to do with cost? You know, like every other consideration involved in making highly complex ICs?

>Is Broadwell the best Intel architecture?
Haswell, Skylake, and Kaby Lake are technically better, but arguably not by enough to really matter.

>Why did Intel discontinue the L4 cache?
Because people willing to spend more money on graphics will just get a dGPU; even an entry-level one performs much better.
eDRAM L4 costs too much and doesn't benefit enough applications.

HBM2 in APUs like Raven Ridge probably has a better chance of happening, though honestly even those might end up sticking with GDDR5 or DDR4.

Ironic shitposting is still shitposting

>7870 levels of performance on an APU
Isn't it against AMD's best interests to make their own GPUs redundant?

I'm glad I bought the 2500K as soon as it came out. Still going strong after all this time.

Presumably a ~12.5 TFLOPS Vega 10 and a 6-7 TFLOPS Vega 11 will be out before anything even comes close to impinging on Polaris 11/RX 460 levels (~2 TFLOPS) in an APU.

The PS4 Pro already has a ~4 TFLOPS APU, but with shitty CPU cores.

That's downclocked poolaris.

Okay. It shows that they could make a much more powerful APU at a decently cheap price point, though.

The PS4 APU is not actually an APU; it has the GPU on a separate die.

Show me a delidded PS4 APU to prove it.

I don't think anyone has any idea whether Vega 11 will be half of a Vega 10 (1x HBM2 / 256-bit GDDR5 with ~6 TFLOPS) or three quarters of one (384-bit GDDR5 with ~8 TFLOPS).
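For what it's worth, those TFLOPS figures just come from shaders x 2 FLOP/clock (FMA) x clock. A minimal C sketch, assuming 4096 SPs for Vega 10 and a ~1.5 GHz clock (neither number is from this thread), with the half and three-quarter cut-downs at the same clock; the ~8 TFLOPS guess above would imply a somewhat lower clock for the 3/4 part:

/* Peak FP32 TFLOPS = shaders x 2 (FMA = mul+add per clock) x clock(GHz) / 1000.
 * Shader counts and the 1.5 GHz clock are assumptions, not announced specs. */
#include <stdio.h>

static double tflops(int shaders, double ghz) {
    return shaders * 2.0 * ghz / 1000.0;
}

int main(void) {
    printf("Vega 10 (4096 SP @ 1.5 GHz): ~%.1f TFLOPS\n", tflops(4096, 1.5));
    printf("Half    (2048 SP @ 1.5 GHz): ~%.1f TFLOPS\n", tflops(2048, 1.5));
    printf("3/4     (3072 SP @ 1.5 GHz): ~%.1f TFLOPS\n", tflops(3072, 1.5));
    return 0;
}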

It's more about thermal limits than cost. 150+ W APUs wouldn't sell in huge volumes.

You're such a turbo-normie, I'm sure you voted for Hillary twice.

>it's more about thermal limits than cost
Why not just include a good cooler?

It's not a fucking full tower; it's a small box with tiny vertical height.

He means consumer APUs.

They're fundamentally different.

Individual GPU cores are good at solving simple problems with no branching. 3D objects are made up of a ton of simple problems with no branching. The catch is that these simple problems have to be done very quickly or the rendering won't be "real-time". Since the problems are simple and don't depend on one another, it makes sense to use many cores clocked at moderate speeds rather than a few cores clocked at really high speeds. Basically you just throw more hardware at the problem; it's extremely easy to parallelize this.

CPU cores are a lot more complex. The typical application will have branching tasks, and many tasks will have complex inter-dependencies with other tasks that are executing at or around the same time. There's no easy way to just throw more hardware at these problems; they have to be done sequentially but as fast as possible. So the emphasis is on a few cores that are clocked high and have a relatively large amount of cache.
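To make that concrete, here's a toy C sketch (OpenMP assumed purely for the parallel pragma; the resolution and the math are made up): the per-pixel fill has no dependencies between iterations, so you can throw as many cores at it as you have pixels, while the second loop carries a dependency from one iteration to the next and can only go as fast as a single core.

/* Toy contrast: embarrassingly parallel pixel work vs. a loop-carried dependency.
 * Build: gcc -O2 -fopenmp demo.c -o demo (the pragma is ignored without -fopenmp). */
#include <stdio.h>
#include <stdlib.h>

#define W 1920
#define H 1080

int main(void) {
    unsigned char *fb = malloc((size_t)W * H);   /* fake 1920x1080 framebuffer */
    float acc[H];
    if (!fb) return 1;

    /* GPU-style work: every pixel is independent, so any number of "cores"
     * can chew on it at once. */
    #pragma omp parallel for
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            fb[y * W + x] = (unsigned char)((x ^ y) & 0xFF);

    /* CPU-style work: each iteration depends on the previous one, so extra
     * cores don't help; it just has to run fast sequentially. */
    acc[0] = fb[0];
    for (int y = 1; y < H; y++)
        acc[y] = acc[y - 1] * 0.5f + fb[y * W];

    printf("%d %f\n", fb[W * H - 1], acc[H - 1]);
    free(fb);
    return 0;
}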

Because OEMs won't buy 150 W CPUs and put them in normie boxes: they'd require decent motherboards and decent power supplies, which OEMs can't skimp on, and that drives up cost.

150+ W CPU coolers need to be either loud or fucking huge. Since you want to fit into as many cases with varying clearance as possible, it usually ends up being the former.

Baseline ATX dimension guarantees aren't that big IIRC.

>How come GPUs have so many cores, but CPUs don't?

It's mostly because GPU "cores" are marketing lies.

Something like AMD's GCN has 2-4 higher-level Shader Engines, each of which has some fixed-function blocks for geometry setup, pixel output (ROPs), etc., plus a number of Compute Unit blocks. Each CU has four 16-wide SIMD ALUs and a scheduler that juggles threads on them.

So a 2304 "core" RX 480 really has 36 compute blocks that are sort of comparable to CPU cores, but even those are really just more like slaves to the higher-level SEs or the ACEs/GCP.

CPUs don't count the individual SIMD lanes in their SSE/AVX units or the arithmetic execution units in their integer ALUs, and would have core counts an order of magnitude higher if they did.
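To put numbers on it (the GCN figures are from the block layout above; the CPU side assumes a quad-core with two 8-wide FP32 AVX2 FMA pipes per core, which is just an illustrative guess):

/* "Core" counting under the GPU marketing convention vs. what CPUs report.
 * GCN figures per the post; CPU numbers are an assumed example. */
#include <stdio.h>

int main(void) {
    /* RX 480: 36 CUs, each with 4 SIMDs that are 16 lanes wide. */
    int cus = 36, simds_per_cu = 4, lanes_per_simd = 16;
    int gpu_marketing_cores = cus * simds_per_cu * lanes_per_simd;

    /* Hypothetical quad-core AVX2 CPU, counted the same way:
     * 4 cores x 2 FMA pipes x 8 fp32 lanes per 256-bit pipe. */
    int cpu_cores = 4, fma_pipes = 2, fp32_lanes = 8;
    int cpu_marketing_cores = cpu_cores * fma_pipes * fp32_lanes;

    printf("RX 480 'cores': %d (really %d CU blocks)\n", gpu_marketing_cores, cus);
    printf("Quad-core CPU counted the same way: %d (marketed as %d cores)\n",
           cpu_marketing_cores, cpu_cores);
    return 0;
}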

I still remember them as pixel pipelines.

Good post. I don't think the ROP/CU or stream processor counts are a marketing lie though; they usually do say something about a video card (even though a newer generation can have fewer and still outperform the older one).

Wasn't Haswell before Broadwell?

That's why we have Ivy > Haswell > Broadwell > Skylake gains charts.

Yes and no

GPU power has to get pushed into "this shit would barely pass for an APU" territory at some point, and seeing as the cheapest AMD card currently sits at about that same power level, it's really just an add-in card for really old PCs, not what they'd consider gaming quality.

But if they get their tech out there for everyone, their market share improves drastically, making their bigger GPUs prosper more than if there were nothing at all.

L4 is obscenely expensive to manufacture. That little slab probably cost them in the ballpark of $150-200 to make, and at that cost there's absolutely no way to integrate it fully, which is where the real performance gains would be.

So what does the L4 cache do?
Explain it to me like I'm retarded.

It acts as a smaller memory pool between the CPU's L3 cache and main system memory. For the iGPU, it is the "local" framebuffer where it does the bulk of its transactions; despite its smaller size, it has lower latency and higher bandwidth than main memory.
For the CPU, it simply acts as another cache layer between L3 and memory, and for programs that don't fit in L3 but do fit within the 128 MB buffer, it can provide a boost from not having to go all the way out to main memory so often.
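A rough way to see this yourself: a crude pointer-chase like the sketch below (Linux and gcc assumed; nothing rigorous) gets a step slower every time the working set outgrows a cache level. On a 5775C you'd expect an extra plateau between the ~6 MB L3 and the 128 MB eDRAM that other chips don't have.

/* Crude latency probe: random pointer-chase over growing working sets.
 * Each jump in ns/access marks a cache level being outgrown; an eDRAM L4
 * would add an extra plateau between the L3 size and 128 MB.
 * Build (Linux/gcc assumed): gcc -O2 chase.c -o chase */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile size_t sink;   /* keeps the chase loop from being optimized out */

static double ns_per_access(size_t n)      /* n = number of pointer slots */
{
    size_t *next = malloc(n * sizeof *next);
    if (!next) return 0.0;

    /* Build one big random cycle through all slots (Sattolo's algorithm),
     * so every access is a dependent, roughly random load. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    const size_t steps = 10 * 1000 * 1000;
    size_t idx = 0;
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (size_t s = 0; s < steps; s++) idx = next[idx];
    clock_gettime(CLOCK_MONOTONIC, &b);
    sink = idx;

    free(next);
    return ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / steps;
}

int main(void)
{
    for (size_t mb = 1; mb <= 256; mb *= 2)
        printf("%3zu MB working set: %5.1f ns/access\n",
               mb, ns_per_access(mb * 1024 * 1024 / sizeof(size_t)));
    return 0;
}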

It was meant for the IGP but exposed as an L4 cache, so the CPU got to use it too.

And then AMD includes HBM2