Why does Intel waste so much space including processor graphics instead of using that space to make better CPUs?

It isn't like most people who buy i7 will ever use processor graphics.

Oh look it's this shit thread again.
Fuck off OP, you already got your answer in the last 3 or 4 threads.

Side note, what is the point of Xeon E3s? They just look like the Core lineup, but some don't have graphics

It does? All the server CPUs and high-end CPUs

>Why does Intel waste so much space including processor graphics

My CPU has no such waste of space...

This is the 5960X die shot; they just disable 2 cores and some cache for the 5930K and 5820K.

>better CPUs
they should just make gpus instead. they are much faster at computing.

...

i7 is not for pro

>better CPUs?
why do you need those? encryption? you're not a TERRIST ARE YA?

???

Sorry, didn't mean to post that. It was my mistake.

lmfao, Saved

Thanks user, you scared me for a second there

My processor has no waste of die space either

Oh wow, now I understand AMD processors entirely. And the more cores meme.

pugetsystems.com/labs/articles/Intel-CPUs-Xeon-E5-vs-Core-i7-634/

There is literally no difference in the silicon between an e5 v3 xeon and an X99 5xxx series i7.

They have the same core design and everything; Xeons just have larger core counts, lower clock speeds, and locked multipliers.

They also add ECC support, vPro, and TXT. But those are all just enabled/disabled through microcode, the silicon is the same.

My PC can't handle all this wasted space..

I know. You can see from the images that the i7 is literally just the left half of that Haswell Xeon die I posted.

Both dies are Haswell cores of identical design in different SKUs, the i7 being the Haswell LCC die.

Because, reality check:

Intel makes most of its money from either corporate i3/i5 boxes that don't need a graphics card, or i3-i7 ULV mobile CPUs where there's no room for an additional graphics chip, or it would cost too much, put out too much heat, you name it

Triple-X K overclocking gamer memes are literally irrelevant

yeah, the kikes charge ~$1100 for an i7-6900K, but Summit Ridge will actually be a no-nonsense 8c as you probably already know.

ECC. Also, as a general rule, if you don't know what it's for, it's not for you

>Also, as a general rule, if you don't know what it's for, it's not for you
Not him but this attitude seriously annoys me. What's wrong with somebody asking a question about something they want to learn more about?

you're probably one of those retards who think scientific advancement scales linearly with how much money you mindlessly throw at it, too

the iGPU isn't starving the rest of the CPU of "valuable" die space, and both improves efficiency and reduces overall system cost

>It isn't like most people who buy i7 will ever use processor graphics
gaymers aren't the only people who buy i7s; there are plenty of applications that benefit from tech like hyperthreading, virtualization enhancements, bigger caches and more cores, and none of that needs a strong GPU at all

workstations are almost always featured with a dedicated GPU of some kind anyway
while servers either don't have one at all or have a limp-dick dedicated chip of their own
makes plenty of sense when you really think about it

> they are much faster at computing.
only for embarrassingly parallel floating-point heavy applications
i.e. 5% of things you do at home if you're lucky

On the subject of Xeons vs i7s, can somebody explain to me in simple terms why Xeons have locked multipliers?

I mean, what economic incentive is there for Intel here? Do they want gamers to buy cheaper i7s instead of more expensive xeons, or what?

E3 Xeons can be cheaper than i7s. Xeons are meant for absolute reliability, so Intel has no reason to ensure they can overclock, or to tempt buyers with potential energy savings from undervolting.

Y'all forgot to highlight the hardware NSA backdoors there.

Single-socket xeons and comparable Haswell-E/Broadwell-E i7s are usually priced within a few percent of each other.

The bigger difference is the differently clocked SKUs, done mostly for thermal design management and to keep the i7 market from being flooded with too many models.

You can undervolt a xeon (at least E5) if you want to. It's kinda absurd though, since Xeons have really low voltages to begin with.

Serious question: Wouldn't that drastically reduce the cost of the CPU, though? I thought mm2 of silicon was a major factor of chip cost.

I was under the impression that virtually all of the cost of a chip went into R&D

A large chunk of it is R&D, with the next largest chunk going to the mask set required to actually make the chip.

Individual chips may cost maybe a couple bucks to make, but yields and needing to recoup the costs of actually getting to that point in the first place drive the prices up.

Also, investment cost for the lithography equipment

it would increase yields, since smaller dies mean more chips per wafer and fewer lost to defects, but it would probably cost more in the end because you'd need a new mask set and production line just for the GPU-less die. intel right now just recycles defective chips into Xeon E3s, i5s and i3s with disabled EUs/cores/cache.
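To put rough numbers on that, here's a minimal back-of-the-envelope sketch using the textbook Poisson yield model Y = exp(-A*D0). The die areas and defect density are made-up illustration values, not Intel's actual figures:

#include <stdio.h>
#include <math.h>

/* Poisson yield model: fraction of good dies Y = exp(-area * defect_density).
   Every number below is assumed for illustration only. */
int main(void) {
    double d0        = 0.10;   /* defects per cm^2 (assumed)             */
    double wafer_cm2 = 706.9;  /* area of a 300 mm wafer, ignoring edges */
    double area[2]   = { 1.22, 0.85 };  /* cm^2: die with iGPU vs hypothetical GPU-less die */

    for (int i = 0; i < 2; i++) {
        double dies  = wafer_cm2 / area[i];   /* candidate dies per wafer */
        double yield = exp(-area[i] * d0);    /* fraction with no defect  */
        printf("%.0f mm^2 die: ~%.0f dies/wafer, yield ~%.0f%%, ~%.0f good dies\n",
               area[i] * 100.0, dies, yield * 100.0, dies * yield);
    }
    return 0;
}

A smaller die wins on both counts, but a separate GPU-less die also means separate masks and validation, and harvesting defective dies into lower SKUs already soaks up a lot of the loss.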

Actually I agree with you for the most part. But it kind of means that the differences between the xeon and i7 are very small and are not very interesting I guess.

For me it's mostly about ECC

i7s don't support ECC? How fucking jewish can Intel get?

pretty damn jewish

Funny thing though, the i3s support ECC - just not i5 or i7

>i7s don't support ECC?
Because that's Xeon territory. Neither do the i5s.
The Celeron, Pentium, and i3 all support ECC because they are dual-core only. There hasn't been a dual-core socketed Xeon in ages, so they're allowed to have that niche. Four cores?
Nope, buy your ass a $400 Xeon E3 or GTFO, goy.

>falling for consumer-class boy toys

Makes little sense, but it's true.

> i7-6900K - 8c*3.2GHz, no ECC: $1100
> E5-1660v4 - 8c*3.2GHz, ECC: $1100

> i7-6850K - 6c*3.6GHz, no ECC: $600
> E5-1650v4 - 6c*3.6 GHz, ECC: $600

I guess they had to gimp something on the i7 in exchange for unlocked clocks, or else the E5-1xxx v4 line would have floundered.

I think it's just a reliability thing. They probably don't want people buying Xeons for their reputation and then screwing it up by messing with the clocks or something.

That said, if you want ECC then you usually don't want to overclock, and vice versa, since the goals pull in opposite directions.

(ECC = reliability is more important than performance, overclocking = performance is more important than reliability)

There are Skylake Xeons in the $2xx range.
E5, E7, and D models also support RDIMMs.

Dees fags and their inferior processor designs.

Move over, time for perfection.

Fuck I hope Zen succeeds.

Even 8 cores at 3 GHz and 90% of Broadwell's IPC for $500 would be massively profitable for AMD and would completely fuck Intel's ability to command the obscene margins they do.

> plus I remember that AMD has included ECC in desktop chips since like the Athlon64 days

The main Power series is straight-out baller, but Cell was a janky piece of shit.

>a bunch of shitty half cores and a slow main core that has to do half of all the work
Na.

>plus I remember that AMD has included ECC in desktop chips since like the Athlon64 days
Not only that but all their FX desktop chips support:
ECC
IOMMU (AMD's equivalent of Intel's VT-d)
AES-NI
and can be overclocked if desired.

With Intel you have 123 different combinations of the aforementioned features. It's just annoying.

ECC support is in pretty much every die aside from something designed to be ultra low power, like a ~4 W mobile SoC. It's a few added bits on the PHY and a bog-standard feature of any decent memory controller.
AMD's desktop chips are all derived from their enterprise hardware, so they all technically support ECC. Though I don't recall any socket AM3+ boards supporting it, so it's effectively moot. Having support there means nothing if you can't ever use it.

my 5820k has IOMMU, AES-NI, and is overclockable.

It technically could do ECC but it's disabled by microcode. It's a bit of a moot point though since 98% of people will never need ECC RAM.

AMD desktop boards with ECC:
ASUS M5A78L-M (~ $50)
ASUS M5A97
ASUS M5A99X
ASUS TUF Sabertooth 990FX
ASUS Crosshair V Formula

fantastic, what would you say you need ECC for over normal RAM?

Protect myself against Rowhammer attacks from within the VMs that I run
Enjoy higher reliability

ZFS

Just out of curiosity, has anybody actually demonstrated a functional general-purpose Rowhammer attack?

It feels like something that would be virtually impossible to successfully exploit.

On March 9, 2015, Google's Project Zero revealed two working privilege escalation exploits based on the row hammer effect, establishing its exploitable nature on the x86-64 architecture.

The second exploit revealed by Project Zero runs as an unprivileged Linux process on the x86-64 architecture, exploiting the row hammer effect to gain unrestricted access to all physical memory installed in a computer.

In July 2015, a group of security researchers published a paper that describes an architecture- and instruction-set-independent way for exploiting the row hammer effect. Instead of relying on the clflush instruction to perform cache flushes, this approach achieves uncached memory accesses by causing a very high rate of cache eviction using carefully selected memory access patterns.

The proof of concept for this approach is provided both as a native code implementation and as a pure JavaScript implementation that runs on Firefox 39. The JavaScript implementation is called Rowhammer.js.
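For the curious, the clflush-based "hammering" those writeups describe boils down to a tight loop like the sketch below (x86, SSE2 intrinsic). This is only the access pattern, not an exploit: whether any bits flip depends entirely on the DRAM module, and a real attack has to pick two addresses that land in different rows of the same bank, which this sketch makes no attempt to do.

#include <stdint.h>
#include <stdlib.h>
#include <emmintrin.h>   /* _mm_clflush */

/* Read two addresses and flush them from cache each iteration, so every
   read goes all the way to DRAM. The offsets below are arbitrary. */
static void hammer(volatile uint64_t *a, volatile uint64_t *b, long iters) {
    for (long i = 0; i < iters; i++) {
        (void)*a;
        (void)*b;
        _mm_clflush((const void *)a);
        _mm_clflush((const void *)b);
    }
}

int main(void) {
    uint64_t *buf = malloc(8u << 20);        /* 8 MiB scratch buffer   */
    if (!buf) return 1;
    hammer(buf, buf + (1 << 19), 1000000);   /* two slots 4 MiB apart  */
    free(buf);
    return 0;
}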

Does ECC really protect against it?

I thought ECC worked by having the chipset use idle time to scan the memory for bit corruptions, instead of actually being in the path of every lookup/request.

>Does ECC really protect against it?
Not 100%, but it helps a lot: single-bit flips just get corrected, and double-bit flips get detected instead of silently used, so an attacker has to flip several bits in the same word in exactly the right way.
As far as I know there has been no proven row hammer exploit against ECC RAM.

ECC is checked live with every access

>ECC is checked live with every access
But that's how my BIOS described it working with the default configuration. (There was also a setting to do it for every single access, but I think that's slower or something?)

The most common ECC setup is 72b/64b, where the extra eight bits allow correcting one flipped bit and detecting that two bits were flipped (but not which ones), and any word of data read from memory is checked every time.

"Scrubbing" main memory is just the CPUs' memory controller performing reads over the entire physical memory space to guarantee a maximum time interval between checks for any given location.

The parity-bit logic is basically a big XOR tree that takes virtually no time to generate (not that store latency matters) or validate.

The hit for scrubbing is bigger because you're wasting bus bandwidth, but benchmarks usually show the penalty for ECC being about 0.5%, so it's completely negligible.
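If it helps to see the "big XOR tree" concretely, here's a minimal sketch. Each check bit is just the parity of some subset of the 64 data bits, and the syndrome (stored check bits XOR recomputed ones) goes nonzero when a covered bit flips. The masks below are placeholders, not a real SEC-DED code; a real 72/64 code chooses them (the columns of its H-matrix) so that every single-bit error gives a unique syndrome and double-bit errors are still detectable.

#include <stdio.h>
#include <stdint.h>

/* Parity of a 64-bit word via a 6-level XOR fold, the kind of tree
   the ECC check-bit logic is built from. */
static unsigned parity64(uint64_t x) {
    x ^= x >> 32; x ^= x >> 16; x ^= x >> 8;
    x ^= x >> 4;  x ^= x >> 2;  x ^= x >> 1;
    return (unsigned)(x & 1);
}

/* Each check bit = parity of the data bits selected by one mask. */
static uint8_t check_bits(uint64_t data, const uint64_t masks[8]) {
    uint8_t c = 0;
    for (int i = 0; i < 8; i++)
        c |= (uint8_t)(parity64(data & masks[i]) << i);
    return c;
}

int main(void) {
    /* Placeholder masks: every data bit is covered, but these are NOT
       the columns of a real SEC-DED code. */
    const uint64_t masks[8] = {
        0xAAAAAAAAAAAAAAAAull, 0xCCCCCCCCCCCCCCCCull,
        0xF0F0F0F0F0F0F0F0ull, 0xFF00FF00FF00FF00ull,
        0xFFFF0000FFFF0000ull, 0xFFFFFFFF00000000ull,
        0x0F0F0F0F0F0F0F0Full, 0x5555555555555555ull,
    };
    uint64_t word   = 0xDEADBEEFCAFEBABEull;
    uint8_t  stored = check_bits(word, masks);   /* written alongside the data */

    uint64_t corrupted = word ^ (1ull << 17);    /* one bit flips in DRAM */
    uint8_t  syndrome  = (uint8_t)(stored ^ check_bits(corrupted, masks));

    printf("syndrome 0x%02x (nonzero => error caught on read)\n", syndrome);
    return 0;
}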

>half cores
each SPE implements a full instruction set with SIMD capabilities, what the fuck are you talking about?

Economies of scale

>It feels like something that would be virtually impossibly to successfully exploit.
Yeah, especially since he's talking about doing it from a VM. VMs tend to give access to blocks of data. If I'm reading this right, each row has 1k column addresses that hold 8 bytes apiece. So if you have two dual-rank DIMMs, that's 32kB of vulnerable memory below your block and another 32kB above. Since that's nothing data-size wise, it seems like the way around it is to implement a neutral zone where you don't store anything and don't give the VM permission to touch either side.
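Spelling out that 32kB figure, using the same assumptions the post makes (1k column addresses of 8 bytes per row, two dual-rank DIMMs):

#include <stdio.h>

int main(void) {
    int bytes_per_row = 1024 * 8;               /* 8 KB per row, per rank    */
    int ranks         = 2 * 2;                  /* 2 DIMMs, dual-rank each   */
    int per_side      = bytes_per_row * ranks;  /* adjacent rows on one side */
    printf("%d KB per side, %d KB total\n", per_side / 1024, 2 * per_side / 1024);
    return 0;
}

Which also lines up with the ~64kB neutral zone mentioned further down.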

If you have the ability to set memory you don't have permission to access to an arbitrary value, then I don't see how ECC helps you. ECC is not designed to be used as a secure hash; the "collisions" are easily calculated. Furthermore, the 8 bits of redundancy are stored just like the other 64. They are just as vulnerable to manipulation.

The complication added by ECC is that you have to guarantee you flip absolutely every bit in the target word the way you expect, including the ECC bits, in the time between scrub passes or random accesses, which could be a lot sooner than you expect if you're targeting a PTE or frequently accessed instructions, etc.

In a system without scrubbing, you don't have to worry about being caught half way through an attempt.

>all this wasted space

>instead of using that space to make better CPUs
They are using it to make better CPUs, have a look at their enthusiast lines.

>It isn't like most people who buy i7 will ever use processor graphics.
I'm using mine for my secondary screens, my graphics card doesn't idle at 1.2GHz this way. It's also useful when installing Linux, since most distros seem to outright crap out on Pascal.

I can't find anything definitive on scrub rates. This seems to suggest once a day leads to a 10000 MTTF
people.csail.mit.edu/emer/papers/2004.03.prdc.cache_scrub.pdf

As for random accesses, getting caught on an ECC machine just means it corrects the bit and you get to keep trying. On a non-ECC system you have to get every bit the way you want it or risk a crash when you're caught.

It seems like a ~64kB neutral zone is the way to go here. If you need to protect against cosmic rays and rf noise, get ECC. This rowhammer thing is some tinfoil shit that works in a lab.

Does ECC memory have any use for desktop users? What do servers and shit utilize it for?

For desktop users, not much. Servers and enterprise users though, it corrects single bit errors and detects double bit errors.
Bit errors can come from anything from a faulty power supply, to a slightly flawed memory IC, to the memory controller acting weird. The biggest source of errors though is cosmic rays. One ray generally packs enough energy to shotgun a rain of relativistic subatomic particles down from the upper atmosphere, and one of these subatomic particles can hit a memory cell on a DRAM chip hard enough to flip the bit.
en.wikipedia.org/wiki/ECC_memory
Read the 4th paragraph in the Problem Background section for more relevant info.

Point is, it's enough of an issue for mission-critical systems that technologies were developed to defend against it.

>Does ECC memory have any use for desktop users?
Avoiding that crash every 10 years when a cosmic ray hits your shit in just the right spot.

>What do servers and shit utilize it for?
Companies that run tons of servers have thousands of DIMMs and run them with uptimes in the months and years range. TBs of ram that is in use all the time is a lot of bits.

Thanks anons

it's also for catching so-called "hard" errors as components go bad.

it's estimated that more errors in the field are caused by memory chips that have an occasionally flaky cell or column/sense-amp than by cosmic rays, and constantly checking for these strongly improves system reliability.

even further reliability can be achieved by chipkill-type systems, where the DIMMs are populated with enough narrow chips (4b wide x 18?) that a bad one can be isolated and the non-parity bits can be guaranteed to be moved to one of the remaining chips.
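Spelling out the x4-wide arithmetic that layout implies (assuming the standard 72-bit ECC data path per rank):

#include <stdio.h>

int main(void) {
    int chips = 18, width = 4;            /* eighteen x4 DRAMs per rank     */
    int total = chips * width;            /* 72 bits per transfer           */
    int data  = 64, check = total - data; /* 64 data + 8 check bits         */
    printf("%d bits/beat = %d data + %d check; one dead x%d chip costs %d bits\n",
           total, data, check, width, width);
    return 0;
}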

is this a daily thread or something? I remember seeing this very same OP like a month or two ago...
is Sup Forums really this bad?

Humor this shitty example for a moment:
Some system at your bank has the binary value 110110011111000110110001 stored in RAM representing a sum of $14,283,185, relating to an account, a money transfer, whatever

Some cosmic ray or hardware failure happens to fuck up a bit in this memory location and it isn't corrected; your new number is 11011001[0]111000110110001, or $14,250,417

You've just lost somebody around ~$33,000 with just one bit flip.

And G*d forbid the most significant bit flips instead; then you'll lose about $8.4 million.
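Checking that arithmetic (the value is 0xD9F1B1 in hex, and the bracketed bit is bit 15):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t balance = 0xD9F1B1;              /* 0b110110011111000110110001 = 14,283,185 */
    uint32_t flipped = balance ^ (1u << 15);  /* the single flipped bit from the example */
    printf("original  %u\n", balance);            /* 14283185          */
    printf("corrupted %u\n", flipped);            /* 14250417          */
    printf("lost      %u\n", balance - flipped);  /* 32768   (~$33k)   */
    printf("MSB flip  %u\n", 1u << 23);           /* 8388608 (~$8.4M)  */
    return 0;
}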

In a datacenter especially, the sheer number of failure points increases the chances of this significantly. That's one of the many reasons banks still employ exotic hardware like mainframes that not only correct errors but also execute the same instructions multiple times and check the results to make sure they're consistent. A little error can make a big difference, even if it doesn't happen often.

Hell, even as a home user, you can still get fucked by bit flips, arguably often enough to justify the use of ECC memory:
dinaburg.org/bitsquatting.html

It's called the enthusiast platform.

it's a yield-increasing technique.
they know their iGPU is shit anyway, so it doesn't matter much if it ends up non-functional.

>mfw my ancient Xeon X5650 @ 4.2 GHz matches (on average) an i7-6700K in multithreaded tasks
>mfw it doesn't bottleneck my GPU in any games I play

God I hope AMD's Zen is competitive and forces Intel off of this 5%-IPC-gain-every-year bullshit.

Intel isn't settling for those performance uplifts because of a lack of competition; it's all their engineers can manage. Integer perf per clock is absurdly hard to keep squeezing out.

according to the CPU-Z benchmark, my dual westmere Xeons are on par with a 6950X in multithreaded tasks and on par with a 3960X in single-threaded tasks. hardly any reason to upgrade from LGA1366

I am very surprised that a HD 4000 can play warframe at low settings.
Are there any igpu chips that can max crysis 3 yet?

Good question

I have a 6600K and it makes no sense that an OCable chip would have integrated graphics.

I literally just bought a 6700k earlier today

did i make a mistake?

Of course not

Nah, if an i7-920 is still viable 8 years later I doubt an i7-6700K is going to go obsolete any time soon. It's all about GPUs and SSDs these days, CPUs just aren't as important in most cases.

There's no real point to low-core-count Xeons, excluding use cases where you want low clock speed and access to ECC (e.g. using an Intel Xeon E5-2603 V4 for a file server running nightly backups of surveillance footage or financials).

oops, meant E3-1240L V5. Tired.