'''Intel i9'''

>before, some 2-core/4-thread CPUs for ultrabooks were i7s
>now, a 4-core/4-thread CPU is an i7

Why is this allowed? And at least the IPC will be better, right? Because I want my gayming!

userbenchmark.com/UserRun/2730950

userbenchmark.com/UserRun/3602772

videocardz.com/69457/specifications-of-intels-core-x-i9-and-i7-series-supposedly-leaked

>at least the IPC will be better?
Nope.
>because I want my gayming!
Better cling to those 4 cores like your life depends on it, then.

Also, those UserBenchmark results are basically bullshit if you compare them to actually released CPUs and see how they should stack up.

Intel's new 600W Prescott 2.0.

Intel, we are contributing to global warming, are you?

But in both benches, only 1 of 8 DDR4 slots was populated, at 2133 MHz. Probably an engineering sample.

>Also, those UserBenchmark results are basically bullshit
Why?

day one buy

cpu.userbenchmark.com/Compare/Intel-Skylake-X-12-Core-HEDT-12C-vs-Intel-Core-i7-7700K/m278103vs3647

The numbers here make no sense. A 3 GHz turbo part has the same single-core score as an i7-7700K? Yeah, no.

cpu.userbenchmark.com/Compare/Intel-Skylake-X-10-Core-HEDT-10C-vs-Intel-Core-i7-7700K/m233971vs3647

And then you look at the results for the 10 core and they're completely different.

>intel releases new $3000 CPU

Wow! How can AMDoomed even compete? Literally on suicide watch!

...

>anything above 4c is EXTREME
>most likely $500 minimum (excluding motherboard)
>obviously no upgrade path

>ryzen, slightly worse performance
>upgrade path available
>decent chip + motherboard goes for $300-400 depending on choice

A pity Ryzen is gimped with Nvidia cards; has that mystery been solved, by the way?

Who fucking cares, it's just a name. Are you a child?

>want a volkswagen
>get a porsche
>fuck yeah! Who fucking cares it's just a name

(You)

Nice to see processors finally getting more I/O performance and PCIe lanes.

Intel's biggest problem here is going to be that AMD will be able to sell a 16c/32t CPU with a 3.6 GHz base and 4.0 GHz turbo for less than what Intel will want to charge for the 7900X, and the 7900X and 7920X will simply get eaten alive by it in every respect except pure single-core performance (but then why wouldn't you buy a 7740K if that's all you cared about?)

IIRC it has something to do with threading implementation in nVidia drivers. Something with threads being switched around between the core complexes and the resulting latencies gimping FPS.
Of course, that is to be expected with nVidia's "just duct tape it together somehow" approach to everything.

>PCIe lanes
AMD's HEDT platform will have more, 52+ PCIe lanes
wccftech.com/amd-ryzen-16-core-threadripper-whitehaven-4094-socket/
Nobody buys HEDT for single core.
Has any more testing been done with Ryzen CPU/NVidia GPU on one CCX?

What do you think of the results in this video?

youtube.com/watch?v=nLRCK7RfbUg
youtube.com/watch?v=zmOglA32uRU

Mostly. Nvidia's drivers simply don't split the work into enough threads; you can see this if you find any 1080 or 1080 Ti SLI tests on Ryzen CPUs, where the additional threads from the second card give a huge boost to FPS in several titles.

Do the tests after programs have new AVX code paths, like x265 encoding.

Clock it to 5 GHz × 12 cores vs 4 GHz × 16 cores.
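
Back-of-the-envelope, and only if the workload scales perfectly across all cores (a big if), the aggregate clock throughput of those two hypothetical configs works out like this; the clocks and core counts are just the ones quoted above, not confirmed specs:

```python
# Rough aggregate-clock comparison for the two configs quoted above.
# Assumes perfect scaling and identical IPC, which no real workload gives you.
configs = {
    "12 cores @ 5.0 GHz": 12 * 5.0,
    "16 cores @ 4.0 GHz": 16 * 4.0,
}
for name, ghz_cores in configs.items():
    print(f"{name}: {ghz_cores:.0f} GHz-cores aggregate")
# 60 vs 64 -> the 16-core wins on paper, before any IPC difference is counted.
```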

A pity I built my PC about 6 months before Ryzen came out (I moved and needed to get myself set up), or I'd have jumped to them, even just to the R7 line with the 1700. That timing is literally the only reason I went Intel this time round.

Also, yet to be seen is how much of a penalty the 7900X and 7920X will take from the reduced L3 cache compared to Broadwell-E.

>Of course, that is to be expected with nVidia's "just duct tape it together somehow" approach to everything.
As opposed to AMD's "oh hey it doesn't work, who gives a fuck, let's just push new hardware where it still doesn't work!" approach with their DX11 drivers and OpenGL.

It fits the hypothesis. nVidia consistently performs worse with Ryzen when using DX12 where multi-threading actually matters.
They should fix their driver because DX12 and Vulkan are already here and they're not going away.

>$3000 12 core CPU
>$1500 10 core CPU
>$1000 8 core CPU
>$500 6 core CPU
>baby lake reloaded v2.0
>i7 without HT

Sadly SLI on Ryzen is a bad idea due to the limited PCIe bandwidth: it can't provide 16 lanes for 2 cards, and unless I'm mistaken there are no Ryzen mobos with a PLX switch (which would do the job for inter-GPU communication). At 2160p, where SLI is actually useful, it can really choke on PCIe bandwidth and scale like shit or even perform worse than single-GPU. Not all games do this of course, but some are really picky. I just tried Quake Champions and I've seen like 60% PCIe bus load on x16; that's more than x8 can provide, and the slower transfers will impact performance even if the game doesn't constantly hit the bandwidth cap.
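
For reference, a rough sanity check on that claim, assuming PCIe 3.0 link rates (~985 MB/s usable per lane after encoding); the 60% figure is just the bus-load reading quoted above:

```python
# Approximate usable bandwidth per PCIe 3.0 lane (8 GT/s, 128b/130b encoding),
# ignoring protocol overhead.
MB_PER_LANE = 985

x16 = 16 * MB_PER_LANE / 1000  # GB/s
x8 = 8 * MB_PER_LANE / 1000    # GB/s
observed = 0.60 * x16          # the 60% bus load reported on x16

print(f"x16: {x16:.1f} GB/s, x8: {x8:.1f} GB/s")
print(f"60% of x16 = {observed:.1f} GB/s -> more than x8's {x8:.1f} GB/s")
```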

Curious, as this benchmark seems to indicate that the better saturation of Ryzen's cores appears to overcome any PCIe bandwidth limitations.

ARE YOU READY FOR THE HOUSEFIRES?

That's too low res to get fucked. 1440p SLI is generally fine on x8/x8. Maybe some super-exceptions which don't work too well anyway would get hit, like DOOM which scales poorly even on x16/x16.

>No mention of anything TDP
Nice controlled (((leaks)))

Ah, I see what you mean. I'm only concerned with SLI at the moment as it's very useful for showing how much Nvidia's poorly threaded drivers may be holding Ryzen back, and it bookends the argument made using AMD GPUs.

>They should fix their driver because DX12 and Vulkan are already here and they're not going away.
There's another good reason for them to fix that shit, and that's Intel's upcoming Coffee Lake-S 6 cores.

Here are the TDPs: 160W for the 12-core. AMD's 16-core is 180W.

"i7" doesn't mean top performance.
It means top quality.

Power consumption (and thus less heat and better battery life) also plays a role in the naming scheme.

That didn't make a lot of sense now, did it?

Why doesn't nvidia make their own multicore processor to compete against AMD?

>T-T-T-TEGRA

Because they would also be competing against Intel

Not allowed to use the x86 license, that's why they focus on CUDA

For how long will that license/patent hold?

It sounds like it's keeping a lot of progress from happening.

The joke is that it's going to be worse for games than Ryzen.
And cost twice as much.

Yeah, the PCIe bandwidth constraints don't have anything to do with Ryzen in particular. Ryzen is just in the unfortunate spot where the CPU itself doesn't have sufficient lanes and there are also no mobos with PLX PCIe switches for Ryzen (AFAIK).

Intel's own 16 lane chips suffer from the same problem, but there are PLX mobos which solve it for SLI, because the cards need to talk to each other and not the CPU, as such the chip gets them full 16 lanes of inter-GPU bandwidth even if there aren't 32 lanes to the CPU. This would work fine on Ryzen too I imagine, if only somebody actually built a mobo like this.

4 ever I think

Yeah...no

...

>it's going to be worse for games than Ryzen
Nobody buys any HEDT platform for muh vidya gaymes.
>And cost twice as much.
This part is probably true.

>The joke is that it's going to be worse for games than Ryzen.
Yeah, with 4.5 GHz out of the box and higher IPC than Ryzen, I'm sure that's the case.

Why is AMD, Ryzen included, so fucking bad at Wii U and PS2 emulation?

So, given that Kaby Lake is about 5-15% better clock-for-clock than Ryzen and about 10% better at multithreaded workloads (i7-7700K vs Ryzen 5 1500X at 4.0 GHz), how is this a bad thing again?

The i9-7920X will be over 200% better than the Ryzen 7 1800X at the same clock speed, so it's worth twice as much.
The i9-7900X will be 170% better than the Ryzen 7 1800X.
The i9-7820X will be 125% better than the Ryzen 7 1800X and overclock better (Skylake's clock jump compared to Broadwell is massive).
The i9-7800X will be about the same as the 1800X and 125% better than the 1600X.
The i7-7740K will be better than all of the Ryzen series in any game, current or older.

What's the issue here again?

The issue is
>boohoo it costs a lot

Do gaymens blame jewtel for having even more cores than Ryzen now?

Yeah

Nvidia's drivers are shit and/or nvidia is intentionally fucking with AMD performance using their drivers

Nvidia has a bad habit of not actually rendering shit in a scene. This is always brushed off as a 'mistake', but it keeps happening over and over again. Nvidia's color palette is duller, Nvidia's driver stops rendering shadows, or it has awful texture pop-in or permanent low-res textures that people blame the developer for.

I'm at the point where I don't buy this bullshit being mistakes anymore. This shit looks intentional. They are intentionally gimping what they render for the sake of an FPS boost while hoping the average customer won't notice the loss in quality (they don't). They are intentionally gimping their hardware when it works with AMD CPUs. They use GameWorks not to promote new technology, but to lock AMD out and force worse performance on AMD cards. They even tried to lock in customers with an Nvidia tax on monitors, so idiots are looking at double the price to move out of the Nvidia 'ecosystem' by having to buy an entirely new monitor if they go AMD. Their GPU drivers literally spy on their customers and sell the data to advertisers, and they even introduced a fucking paid-subscription model for driver features that I'm 100% positive they were going to lock Shadowplay behind until AMD finally got off their ass and introduced ReLive.

Oh but wait OH MY GOD AMD PUT A THINGY ON MY DESKTOP JESUS CHRIST HOW CAN THEY DO THIS WHAT AN AWFUL COMPANY

I hate the tech reviewer shills that turn a blind eye, I hate the shill marketers who keep pushing nvidia's bullshit, and I hate nvidia itself for being an underhanded company that would rob their customers blind given a moment's chance.

tl;dr fuck nvidia

>meme the post

>4.5 GHz
That's single-core turbo. We already know the 1800X can hit 4.0 GHz on all cores, while having significantly more L3 cache.

>Higher IPC
No. Ryzen has higher IPC, Intel's single core performance advantage is purely due to clock rates, not IPC.

Pay attention. Ryzen has pretty much the same or slightly higher IPC than Intel's processors. It's just that they can't get over 4 GHz, while Intel's go to like 4.6 GHz and beat them by a decent enough margin in single-core applications.

If it's 4.5GHz out of the box, it's not going to cost twice as much, it's going to cost roughly cores/4 times as much.

Notice how the base clocks get lower as the number of cores get higher.
This isn't the case with Ryzen.

But it will be priced accordingly
The i9-7920X is easily worth over twice as much as the 1800X. $1200 is a fair price for an HEDT CPU that can do the workload of two 1800Xs combined. The i9-7900X is worth at least $1000 by the same metric. The i9-7820X should be $200 more than the Ryzen 1800X because it's $200 more capable. The Ryzen 1800X can only really be compared to the lowly i9-7800X in terms of performance and price.

>No. Ryzen has higher IPC
Eeuh, no? Ryzen has Broadwell IPC.

>Just bought a 7700K
It's not fucking fair!

From the calculation:
Ryzen performs like x at 4 GHz; the i7-7700K performs at 1.1x with a 1.2× higher clock speed.
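
Spelling that calculation out (the 1.1× and 1.2× factors are the ones asserted above, not measured numbers): if the 7700K delivers 10% more performance while running a 20% higher clock, its per-clock performance comes out below Ryzen's.

```python
# Per-clock (IPC-proxy) comparison using the ratios claimed above.
ryzen_perf, ryzen_clock = 1.00, 4.0      # baseline: Ryzen at 4.0 GHz
i7_perf, i7_clock = 1.10, 4.0 * 1.2      # 7700K: +10% perf at +20% clock (4.8 GHz)

ryzen_per_clock = ryzen_perf / ryzen_clock
i7_per_clock = i7_perf / i7_clock

print(f"Ryzen perf/GHz: {ryzen_per_clock:.3f}")
print(f"7700K perf/GHz: {i7_per_clock:.3f}")
print(f"Ratio: {i7_per_clock / ryzen_per_clock:.2f}")  # ~0.92 -> ~8% lower per clock
```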

Eeuh..no

>So, given that Kaby Lake is about 5-15% better clock-for-clock than Ryzen
That's wrong. Go read Agner.

>and about 10% better at multithreaded workloads (i7-7700K vs Ryzen 5 1500X at 4.0 GHz), how is this a bad thing again?
That's also wrong. As a result, the rest of your post can be safely discarded.

>euuh
hey I found a pic of you

>But it will be priced accordingly
>The i9-7920X is easily worth over twice as much as the 1800X. $1200 is a fair price for an HEDT CPU that can do the workload of two 1800X combined.

>Source: My shekelchasing fantasies.

>5 GHz × 12 cores
at 666W maybe

No, Ryzen has just under Haswell IPC for nearly all workloads except AES encryption and SSE workloads.
>Go read Agner.
Wrong. He's using very specific benchmarks that take advantage of Ryzen's strengths, not benchmarks that see the whole picture. Ryzen is at best a Haswell equivalent.
>That's also wrong
It is not. Most multithreaded benchmarks show that the i7-7700K at stock is about 15% faster than a Ryzen 5 1500X at 4.0 GHz. Since the 7700K is clocked faster but has less than half the L3$, it's safe to assume that it's about 10% faster when all things are considered.

Try again, AMDrone

>Source: My ass and my shekelchasing fantasies.

Go through Agner's report on Ryzen's performance with a fine toothed comb. And while you're at it, learn how to shit on a toilet.

Cemu is alpha quality.

Wait, stop.
>i7-7640K
>4c/4t

What and why?

Go look up what IPC means, moron.

I think you're the one who needs to look up what IPC actually means, idiot. IPC varies wildly based on what instruction set you're using or not using. If you have a CPU that offloads some of the workload onto a dedicated built-in ASIC, then the IPC will be significantly higher than if the CPU had to perform the same workload using brute strength.
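
To make the definition concrete, here's a minimal sketch of how IPC and effective performance are usually derived from counters, with purely made-up numbers for a workload that either hits a dedicated unit (e.g. AES-NI) or falls back to generic integer code:

```python
# IPC = retired instructions / elapsed core cycles.
# Effective throughput = IPC * clock frequency.
def ipc(instructions_retired: int, cycles: int) -> float:
    return instructions_retired / cycles

# Illustrative only: the same logical workload expressed two ways.
# With a dedicated AES unit the work takes far fewer instructions *and* cycles.
hw_path = ipc(instructions_retired=1_000_000, cycles=800_000)     # ~1.25 IPC
sw_path = ipc(instructions_retired=12_000_000, cycles=9_000_000)  # ~1.33 IPC

print(f"AES-NI path IPC:   {hw_path:.2f}")
print(f"Software path IPC: {sw_path:.2f}")
# Despite similar IPC, the hardware path finishes ~11x sooner (800k vs 9M cycles),
# which is why "IPC" only means something for a fixed instruction stream.
```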

Even if all of this bullshit is true (It isn't) AMD is coming out with their own HEDT platform with more cores, more performance and more PCIe lanes at a lower price.

The fact you're even trying to compare Intel's insanely overpriced HEDT CPUs to AMD's consumer CPUs shows just how fucked Intel is.

>"i7" doesn't mean top performance.
It means top quality.
Is that so. Then why did every i7 since ivy Bridge use jizz as Tim?

Heh, please delet this

Yeah..no

>top quality
>have to delid not to burn your house down

You're the guy who was mentioning 2160p DOOM choking on SLI from before? Think you mentioned another game as well.

But if you're saying that not even 2x16 lanes sorts it out, there must be something wrong going on in software for that kind of extra bandwidth not to alleviate it. There have been a few SLI benchmarks floating around which show some pretty good gains at 2160p; I'm just curious why only a couple run into this problem. It could be the other way around: if the 60% saturation on a single card is being split, then it might be a lack of bandwidth to and from the CPU and GPU, not only inter-GPU.

Is there any evidence of this happening on crossfire setups as well?

Upgrade path is the biggest meme in hardware. At most, people might upgrade from an i5 to an i7 when they get a higher budget, or buy some shitty Pentium machine and pop something stronger in there. How many people actually care about how long a socket lasts? If you keep the same CPU for 4 years or so, then by the time you need a new one you'll probably get a new mobo anyway for better PCIe or DDR support. Not to mention chipsets are sometimes incompatible anyway. Sockets only matter to people who upgrade CPUs every year or two, and those people are retarded anyway since the performance difference between CPUs of different years is so small, at least compared to GPUs.

Everyone and their mum was running Intel so that's what was "optimized" for.
I used quotes because those emulators aren't really optimized.

>You're the guy who was mentioning 2160p DOOM choking on SLI from before? Think you mentioned another game as well.
Yeah. I also mentioned Quake Champions, which I tried out today for the first time.

>there must be something wrong going on in software for that kind of extra bandwidth not to alleviate it
The core of the problem seems to come from modern temporal techniques combined with AFR. If GPU2 needs data from the previous frame, it doesn't actually have it, since GPU1 rendered that frame. That means the data needs to be pulled from GPU1, and this can only happen over the SLI bridge or PCIe. But the bridge is slow (like 3 GB/s on an HB bridge IIRC), so most of it goes over PCIe (probably). The severity of the bottleneck I guess depends on how much data the game needs to pull from the previous frame and, I assume, how fast it can get it.

I know temporal AA is a cause of b/w bottlenecks, but I don't know what else causes them. The more you increase the resolution the larger the buffers that GPUs have to throw around grow too, at 2160p and above this sometimes becomes a problem.

I don't think excessive b/w to the CPU is needed - because it should show up in single-GPU as well then, but it's never a problem there beyond some very, very tiny drops in FPS. I don't mean to say this is the only reason SLI ever scales poorly, but it's a contributing factor. DOOM probably scales poorly for multiple other reasons too, but it uses temporal AA and does respond to faster PCIe.
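
As a rough feel for the numbers involved (buffer sizes, buffer count and frame rate below are assumptions for illustration, not measured from any game): copying even a couple of previous-frame buffers per frame at 2160p and high FPS already gets into HB-bridge territory.

```python
# Rough estimate of inter-GPU traffic for AFR with temporal techniques at 2160p.
# All sizes and rates below are assumptions for illustration only.
width, height = 3840, 2160
bytes_per_pixel = 4        # e.g. an RGBA8 color buffer
buffers_per_frame = 2      # say, previous color + motion vectors
fps = 90

bytes_per_frame = width * height * bytes_per_pixel * buffers_per_frame
gb_per_second = bytes_per_frame * fps / 1e9

print(f"Per-frame copy: {bytes_per_frame / 1e6:.0f} MB")
print(f"Sustained: {gb_per_second:.1f} GB/s")  # ~6 GB/s with these assumptions
# That already dwarfs a ~3 GB/s HB bridge and eats most of PCIe 3.0 x8 (~7.9 GB/s).
```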

>Is there any evidence of this happening on crossfire setups as well?
I don't think I've seen benchmarks, but I had 290X CF x8/x8 up until like a year ago. I didn't know about the b/w constraint, but now that I think about it I remember the AMD driver for Witcher 3 recommending to turn off AA - which is known to hurt scaling due to PCIe bandwidth in SLI, so I imagine it must be the same thing. The AA in TW3 is temporal AA too.

Housefires inbound

>Has any more testing been done with Ryzen CPU/NVidia GPU on one CCX?
What THE FUCK?
Do you seriously expect any reviewer to actually review and test something?!
ARE YOU INSANE?!

>benchmarks with ICC that has always gimped anything not Intel
wew lad, nice try to not look like a shill
Agner isn't testing with benchmarks, he's testing latencies for each single instruction the CPU can execute; nice way to look like an idiot, though
Also, there's no compiler currently producing good executables for Zen. GCC's znver1 is basically a copy of bdver3 with a few patches on top; -march=haswell produces faster executables, that's how bad it is
MSVC hasn't done significant updates either

Not that user, but what do you think about two GPUs on a single card connected with a fast proprietary bus? Wouldn't it solve the memory copy problem?

It would hypothetically help tremendously at least; the "ideal" solution would probably be for both GPUs to be able to access each other's memory at full speed (at least for reading). But that doesn't exist, and dual-GPU cards use a PCIe switch, which can also be found on your mobo if you're using 2 discrete cards.

Pic related, a Radeon Pro Duo with its on-board PLX PEX8747 PCIe switch. So those 2 GPUs still talk over PCIe, it's just on-card instead of going through the mobo.

>MFW Navi will literally be able to do this

NVIDIA has NVLink too, but I wonder if we're ever going to see anything like that on desktop parts anytime soon. AMD dropped the CF bridge entirely, I'd be somewhat surprised to see them come up with a super high-speed version of it now.

Isn't NVLink just some kind of well-clocked PCIe thing? But I think it was only GP100; GP102 didn't even have/use it.

Infinity Fabric on Zen and Navi is probably using the same kind of idea, but either it's using PCIe or it's just compatible with it. There was HyperTransport 3+ mentioned in some of the slides though; was that just on-die, or was it inter-die as well?

Navi is going to use MCM, as far as I'm aware.

Explain what?
They get to hurt AMD's performance without looking like the bad guys; if push comes to shove, they can call AMD shit because of the way they made their cores.

What reason would Nvidia ever have, outside of a lawsuit, to make their shit work better on AMD?

Does this imply 4 core ultrabooks or something? If yes I'm going to have to put off buying a new ultrabook for another year.

From my understanding:

AMD has a hardware scheduler, which is far more efficient than Nvidia's and has lower overhead; when it's actually used, it beats out Nvidia on equally performing cards. However, game devs are shit and can't be fucking arsed to multithread.
Nvidia's solution to game devs being shit is to force multithreading through the driver, as they use a software scheduler that is more flexible.

As for OpenGL, it's cobbled-together shit and AMD's implementation of it is perfect.
Nvidia, on the other hand, hacked the shit together in such a way that it's barely OpenGL at this point, but it performs better.

Both of these issues are more a case of outside factors fucking AMD over rather than AMD being shit.
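
A toy illustration of the "force multithreading through the driver" idea described above; this is a conceptual sketch only and resembles nothing in Nvidia's actual driver: a single submission stream gets carved up and handed to worker threads behind the application's back.

```python
# Conceptual toy only: a driver-side software scheduler fanning one submission
# stream out to worker threads. Real GPU drivers are nothing like this simple.
from concurrent.futures import ThreadPoolExecutor

def build_command_buffer(draw_calls):
    # Stand-in for the CPU-heavy work of validating state and encoding
    # draw calls into GPU commands.
    return [f"cmd({d})" for d in draw_calls]

def submit_frame(draw_calls, workers=4):
    # Split the frame's draw calls into contiguous chunks, encode them in
    # parallel, then concatenate so submission order is preserved.
    # The application still sees a single-threaded API.
    chunk = -(-len(draw_calls) // workers)  # ceiling division
    chunks = [draw_calls[i:i + chunk] for i in range(0, len(draw_calls), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        encoded = pool.map(build_command_buffer, chunks)
    return [cmd for buf in encoded for cmd in buf]

print(len(submit_frame([f"draw_{i}" for i in range(1000)])))  # 1000 commands encoded
```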

There is no issue. AMD poorfags will avoid anything above $400, and enthusiasts who want performance are going to buy the 8-core i9 and higher. Intel has higher clock speeds, higher IPC, and better single- and multithreaded performance compared to Ryzen with this.

The 16C from AMD is out of the question since it's 180W+; I don't even want to know what this shit needs during load.

So people who own Ryzen computers are more likely to buy Nvidia graphics cards over Radeon. If the consumer is informed enough (which is more likely if they even considered Ryzen, let alone bought one), they can find that Nvidia takes a performance hit on their system.

If this performance hit is actually true and gains media attention, it'll be a bad PR hit and Nvidia will fix it before the pin drops. I'm suspicious about how little furor there is about it yet.

Nvidia is a near monopoly in the consumer space, as even when AMD is better people still buy Nvidia. All that would happen here is 'oh, Nvidia performs worse on Ryzen, looks like I'm getting Intel'.

>Isn't NVLink just some kind of well-clocked PCIe thing?
I'm not sure on the details or whether it's at all similar to PCIe on the protocol/interface level, but it's supposed to be much faster anyway and designed for GPU-to-CPU and GPU-to-GPU communication in HPC shit.

The thing is, consumer graphics cards will have to keep using PCIe since it's not going anywhere, so I doubt Nvidia or AMD will do something as drastic as changing the card connection, especially when mGPU is so very niche and there are no b/w constraints otherwise. PCIe 4 is also coming.

Yeah I've read about it too, but I'm not sure how exactly AMD is going to use that for graphics cards or how it's going to work, or if they're doing it at all in the end.

If you have a 1600 or a 1700, why even use Shadowplay/ReLive? You get better quality out of OBS.

>higher clock speeds, higher IPC, better single and multithreaded performance compared to Ryzen with this
Why would you lie all around?
They have lower clock speeds; look at the base clocks compared to Threadripper/Whitehaven.
They have lower IPC according to Agner Fog.
Single-thread performance will be higher only on pure single-threaded loads with Turbo Boost 3.0 enabled (which will make it even more of a housefire).
They will have lower MT performance thanks to the ring buses, and they don't even match Threadripper/Whitehaven in core count; this isn't even taking into account that the insane IPC in Zen gives huge gains with SMT.
>16C from AMD is out of the question since it's 180W+, I don't even want to know what this shit needs during load.
There's AMD's 12c too, and these Intel chips will draw more power than AMD; the 7700K draws more under load than an 1800X in every scenario.

Really makes you think, huh?