Linus: Ryzen only has 24 PCIe lanes total and 2400MHz RAM

youtu.be/96S9YXXexu8

Ryzen will have problems with 4k display. Even their highest-end chips. How pathetic; this, coupled with the poor gaming benchmarks and the fact that it costs more than the Intel equivalent, makes it a dud again. So sad, my friends. Intel wins again for all of your gaming and high-end content producing needs!

Other urls found in this thread:

youtube.com/watch?v=JaTvSI0K_vA
asus.com/us/Motherboards/ROG-CROSSHAIR-VI-HERO/
forum-3dcenter.org/vbulletin/showpost.php?p=11059897&postcount=2261
forum-3dcenter.org/vbulletin/showpost.php?p=11209020&postcount=2402
techpowerup.com/reviews/AMD/R9_Fury_X_PCI-Express_Scaling/
pugetsystems.com/labs/articles/Impact-of-PCI-E-Speed-on-Gaming-Performance-518/
pugetsystems.com/labs/articles/Titan-X-Performance-PCI-E-3-0-x8-vs-x16-851/
forum-3dcenter.org/vbulletin/showpost.php?p=11227774&postcount=2435
guru3d.com/articles-pages/pci-express-scaling-game-performance-analysis-review,4.html
forum-3dcenter.org/vbulletin/showpost.php?p=10903162&postcount=2085
amazon.co.uk/gp/product/B06X9F3FKP/ref=oh_aui_detailpage_o01_s01?ie=UTF8&psc=1
amazon.co.uk/Corsair-CMK32GX4M4B3200C16-Vengeance-Performance-Desktop/dp/B017NW5RW8/ref=sr_1_3?s=computers&ie=UTF8&qid=1488157524&sr=1-3&keywords=DDR4 3200 MHz
twitter.com/AnonBabble

meanwhile IRL "gamer" children won't notice a difference whatsoever
meanwhile half of sales are generated by shilling and marketing (see also: Sup Forums right now)

24 PCI-E 3.0 lanes is enough to feed any two GPUs and two PCI-E SSDs. If you want more than that the 16 core workstation Ryzen variants will have 48, and the 32 core server variants will have 96.
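Rough lane budget, just napkin math assuming the usual x8/x8 GPU split and x4 per NVMe drive:
# Hypothetical lane budget for a 24-lane Ryzen CPU (illustrative only)
gpus = [8, 8]       # two GPUs drop to x8/x8 when both slots are populated
nvme_ssds = [4, 4]  # two PCIe SSDs at x4 each
print(sum(gpus) + sum(nvme_ssds))  # 24 -> exactly fills the 24 usable lanes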

This is not exactly true. I mean, the CPU has the support for this stuff, but motherboard manufacturers are autistic and, at least in the case of mATX boards, it's hard to find ones with x8+x8+x4 PCIe slots.
I have to use two GPUs (one for Linux, one for KVM GPU passthrough) and an SFP+ 10Gb card, and I have a mATX case, so I had to choose Intel because of the integrated graphics (I only have to buy a single GPU), the amount of PCIe lanes, and Thunderbolt 3.
Sorry AMD, I wanted to give you my money, but you didn't give me upgradability

what's up with all the Linus postings lately
now someone is gonna post the AMD lady

Moving the goalposts again

youtube.com/watch?v=JaTvSI0K_vA
>R7 1800X review kits came with DDR3 3000Mhz RAM
What did AMD mean by this?
>Trusting a guy that's proven time and time again his knowledge of computing is only one step above normie tier

The kit is made of two modules of RAM, the issue only exists when you're using four modules

The kit had 2x8 3000MHz DDR4 memory. It literally says so on the box of the RAM.

What are you talking about?

OP that's odd. The board I'm planning to buy says DDR4-3200+ in the specs.

asus.com/us/Motherboards/ROG-CROSSHAIR-VI-HERO/

Where are you getting your info from?

Gaymers hate Linux because this platform doesn't allow retards.

oh look, an intellectually superior professional

a man who doesn't waste his time with childish things

Don't they have more PCIe lanes than Intel's i5/i7 7700K? Just asking.

Yes, but Intel's chipsets have 20+ PCIe 3.0 lanes, while AMD's have 6-8 PCIe 2.0 lanes.

Is this limited by the chipset (x370) or the MOBO manufacturer?

Could we see an upgrade to the chipset in 2 years without needing to replace the am4 platform?

And do I seriously need 8 more PCI express lanes if I'm just running one Titan XP?

>24 PCI-E 3.0 lanes is enough to feed any two GPUs
No it's not, SLI can get seriously bandwidth starved at 4K with TAA. It happens in quite a few modern games, DOOM for instance doesn't scale at all on x8/x8 SLI.

It's quite disappointing to me that they don't have at least 32 lanes, not even on their $500 flagship. That may be half the price of Intel's ridiculous shit, but it's still a fair chunk of change for a CPU and it's inadequate for high-end gaming as it stands. Good thing PCIe switches can actually help in this specific situation, because SLI requires bandwidth from one card to the other and not to the CPU, so a mobo with a PLX switch should work just fine, if anyone actually makes one of those.
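For reference, the raw link numbers work out like this (back-of-the-envelope: signalling rate times encoding efficiency, ignoring protocol overhead):
# Rough usable PCIe bandwidth per link (GB/s)
def pcie_bandwidth_gbs(gen, lanes):
    rates = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}  # (GT/s per lane, encoding efficiency)
    gt_s, eff = rates[gen]
    return gt_s * eff * lanes / 8  # bits -> bytes
print(pcie_bandwidth_gbs(3, 16))  # ~15.8 GB/s
print(pcie_bandwidth_gbs(3, 8))   # ~7.9 GB/s
print(pcie_bandwidth_gbs(2, 16))  # ~8.0 GB/s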

you are comparing 3.0 and 2.0, cunt

And yet Ryzen's 2400MHz RAM has a higher throughput than Intel with 2600

You can still play games and not be an idiot, Gilligan.

If you weren't such an idiot gamer you would know that.

There is literally no fucking difference between 16 lanes for each GPU and 8. Not even PCIe 2.0 was saturated let alone 3.0 now. You don't know what the fuck you're talking about.

Out of curiosity, how many pcie lanes does kaby lake have?

>trusting linus jew tips
>watching linus jew tips

>>>/retarditt/

24

>DDR3

Stop lying, you're embarrassing.

If you pull it out of your ass, it must clearly be true
forum-3dcenter.org/vbulletin/showpost.php?p=11059897&postcount=2261
forum-3dcenter.org/vbulletin/showpost.php?p=11209020&postcount=2402

16 lanes from the CPU, you can easily look this up on ARK, it's specified there. The 6800K has 28 and the 6850K and up all have 40.

Daily reminder he wears sandals with socks

techpowerup.com/reviews/AMD/R9_Fury_X_PCI-Express_Scaling/

pugetsystems.com/labs/articles/Impact-of-PCI-E-Speed-on-Gaming-Performance-518/

pugetsystems.com/labs/articles/Titan-X-Performance-PCI-E-3-0-x8-vs-x16-851/

Oh my.

And a 6800K also only has 28 PCIe lanes. The X99 chipset provides 8 PCIe 2.0 lanes independently of the CPU. Not sure if the Z chipsets provide any lanes, while X370 provides 8x PCIe 2.0, B350 6x, and A320 4x.

Those are comparisons of SLI on/off, and the difference between the different SLI bridges. Nvidia GPUs do not communicate over the PCIe bus like AMD GPUs do. I'd really like to know what is so special about the high bandwidth bridge that gives so much better performance than simply using 2 flex bridges.

Nvidia claims it's because it uses an actual PCB instead of a flex cable. But there's something else going on where the GPUs recognize the HBB and use both SLI connectors to communicate, whereas there's no difference (and sometimes negative scaling) between one and two flex bridges.

Can you please read? I had this same discussion with you last week and have seen another of your posts recently, and here you are again samefagging the virtues of PCIe 3.0 x16 vs x8 with your TXAA'd DOOM. It seems you're stuck in some loop, and if you're going to post some German forum that's obsessing over this tech for the wrong reasons, at least get your assumptions correct, please. As discussed previously, AA isn't communicated over the SLI bridge.

OP, do you have more sources? A single Lincucks link is a bit light.

Those are both me. I posted right as the thread updated and saw the idiot proposing that there would be an issue with PCIe bus bandwidth using incorrect sources, so I replied.

still only ran at 2400mhz

damage control

None of those games use temporal AA, so they aren't the issue. Even so, Heaven at 4K surround is ~40FPS slower on x8/x8 compared to x16/x16, literally in the article you posted.

Please actually look at the charts carefully. Here, I have gone through the effort to make it obvious to you for TW3, you can look at the rest for yourself.

Also: forum-3dcenter.org/vbulletin/showpost.php?p=11227774&postcount=2435
>once I turn AA on the PLX system comes 10FPS ahead, that's ~15% faster and SLI scaling improves from 62% to 86%.


You haven't discussed this with me because I haven't posted this before. DOOM also doesn't use TXAA; TXAA is an NVIDIA-specific AA algorithm that also includes a temporal element. You're also pretty ignorant if you think SLI only needs the 3GB/s that an HB bridge can provide.

>pugetsystems.com/labs/articles/Titan-X-Performance-PCI-E-3-0-x8-vs-x16-851/
>x8/x8 configuration performs better in some situations
But overall the only time it made a difference was in 11520x2160 resolution on the lowest graphical settings, and you can output 11520x2160 on x8/x8 lanes at 60fps without issue.

I will probably not be running anything at that resolution/framerate for a very long time.

>Ryzen will have problems with 4k display
Erm, what? It's a CPU. You might want to look into what those do.
>poor gaming benchmarks
Have you got access to something we don't? Benchmarks so far have been pretty good.

3/10 shill, would not hire.

So this is some next-level Intel shilling disinfo; this is all according to one forum post. So "it is recommended" (by some forum poster) to run 4 DIMMs max at 2400MHz or 2 at 3200MHz.
Maybe it's some XMP profile issue? This could be fixed via a UEFI/BIOS update, as modules usually need to be approved and tested?

Next, PCIe lanes aren't mentioned in the video, and "problems with 4k display" isn't mentioned either (nothing to do with RAM)

Either way where do you see

Ryzen is an SoC, and the on die south bridge provides 24 usable PCI-E 3.0 lanes. The mobo chipsets add more. An X370 system isn't wanting for anything. They have shitloads of SATA, USB, NVMe, and GP PCI-E lanes.

Ryzen's IMC is not limited to 2400MHz DDR4. AMD will have partners releasing Ryzen-branded DDR4 kits ranging from 3200MHz to 3600MHz. The only hitch is that some boards need a microcode update to get high speed DIMMs stable if you have 4 installed. Gigabyte already has their fix posted on their site.
Sandra memory benchmarks show incredibly high throughput compared to theoretical max. It looks like AMD's memory controller is now much better than Intel's.
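For context, the theoretical peak those numbers get compared against is just channels x bus width x transfer rate; a quick sketch of the math:
# Theoretical peak for a dual-channel DDR4 setup (GB/s), no efficiency factor
def ddr4_peak_gbs(mt_per_s, channels=2, bus_width_bits=64):
    return channels * bus_width_bits * mt_per_s / 8 / 1000  # MT/s -> GB/s
print(ddr4_peak_gbs(2400))  # 38.4 GB/s
print(ddr4_peak_gbs(3200))  # 51.2 GB/s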

Reminder that intel shills are a real thing.

Kill yourself if you don't think the CPU is relevant to rendering speed. Where do you think the GPU gets information from?

>Reminder that intel shills are a real thing.
Doubtful. Friendly reminder that Intel rigged the market for the past ten years and that Stockholm syndrome can make people do stupid things.

It's not outside the realm of possibility that a company using dishonest methods to compete such as Intel would do it, though.

Do you know of any benchmarks that directly measure bus utilization to show that it's a bandwidth issue?

( like here guru3d.com/articles-pages/pci-express-scaling-game-performance-analysis-review,4.html )

How do you get rendering speed from "problems with 4k display"?

Also, even if we're to assume OP means in games, higher resolutions in games mean that the GPU is taxed more. That's why CPU benchmarks usually run at low resolutions, to remove GPU bottlenecks. We went over this when AMD tried to convince people that the FX series was a good buy with 4k gaming benchmarks. The GPU becomes the bottleneck there.

What is the best option for rendering and previewing in Adobe's retarded software? Intel or AMD?

I'm legitimately wondering if Linus knows he has to turn on the XMP profile in the BIOS

I'm considering a very similar build with an e3 v5. Do you regret the mATX case?

Some games do benefit from more PCI-E lanes.

forum-3dcenter.org/vbulletin/showpost.php?p=10903162&postcount=2085

>only supports up to 2400mhz
>motherboards out there that support up to 4000mhz

Okay, Intel. Have fun with your attempts at damage control. Remember to sage shill threads if you must reply at all.

Didn't Asus release a statement saying the RAM problem can be fixed with some sort of microcode update? This is literally nothing

Nope, I don't think I've ever seen a benchmark directly measuring PCIe bus load, just benchmarks measuring the side effects (FPS difference) of increasing or decreasing PCIe bandwidth. This thing flies completely under the radar, most reviewers don't seem to look at it at all and a lot of high-end SLI reviews use $1000 CPUs with 40 lanes anyway, so it's a non-issue there.

No, but you can write your own using Nvidia's APIs; the functionality is out there.
Pretty much any OC software like MSI AB, ASUS, EVGA, etc. uses Nvidia's API to do its stuff.
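If anyone wants to roll their own, NVML (the library behind nvidia-smi) exposes a PCIe throughput counter; a minimal sketch with the pynvml bindings (not claiming this is what AB or the vendor tools use internally):
import time
import pynvml
# Minimal sketch: poll PCIe throughput via NVML.
# Needs the pynvml package; the counters come back in KB/s over a short sample window.
pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
for _ in range(10):
    tx = pynvml.nvmlDeviceGetPcieThroughput(gpu, pynvml.NVML_PCIE_UTIL_TX_BYTES)
    rx = pynvml.nvmlDeviceGetPcieThroughput(gpu, pynvml.NVML_PCIE_UTIL_RX_BYTES)
    print("TX %.1f MB/s  RX %.1f MB/s" % (tx / 1024, rx / 1024))
    time.sleep(1)
pynvml.nvmlShutdown()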

There is some Steam Workshop modder that made an overlay like RivaTuner and added PCIe bus usage. I don't remember which game it was, though; I think it was a Final Fantasy-like or some sort of MMO. Maybe MSI AB's RivaTuner can do it already?

Reminder that XPOINT is going to DESTROY SSDs, and oh will you look at that. Intel. Only.

If pic related isn't you, then discussing DOOM, SLI, and some form of AA is in vogue. Either way, get over your x8/x8 vs x16/x16 SLI, ffs.

They support up to 2666 natively, and look at this budget-friendly B350 motherboard that supports up to 3200MHz with overclocking.

That test where there was a 40FPS difference was on the lowest possible settings on a 4K surround setup to get as much FPS as possible and stress the lanes/CPU. It's the only time there was any significant difference. In fact, x8/x8 at times performed better than x16/x16. The Titan X Pascal is also significantly more powerful than a 1080, and will undoubtedly be making more draw calls with a greater strain on the PCIe bus than a 1080. Just saying. When running Heaven in 1080p with max settings, all three configurations were well in excess of 170 FPS, with x8/x8 coming out on top. There are some situations where having a full x16/x16 is beneficial, and even significantly so. But the majority of the time that is not the case.

Also note in your sources that where temporal AA is used with a single card, it has almost no performance impact whatsoever. This is an issue on Nvidia's side and how they're managing that particular form of AA in SLI configurations.

Q2
Optane
My
Body
Is
Ready

>Linus

Sage and hide

Intel will look for ammo in the slightest thing and blow it out of proportion without even weighing the overall picture; you can't underestimate the energy of despair.

If this ever happened to Intel or Nvidia, they would sue Lincucks into oblivion at such a critical time, but this shows again that AMD just focuses on what matters. Truth be told, most people want reviewers to think for them; the "fake information" gets parroted around and it becomes news when it's not.

>XPOINT is going to DESTROY SSDs,
Still not going to be significant in any way. We're well beyond the point of diminishing returns.
>Intel. Only.
For the RAM version, yeah. Then again, that means limiting your RAM bandwidth.

Do you honestly think that the rest of the market isn't also working on next-gen memory interfaces? NVMe PCIe x4 storage is common on the new boards of both AMD and Intel now. NVMe PCIe x8 is already in the works. Optane is just x16; Intel is jumping ahead, but it'll be Intel-only for half a year at best.

>unrelated conversation on SSD technology

splitting hairs and strawmanning

so will an 1800X work with this

amazon.co.uk/gp/product/B06X9F3FKP/ref=oh_aui_detailpage_o01_s01?ie=UTF8&psc=1

and this

amazon.co.uk/Corsair-CMK32GX4M4B3200C16-Vengeance-Performance-Desktop/dp/B017NW5RW8/ref=sr_1_3?s=computers&ie=UTF8&qid=1488157524&sr=1-3&keywords=DDR4 3200 MHz

375GB Optaint drive pulls up to 14W.
The tech is a dead-end failure with no future. Higher capacity drives are going to depend on process shrinks to bring power down, and that isn't sustainable. Intel promised cheap high-capacity drives and they utterly failed on that front in every conceivable way.
They also promised high speeds, and compared to current NVMe SSDs they don't have an advantage in anything but IOPS. SSDs will keep improving; Floptane, however, will not. Phase-change memory is too power hungry. It will never scale like NAND.

yes

den y ppl angry

because a lot of cash is in play and companies employ people to keep shareholders happy

Afterburner already has the ability to monitor bus usage, actually.

No, that's not me. You know that acting like a retard and ignoring factual evidence doesn't actually change the truth though, right? Why are you so adamant about ignoring factual evidence and benchmarks? Why the fuck are you taking a discussion about SLI performance so personally?

>That test where there was a 40FPS difference was on the lowest posible settings on a 4k surround setup to get as much FPS, and stress the lanes / cpu
Yes, but it very clearly shows that SLI does in fact depend on the PCIe bus; it completely invalidates the claim, made ITT, that SLI only transfers data over the SLI bridge. In the single-GPU test the performance difference is ~1FPS, and it increases to ~40FPS in SLI mode. SLI is clearly suffering much more from the lack of bandwidth compared to single-GPU.

>This is an issue on Nvidia's side and how they're managing that particular form of AA in SLI configurations.
Do you know what TAA does, in basic principle? It uses data from the previous frame to improve image quality in the current frame. SLI renders frames in AFR (*alternate* frame rendering). When GPU2 wants to render its frame, it needs to pull data from GPU1 if TAA is in use and vice versa. That's the problem, depending on the game, algorithm and resolution that may or may not hit a bandwidth wall.
(The issue isn't only with NVIDIA - AMD for instance recommends you completely disable AA in TW3 if you're using CF)
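Back-of-the-envelope on why that can hit a wall (pure illustration; the per-pixel byte count is an assumption, real engines vary):
# Rough cross-GPU traffic if AFR has to ship last-frame data to the other card for TAA
width, height = 3840, 2160
bytes_per_pixel = 8   # assumed: colour + motion vectors, just for the sketch
fps = 100             # combined SLI frame rate
per_frame_mb = width * height * bytes_per_pixel / 1e6
print(per_frame_mb, per_frame_mb * fps / 1000)  # ~66 MB/frame -> ~6.6 GB/s, close to x8 PCIe 3.0's ~7.9 GB/s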

>linus
Kill yourself ... No one takes that faggot seriously.