What workloads can saturate a PCIe x16 slot?


> PCIe v.2 8 GB/s (×16)
> PCIe v.3 15.75 GB/s (×16)
> PCIe v.4 31.51 GB/s (×16)

even modern GPUs playing AAA games don't use the full bandwidth of PCIe v.2, so what's the point of rolling out v.4 on desktops and laptops???
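Those x16 figures fall straight out of the raw transfer rate times the line-code efficiency. A quick sketch (per-direction numbers):

```python
# Deriving the quoted x16 throughput from transfer rate and encoding overhead.
def pcie_x16_gbps(gt_per_s, code_eff):
    per_lane_gb_s = gt_per_s * code_eff / 8  # GT/s -> GB/s per lane
    return 16 * per_lane_gb_s

gens = {
    "2.0": (5.0, 8 / 10),      # 5 GT/s, 8b/10b encoding
    "3.0": (8.0, 128 / 130),   # 8 GT/s, 128b/130b encoding
    "4.0": (16.0, 128 / 130),  # 16 GT/s, 128b/130b encoding
}
for gen, (rate, eff) in gens.items():
    print(f"PCIe {gen} x16: {pcie_x16_gbps(rate, eff):.2f} GB/s")
# PCIe 2.0 x16: 8.00 GB/s
# PCIe 3.0 x16: 15.75 GB/s
# PCIe 4.0 x16: 31.51 GB/s
```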

>pcie and graphics cards are only used for gaymes

>even modern GPUs playing AAA games don't use the full bandwidth of PCIe v.2
Actually, surprisingly, some now do.

Look at the Fury/Fury X - the 4GB of VRAM limits them enough in some modern games that they're constantly swapping pages over that PCIe link - the faster the link, the better for those cards.
Of course the real solution isn't to increase link bandwidth, it's to increase VRAM so the swapping doesn't need to occur.
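Toy model of the swapping described above - the per-frame overflow number is an assumption, not a measurement, but it shows how fast the traffic adds up:

```python
# If each frame touches assets that don't fit in VRAM, the card must
# stream that overflow over PCIe every single frame.
fps = 60
overflow_gb_per_frame = 0.25  # assumed: 0.25 GB of textures paged per frame
link_need = fps * overflow_gb_per_frame
print(f"needs {link_need:.0f} GB/s of link bandwidth")  # 15 GB/s
```

That already exceeds PCIe 2.0 x16's 8 GB/s, which is consistent with these cards benefiting from a faster link.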

>so what's the point of rolling out v.4 on desktops and laptops???
Power savings, actually. PCIe 4.0 x16 will be nice, sure, but the real reason is that most devices won't need anywhere near that much bandwidth and can drop lanes as appropriate.

Rendering an image of your mother

does anyone know how much bandwidth rendering takes? (Blender, Maya)

Are there any 31.51 GB/s SSDs out yet? :^)

See above - pretty much the same situation. Lots if you're rendering a highly complex scene (with lots of textures - they're the killer; meshes take up fuck-all room) on a low-VRAM card.
If you've got enough VRAM to cover all the assets required for the frame being rendered, then really bugger-all bandwidth gets used after the VRAM is filled.

yes > /sys/devices/pci

Yes.

But you can't afford one.
>And if I've somehow misjudged you and you really can personally afford one, well holy fuck what are you doing in this place - laughing it up at poor people?

Wew lad
How fast would file transfer be over a 10 gigabit connection between 2 computers?

It would be done before my dick could even get hard, hnnng
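Line-rate arithmetic for the 10 GbE question, ignoring protocol and disk overhead (the 100 GB file size is just an example):

```python
# 10 Gb/s = 1.25 GB/s at line rate; real transfers land somewhat below.
link_bps = 10e9
bytes_per_s = link_bps / 8      # 1.25 GB/s
file_gb = 100                   # example file size (assumed)
secs = file_gb * 1e9 / bytes_per_s
print(f"{file_gb} GB in ~{secs:.0f} s")  # ~80 s
```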

kek

Apple would be the first to create them.

>...can drop lanes as appropriate

please explain

...

maybe one of these fuckers - for machine learning? is bandwidth from host to gpu a bottleneck in machine learning?
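Back-of-envelope for the host-to-GPU question above; the batch shape and the 15.75 GB/s PCIe 3.0 x16 figure are assumptions, not measurements:

```python
# One training batch of 224x224 RGB float32 images copied over PCIe 3.0 x16.
batch, h, w, c, dtype_bytes = 256, 224, 224, 3, 4
n_bytes = batch * h * w * c * dtype_bytes  # ~154 MB per batch
copy_ms = n_bytes / 15.75e9 * 1e3          # transfer time at 15.75 GB/s
print(f"{n_bytes/1e6:.0f} MB -> {copy_ms:.1f} ms per batch")
```

Against GPU step times of tens of milliseconds, a copy like that can usually be hidden by prefetching the next batch while the current one computes, so the link is rarely the bottleneck unless compute per batch is tiny.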

PCIe 4 will mostly be for servers and HPC (halving power consumption) and high-performance storage. For games it won't make a difference; maybe if you saturate your VRAM with textures a swap could hit the PCIe bus hard, but every engine nowadays has some form of streaming to mitigate this, to the point that even PCIe 2 isn't saturated. The only gaming-related advantage PCIe 4 will offer is that you'll no longer require a bridge for SLI.

If a device has, for example, a full x16 electrical connection but at the current moment only needs a single lane's worth of bandwidth, it can disable the other 15 lanes to save power.

This isn't actually new - PCIe 2.0 (and onward) can do this (IIRC 1.1 can too) - but the extra bandwidth per lane of 4.0 means more devices will be able to settle at a single lane.
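On Linux you can watch this happen per device via sysfs. A sketch - the attribute names (`current_link_width`, `current_link_speed`, `max_link_width`) are the real kernel ones, but the device address in the default is just an example:

```python
from pathlib import Path

def link_state(dev="/sys/bus/pci/devices/0000:01:00.0"):
    """Read a PCIe device's negotiated link state from its sysfs directory."""
    d = Path(dev)
    def read(name):
        p = d / name
        return p.read_text().strip() if p.exists() else "?"
    return {"speed": read("current_link_speed"),       # e.g. "8.0 GT/s"
            "width": read("current_link_width"),       # e.g. "1" when idle
            "max_width": read("max_link_width")}       # e.g. "16"
```

An idle GPU will often report a width of 1 (or a dropped speed), then renegotiate back up under load.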

This is why you fall for the 64GB RAM meme. Set up your computer with a UPS and a ramdisk that saves to nonvolatile storage whenever the UPS kicks in. Cheap 34GB/s SSD.

Most people would be better off not doing that - even a UPS can't stop bitflips.

Now if you're talking about a real computer with *registered* ECC DRAM, then you're definitely on the right track.
>Plus it probably won't be a wimpy 34GB/s then either, given that on most every platform currently supporting RDIMMs, memory is quad-channel interleaved.
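Theoretical DRAM bandwidth is channels × transfer rate × 8 bytes per transfer; for a quad-channel DDR4-2666 RDIMM setup (assumed config, pick your own speed grade):

```python
# Peak theoretical bandwidth for quad-channel DDR4-2666 (64-bit channels).
channels, mt_s = 4, 2666e6
bw_gb_s = channels * mt_s * 8 / 1e9
print(f"{bw_gb_s:.1f} GB/s")  # 85.3 GB/s
```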

GPGPU is the only workload I can think of (pic related)

gaming GPUs - no
network cards - no
SSDs - no

thx

How likely is that to happen anyway? Like if I store a random 1GB file on a ramfs tonight and sha512sum it, how likely is it that one of the 8 billion bits will have flipped by the time I wake up and the hash will have changed?

more likely than you think. google some recent studies.

If it were that likely, then all RAM would be ECC.

The Google study shows lower than I thought: 8% of DIMMs had a bit flip in a year. It doesn't seem to be random either - the errors occur in the same DIMMs over and over. If a DIMM experienced a bitflip last month, it's 35-228 times more likely to have another this month.
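Taking that 8%/year figure and pretending errors were Poisson (which the study says they aren't - they cluster on bad DIMMs), the implied chance of at least one flip on a given DIMM over one 8-hour night:

```python
import math

# 8% chance of >=1 error per year -> implied Poisson rate, scaled to 8 hours.
p_year = 0.08
rate_per_year = -math.log(1 - p_year)                # ~0.083 events/year
p_night = 1 - math.exp(-rate_per_year * 8 / (365 * 24))
print(f"{p_night:.1e}")  # ~7.6e-05
```

So on an average DIMM your overnight hash is overwhelmingly likely to survive; on one of the bad DIMMs the study identifies, the odds are far worse.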

I was expecting something like one bit flip per month per GB or something. I'm kind of sad that it isn't more random and more common. I wanted to use my ram as a particle detector.