Are you aware that the bug will not affect gaming, which is 90% of the market for Intel®?

Other urls found in this thread:

phoronix.com/scan.php?page=news_item&px=x86-PTI-Initial-Gaming-Tests
phoronix.com/scan.php?page=article&item=linux-415-x86pti&num=2

But that's not true at all.

because we all know games don't use batched kernel calls, right? RIGHT?

>what is syscalls
why are brazilian favela niggers so desperate to shill intel despite the reality of this horrible fuck up?

I am aware the bug will affect workstations and data centers, which are 95% Intel and are now looking at switching to AMD.

>that graph for 2fps difference
Lmao

>will not affect gaming, which is 90% of the market for Intel®?
Servers?

>90% of the market

But it will affect gaming performance, sweetie. Virtualization workloads will be hit hardest, since the virtual memory abstraction slowdown from this patch is incurred by both the host AND the guest.

But this is going to slow down the operating system's virtual address translation as a whole by a considerable amount, affecting all workloads.

It's customary, apparently.

>this is what incucks with buyer's remorse actually believe

Enjoy your 30-50% performance hit.

With the patches coming to the Linux and NT kernels, the difference is going to be 1 fps.

>142 vs 144 fps
>intel marketing

Intel deserves to go out of business.

...

To work around this Intel bug, every time there's a context switch from user to kernel code, the CPU CACHE WILL HAVE TO FLUSH! A massive loss of performance.
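If you want to put a rough number on what one of those user-to-kernel round trips costs on your own box (before vs after the patch), here's a crude sketch: nothing but a timed getpid() loop, so treat it as illustrative only - the numbers swing wildly by kernel and CPU.

/* crude syscall round-trip microbenchmark - illustrative only */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iters = 10000000;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        syscall(SYS_getpid);   /* forces a real kernel entry; plain getpid() may be cached by libc */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per syscall round trip\n", ns / iters);
    return 0;
}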

>which is 90% of the market for Intel®
Wrong. But the reality is, they have already sold defective CPUs and won't be bothered by it. Everybody will forget about it by the time Intel releases a new gen.

First off, the Intel server market is huge, you fucking retard. Every corporate data center runs Intel as their x86 workload cruncher.
Second of all, you have no fucking idea how bad those context switches will be for applications that swim in arrays needing cache hits for performance, like games.

Games are barely affected. There are exactly two places where games need to do syscalls: taking input and handing off data to the GPU.
Everything else you said is correct.

You're thinking consumer... this is an issue for those in the enterprise and for cloud vendors. Server/Enterprise market is where the real profits are. You fanboi gaymers don't mean much to the bottom line.

Maybe Tetris on the Commodore 64, lad, but you don't think either a complex game engine or the GPU drivers themselves are going to make syscalls?

>the understanding of gaymen

Go back to Sup Forums kid.

Bugs in nVidia hardware too.

It's something you do once per frame. The rest is probably just a few synchronization primitives and that's it.
The benchmarks with the highest impact are all IO-related.

fixed that for you, OP

>cache flush at least once per frame
>acceptable

Please redpill me on this bug

You are going to flush your cache many more times just by operating on different sets of data.

It is only a few FPS. That bar makes it look like a huge number, but it is a 2 fps difference.

>damage control

i.. it's gonna be ok

Don't be an idiot.
phoronix.com/scan.php?page=news_item&px=x86-PTI-Initial-Gaming-Tests

A "frame" literally has nothing to do with it, other than the clear lack of them after this patch is live.

Your frame is comprised of tens of thousands of draw calls, which are fed from the CPU into a GPU buffer. Each draw call requires the CPU to switch to a system context to issue a draw call comprised of GPU-reserved memory addresses in kernel space.

The price is paid on every context switch, and games are clearly heavily affected by this.

It's literally nothing; it will be fixed in Ice Lake. Sup Forums just saw the CEO selling his stocks as some apocalypse scenario and hyped up this issue, which, like you said, is just a bug that people have already made patches for.

I have Skylake and won't be able to afford a new one for a while. How will this affect regular consumers like me? Am I fucked performance-wise or security-wise or something?

>yesss just buy goyim-lake $1999 it's a bargain

>10s of thousands of draw calls
That's quite literally nothing in the context of this bug.

I'm an AMD fanboi but even I know you'll be fine. This mainly impacts enterprise/datacenter workloads. For consumers/gamers, it's not going to impact you.

As I've understood it, Intel fucked up by not doing any security checks on memory accesses the CPU pipeline executes speculatively (e.g. down a path guessed from a branch instruction). That means that, once in a blue moon, which here happens thousands of times per second, a user-level process is able to read data it shouldn't, including kernel memory. For example, a shady malware website could pull this off with really specific JavaScript. The current fix the Linux guys are doing is moving the kernel address space, which normally sits mapped inside every process's address space, out into its own separate address space, so every shitty system call will have to switch address spaces (the overhead being the prime reason most OSes don't do that), which will take its toll on performance.
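The reading part works through cache timing: a load that hits the cache is measurably faster than one that has to go out to memory, so speculatively executed loads leave footprints you can measure afterwards. Minimal sketch of just that timing primitive (the cached-vs-uncached measurement, not an exploit), assuming x86 with GCC or Clang; exact cycle counts vary per CPU.

/* cached vs uncached access latency via rdtscp - the side channel these attacks rely on */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint64_t probe(const uint8_t *p)
{
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    *(volatile const uint8_t *)p;   /* the load we're timing */
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void)
{
    static uint8_t buf[4096];

    _mm_clflush(buf);                          /* evict the line: next access misses */
    printf("uncached: %llu cycles\n", (unsigned long long)probe(buf));

    *(volatile uint8_t *)buf;                  /* touch it: now it hits in cache */
    printf("cached:   %llu cycles\n", (unsigned long long)probe(buf));
    return 0;
}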

I wonder if this affects performance of SharePoint Online services?

So you don't think that draw calls contain addresses that would be stored in the CPU cache?

If my game is rendering a specific object comprised of 500 draw calls, storing addresses relevant to this particular object in the CPU cache is clearly going to significantly improve the instruction execution performance involved in every draw call.

no fancy transitions for you, buddy! kiss your promotion goals for 2018 goodbye thanks to intel!

literally a shit meme
the fuck is anyone doing here

>once per frame

Please give us a detailed technical explanation of how this is a "shit meme"

Arse.

That's not how it works. The benchmarks have already been posted. There's literally no difference.

OP is partially right. There's no way the PTI patches won't be optional because of the performance hit. Gamers will disable PTI because it affects muh fps, even though it doesn't affect it by that much. The games will still work fine, but gamers will be hit by exploits that involve the Intel bug.

Games don't do a kernel-mode transition on every draw call. That would be ridiculously inefficient. They queue multiple draw calls into a command buffer and submit them to the kernel driver all at once, and as far as I know, modern games still try to keep their draw calls in the thousands rather than the tens of thousands. They don't need a transition to access GPU memory either, since it can be mapped to user space. Kernel-mode transitions will still happen multiple times per frame, but hopefully not that often.
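Roughly the pattern, with made-up names (record_draw, gpu_submit, etc.) standing in for whatever the real driver or API exposes - a sketch of the batching idea, not an actual interface:

/* illustrative sketch: record draws in user space, one kernel submission per batch */
#include <stdio.h>
#include <stddef.h>

#define MAX_CMDS 4096

struct draw_cmd { unsigned mesh_id, material_id, instance_count; };

struct cmd_buffer {
    struct draw_cmd cmds[MAX_CMDS];   /* plain user-space memory (often GPU-visible mapped) */
    size_t count;
};

/* recording a draw is just a memory write - no kernel transition */
static void record_draw(struct cmd_buffer *cb, struct draw_cmd c)
{
    if (cb->count < MAX_CMDS)
        cb->cmds[cb->count++] = c;
}

/* hypothetical stand-in for the driver's submit ioctl: ONE kernel entry for the whole batch */
static void gpu_submit(const struct cmd_buffer *cb)
{
    printf("submitting %zu draws in one kernel transition\n", cb->count);
}

int main(void)
{
    static struct cmd_buffer cb;

    /* one frame: thousands of draws recorded in user space, a single submit at the end */
    for (unsigned i = 0; i < 3000; i++)
        record_draw(&cb, (struct draw_cmd){ i, i % 8, 1 });
    gpu_submit(&cb);
    return 0;
}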

Got to side with this guy, but only on the "not how it works" part.

This is one of those things where tons of you have no experience working at a low level with CPU architectures and are spouting off stupidity or half-truths.

Simply put, until we have seen the details of the issue (it's still embargoed by Intel and the University), we won't know how severe it will be.

It looks like it will be devastating for cloud hosts. It will definitely have an effect on normal users, but we don't know how much until we see the details and can assess what mitigations can be put in place that won't require cache flushes.

I think a 5-10% hit in gaming is to be expected, but that's merely an educated guess; I don't know the issue details well enough to make a serious call.

tl;dr - Fucking wait and see, but Intel are in for a world of hurt. Going to be fun to watch this shit show.

To continue what I was saying, I think you'll find Linux is rarely CPU-bound, as the GPU drivers are so shit that they take the burden off the CPU. On Windows, where there have been assloads of optimization and top-end GPUs absolutely saturate Intel chips, I think you'll see serious pain.

The other thing that the benchmarks haven't accounted for is frame latency. I expect that's where a lot of the pain will be felt. It will feel choppier even if the frame rate is high enough.

More interesting results are these:
phoronix.com/scan.php?page=article&item=linux-415-x86pti&num=2

If you've got an NVMe SSD and do anything relatively file-intensive, you're going to be fucking mad.
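The pattern that hurts is lots of small reads, i.e. one kernel entry per 4K block. Crude sketch of that access pattern below - not a real benchmark, the file path is a placeholder, and page-cache hits vs actual NVMe reads will give wildly different numbers.

/* many small pread() calls = one user->kernel transition per block; illustrative only */
#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "testfile.bin";   /* placeholder path */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    long reads = 0;
    off_t off = 0;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (reads < 100000) {
        ssize_t r = pread(fd, buf, sizeof buf, off);
        if (r <= 0) {
            if (off == 0) { fprintf(stderr, "nothing to read\n"); break; }
            off = 0;                                           /* wrap around at EOF */
            continue;
        }
        off += r;
        reads++;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    if (reads > 0 && secs > 0)
        printf("%ld reads, %.0f reads/sec\n", reads, reads / secs);
    close(fd);
    return 0;
}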

Except the majority of multiplayer games that use server/client connections.

How often do you think a packet is sent?

Then provide a technical explanation as to why "that's not how it works".

Those results may be a big deal if the variance is due to unstable frametimes, which is clearly where you will see the negative effects of inefficient instruction execution, but the benchmark is of one CPU/GPU/driver combination - it's technically useless for analysis.

The implementation depends on the driver/hardware, but the fact that draw commands are queued in a buffer before being sent to the GPU seems to me to be irrelevant. Where is your command buffer?

Seems logical to store that in L3 cache, but that's cleared when you switch contexts?

Nobody here is claiming to be an expert; it's useful to argue right or wrong purely to learn more about a subject.

Yeah, the objection isn't to the formation of an incorrect argument, but rather to the assertion of incorrect facts.

Definitely in favour of debate. Not in favour of misinformation.

If you come to Sup Forums for objective truths or accurate information then you should reevaluate your life.

>L3 cache, but that's cleared when you switch contexts
It's not. Even if it were it wouldn't make much of a difference in this particular case.

I'm not saying there won't be an issue; I was just responding to someone who seemed to think there was a syscall on every draw call, which is not true. I do think the number of syscalls is relevant, though. If you really dropped your TLB and your L3 cache on every draw call, almost every access to the command buffer would be uncached, but if you could write a whole command buffer before the caches were dropped, it wouldn't matter as much.

Also, is the L3 cache really dropped when the page table changes? Actual question because I don't know this shit and I thought it was just the TLB that was lost. Apparently some caches use virtual memory?

I really don't.

That said, if my years in the industry can help people learn, then I would like to do that. Also, it helps me refine my skills in some ways.

Could you actually explain in some detail? I'm sure you have some useful thinking, but that post is useless.

My original post about syscalls on draw calls was a layman's explanation of how this bug actually impacts performance on a general workload, not a specific example of how GPU rendering works.

I think the main problem this patch is going to introduce is simply jitter in the processing pipeline. Even if a pure gaming workload is less affected, many other OS processes are. If MySQL is running in the background and every instruction has a 30% penalty because of cache clearing, that has a run-off effect on all other processes.

There are now two sets of page tables: a complete one for the kernel and a minimal one for each process. Context switching is now more expensive because you need to switch between those two sets.
That switch also trashes the TLB, a cache that speeds up the translation between virtual and physical memory addresses, so that also impacts performance.
On Haswell and later a hardware feature called PCID lets you avoid trashing that cache so the impact is not as large.
General purpose caches that store bits of instructions and data are not cleared.
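If you want to know whether your own chip has PCID (and INVPCID), they show up as flags in /proc/cpuinfo. Quick and dirty check, Linux only, nothing more than a substring search:

/* crude check of /proc/cpuinfo for the pcid / invpcid flags */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[8192];   /* the flags line is long */
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "flags", 5) != 0)
            continue;
        printf("pcid:    %s\n", strstr(line, " pcid ")    ? "yes" : "no");
        printf("invpcid: %s\n", strstr(line, " invpcid ") ? "yes" : "no");
        break;
    }
    fclose(f);
    return 0;
}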

Games aren't affected because they do relatively little context switching.

>Most of the internet consists of servers pulling lots of files at once
>Bug affects draw calls mostly
>Many multiplayer games run on servers
>It won't affect games

Read the fucking thread, retard.

Yes it does.
Damage control is strong with fanboys right now.

I already disabled updates

Wtf is this marketing, is this what marketing chads get paid for?

> can read cached data, magically
> malware which already infected my computer can now read kernel memory. can't write to it.

seems redundant to me

>people posting linux benchmarks expecting the windows patch to be exactly the same

are you people retarded? Just because Linux doesn't seem to be affected doesn't mean that Windows won't be.

>create a server on aws
>read someone's private keys

>tfw I bought a laptop with an Intel processor and an NVMe SSD for thousands of dollars last year

are intel shills the new iShills?

>execute javascript
>you now have root

The computational complexity of JavaScript being interpreted precludes this. Also, you're getting random-ass cache data, one byte at a time. Even under an ideal 100% capture rate, who is even capable of making sense of that data when it changes all the time?

I'm sure you know better than the people pulling all-nighters to get this merged into the kernel.

people said the same thing about padding-oracle attacks against SSL/TLS until someone went and wrote a proof-of-concept that showed it was practical to exploit.

I don't understand, does EVERY Intel CPU in existence have this bug? Even older models?

Everything back to Nehalem at least. Older CPUs are likely affected too, but as far as I know nobody's bothered to test them. Older meaning not just Core 2, but possibly Netburst and P6, too.

Once the patches are out and the full details are public I'm sure there'll be people from /retro/ hopping on their Pentium 2s to find out just how far back it goes.

...

Good bait mate

yes

the best part is that morons make purchase decisions based on these graphs

OP got cucked in the simplest, most non-aggressive way possible
fpbp

>m-mom look the blue line is longer and therefore much better

20-60 times a second

95% of cloud and server infrastructure is based on Intel chips. Yeah, biggest market, yeah, gaming...