Why is vsync allowed to exist, and why is it automatically enabled in some games?

Because screen tearing is awful and games would be unplayable without it.

Because of how display devices developed over time.
We're slowly fixing that with adaptive sync.

I'd rather have screen tearing than half a second of input lag.

Fuck off.
Tech-illiterate casuals can go back to wherever they came from.

I don't know about games because I'm not an underage manchild,
but vsync is relevant to my interests because my monitors are in portrait mode, and tearing is extremely visible when scrolling in that orientation.

Why doesn't the monitor just display the last fully rendered frame?

>half a second of input lag from vsync
Things that never happen.

>tech-illiterate casuals can go back to wherever they came from
>vsync off
>tearing
>vsync on
>no tearing
Also applies to scrolling, moving windows around, and anything else with graphical elements, not just games.

Because that would be worse, and a monitor has no way to know whether a frame is going to be fully rendered when it starts drawing it.

>a monitor has no way to know whether a frame is going to be fully rendered
Suppose we have two buffers. The GPU renders a frame into buffer 1 and ticks a "rendered" box.
It then renders the next frame into buffer 2, ticks that buffer's box, and unticks buffer 1's.
The monitor just looks at which buffer has a ticked box and draws that image.
Would this work, and if it does, why aren't we using it?
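For what it's worth, here's a toy Python sketch of the scheme described above (all names are made up). As the replies below point out, this is more or less what double-buffered vsync already does on the GPU side:

# Two buffers; the GPU marks whichever one holds the last complete frame,
# and the display always reads from the marked buffer.
buffers = [None, None]   # frame numbers stand in for pixel data
ready = 0                # index of the buffer holding the last complete frame

def gpu_render(frame):
    """Render into the buffer that is NOT currently marked as ready."""
    global ready
    work = 1 - ready
    buffers[work] = frame    # "draw" the frame
    ready = work             # tick this buffer's box, untick the other

def display_scanout():
    """The display simply draws whatever buffer is marked as ready."""
    return buffers[ready]

for frame in range(1, 4):
    gpu_render(frame)
    print("display shows frame", display_scanout())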


Why don't monitors start rendering from the center outward?

>half a second

>setting your screen to 2Hz

What?

That is EXACTLY how vsync works.

It has something to do with how the 3D image is rendered in real time.

Why is it so SLOW then?
I feel a really noticeable input lag with vsync.
Also, my method somehow doesn't limit fps, so I doubt that's how vsync works.

Because if the monitor has a fixed refresh rate it will only look at those "checkboxes" once at the start of a cycle.
i.e. if you have a 60 Hz screen it will only check 60 times per second.

btw: why do you call it "input lag"?

It's clearly output lag.
It doesn't affect the game physics, it only shows you a delayed image.


Output/input depends on the perspective.

I know, but with vsync off and 200 fps on a 60 Hz screen I don't get any noticeable input lag.
The monitor just draws what's in the buffer.
I really doubt checking whether a box is ticked every 1/60th of a second adds that much delay.
I've never heard of "output lag", but that might just be because 99% of people (including me) are spewing guesses around.

>I really doubt checking whether a box is ticked every 1/60th of a second adds that much delay

It doesn't.

The "delay" is simply the amount of time your screen takes to draw a frame:
1/60s in your example.

Or more precisely, at 200 fps and 60 Hz:
- the top of your screen will show a frame that is at most 1/200 s old, regardless of vsync.
- the bottom of your screen will also show a frame that is at most 1/200 s old without vsync, but with vsync it will be between 1/60 s and (1/60 + 1/200) s old.
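Plugging those numbers into a quick Python sanity check (idealized, ignoring the monitor's own processing time):

# Frame age at the top and bottom of a 60 Hz screen for a game rendering
# at 200 fps, with and without vsync.
render_period = 1 / 200    # 5 ms between finished frames
refresh_period = 1 / 60    # ~16.7 ms to scan out one frame

ages = {
    # Without vsync the buffer is replaced as soon as a frame is done,
    # so any line being drawn is at most one render period old.
    "no vsync, top":    render_period,
    "no vsync, bottom": render_period,
    # With vsync the buffer only changes at the start of a refresh,
    # so the frame ages as the scanout walks down the screen.
    "vsync, top":       render_period,
    "vsync, bottom":    refresh_period + render_period,
}
for name, age in ages.items():
    print(f"{name}: at most {age * 1000:.1f} ms old")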

Monitors weren't that smart. The GPU just started sending out the buffer with a tick on it, regardless of whether the screen was halfway through a draw, nearly done, or just starting. You'd really need two buffers on the MONITOR: the GPU sends a frame, it goes into buffer one, the monitor starts drawing. The GPU sends another frame, buffer two. The monitor, once done with the first frame, draws from buffer two. The GPU sends to buffer one, the monitor isn't finished drawing, the GPU has another frame ready, writes it to buffer two. The LCD is ready to draw again, skips the frame in buffer one, and draws what's in buffer two.


Why OP sees a delay:
The screen starts drawing a frame, has another one in its buffer, the GPU is rendering the next, and you push a button.
First the current frame finishes drawing and the screen starts drawing what's in buffer two; meanwhile the GPU has finished rendering and has put a frame in buffer three. Now the GPU starts rendering your button press. The screen finishes its frame and starts drawing what's in buffer three, and now your GPU has finished rendering and puts the frame in buffer four. The screen finishes what's in buffer three and starts drawing what's in buffer four.
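A rough, idealized sketch of that queue in Python (the buffer count here is illustrative, not from any real driver):

# With vsync and a deep buffer queue, an input only reaches the screen after
# every frame already in flight has been scanned out first.
refresh_ms = 1000 / 60        # one scanout takes ~16.7 ms at 60 Hz
frames_already_queued = 3     # e.g. buffers two, three and the frame being drawn

# The button press lands in the NEXT rendered frame, which then waits behind
# everything already queued before the screen starts drawing it.
delay_ms = (frames_already_queued + 1) * refresh_ms
print(f"input-to-screen delay: about {delay_ms:.0f} ms")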

OP sees a delay because there is a delay. If engineers were smarter about it, they would treat the pixels like a block of memory that can be updated without these unpleasant effects.

The screen isn't smart but the GPU is.

The GPU knows exactly when the screen is done drawing one frame.
The GPU can decide to wait until the screen is done before replacing the buffer (vsync) or replace the buffer while the screen is drawing (no vsync).

Moving the buffers from the GPU to the screen would only cause extra delays.
Right now the screen draws what it's receiving immediately.

The bottleneck is the refresh rate of the screen.

Honestly, gamers should just invest in a screen with a higher refresh rate - problem solved.
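A toy Python render loop illustrating that decision (the timing values are made up; 5 ms of "rendering" stands in for a 200 fps game):

import time

REFRESH = 1 / 60      # 60 Hz refresh period
RENDER = 0.005        # pretend each frame takes 5 ms to render (200 fps)

def render_loop(vsync, frames=5):
    """With vsync the finished frame is held until the next simulated vblank;
    without it, the frame is presented the moment it is done."""
    start = time.perf_counter()
    for frame in range(frames):
        time.sleep(RENDER)                        # "render" the frame
        if vsync:
            now = time.perf_counter() - start
            next_vblank = ((now // REFRESH) + 1) * REFRESH
            time.sleep(next_vblank - now)         # hold the frame until the vblank
        t = (time.perf_counter() - start) * 1000
        print(f"vsync={vsync}  frame {frame} presented at {t:6.1f} ms")

render_loop(vsync=False)   # frames appear every ~5 ms
render_loop(vsync=True)    # frames appear every ~16.7 ms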

If vsync is already that smart, what the hell are FreeSync and G-Sync for?

How to easily detect some underage kid

How about if you don't have hardware acceleration on the desktop? (So basically, DWM off on Windows, for whatever reason)
In 3D applications I can enable vsync and the problem is solved, but that doesn't work so well on the desktop or in windowed applications like the browser.
Would using a higher refresh rate solve the tearing issues? At 60 Hz the tearing without DWM is pretty awful.


But vsync limits the fps to the screen's refresh rate.
With vsync, if I understand correctly, the GPU renders one frame as soon as possible, waits until the screen refresh, sends the image to the monitor to get drawn, and repeats.
Why not just render all the frames you can and send the latest complete one to the monitor?

It obviously doesn't work that way without vsync; that's why you get desynchronization. It tries to show multiple frames during the same cycle of the monitor, which is very noticeable at 60 Hz.
Vsync exists exactly to avoid that, synchronizing what the GPU outputs with the monitor's speed, and it should always be used. Just because the tearing isn't noticeable at 144 or 240 Hz doesn't mean it's correct behavior to send the monitor more than it can handle.
You probably notice the input lag because you have a poor monitor with a huge response time; with a good 60 Hz monitor I don't think you should see any delay with vsync.

Triple buffering removes tearing; vsync is a meme.

>why not just render all the frames you can and send the latest complete one to the monitor

That's what it does when you disable vsync.
Which leads to screen tearing.

Again: the problem is the refresh rate of the screen, which isn't easy to fix (although 240Hz screens do exist now, which I think should be plenty).

>If vsync is already that smart, what the hell are FreeSync and G-Sync for?

All they do is reduce the negative effects of vsync.
They can never solve them completely.

Tearing is OK in games since it means less input lag. It sucks on the desktop, though.

Triple buffering is a form of vsync, dummy.

I do have a cheap monitor, but without vsync I don't feel any input lag.
How the FUCK would sending only the most recent completed frame for the screen to draw result in screen tearing?
If I understand correctly, without vsync, when the GPU hasn't rendered a full frame it just goes "fuck it, that'll do" and writes the unfinished part over the last full frame, which are different images when moving, resulting in tears.

Maybe if you are playing competitively, but if you were, you wouldn't be using a 60 Hz screen.
For casual play, screen tearing is awful.

One thing I never understood:
Does Vsync capture input, render the frame, and then wait? Or does it know that the previous frame was rendered in, say, 8 ms, so it waits roughly 8 ms, then polls input and draws the screen in order to minimize input lag, at the risk of missing the vertical refresh?

It doesn't cap framerate and doesn't add latency, so I wouldn't say that.

>if I understand correctly, without vsync, when the GPU hasn't rendered a full frame it just goes "fuck it, that'll do" and writes the unfinished part over the last full frame

No, that's not how it works at all.
The GPU never shows unfinished frames; only completed frames are sent to the output buffer.

What happens is:
- The screen always reads from the output buffer LINE BY LINE.
- So if it's a 60Hz screen it takes 1/60s to read the whole output buffer.

When the GPU is done with the next frame it can do two things:
A) replace the output buffer immediately (no vsync)
B) wait for the screen to reach the last line before replacing the output buffer (vsync)

The tearing is simply the effect of the output buffer being replaced while the screen was reading it.
Imagine if you're reading a book and someone suddenly flips the page halfway through: you would make an abrupt jump in the story line.
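A minimal Python simulation of that line-by-line read (purely illustrative):

# The display reads the output buffer one line at a time; if the buffer is
# replaced mid-read (no vsync), the top and bottom of the displayed image
# come from different frames. That seam is the tear.
LINES = 10
frame_a = ["A"] * LINES
frame_b = ["B"] * LINES

def scanout(get_buffer):
    """Read one full screen, asking for the current buffer before each line."""
    return "".join(get_buffer()[line] for line in range(LINES))

state = {"buf": frame_a, "lines_read": 0}
def torn_buffer():
    state["lines_read"] += 1
    if state["lines_read"] == 6:   # GPU swaps buffers mid-scanout
        state["buf"] = frame_b
    return state["buf"]

print("no vsync:", scanout(torn_buffer))       # AAAAABBBBB  (torn)
print("vsync:   ", scanout(lambda: frame_b))   # BBBBBBBBBB  (clean)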

>It doesn't cap framerate and doesn't add latency

Oh yes it does!

Double buffering means:
- Screen reads from output buffer.
- GPU writes to work buffer.
- The buffers are flipped around (work becomes output and output becomes work) at the vsync signal.

This means the GPU can't do anything between the time it finishes and the time the buffers are flipped, so in comes triple buffering:
- Screen still reads from one buffer.
- GPU still writes to work buffer.
- When GPU is done it swaps work buffer for a "staged" buffer.
- At vsync the output buffer and staged buffers are swapped.

Without vsync you're always doing double buffering.
Triple buffering without vsync makes no sense.
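A small Python sketch of those buffer roles (names made up for the sketch):

class TripleBuffer:
    """With triple buffering the GPU never blocks: it keeps overwriting the
    staged frame, and the newest completed frame is picked up at each vsync."""
    def __init__(self):
        self.output = 0       # frame the screen is currently reading
        self.staged = None    # newest completed frame, waiting for vsync

    def gpu_finished(self, frame):
        self.staged = frame   # an older staged frame (if any) is simply dropped

    def on_vsync(self):
        if self.staged is not None:
            self.output, self.staged = self.staged, None
        return self.output

tb = TripleBuffer()
tb.gpu_finished(1)
tb.gpu_finished(2)        # both rendered before the vsync; frame 1 is dropped
print(tb.on_vsync())      # screen shows frame 2
tb.gpu_finished(3)
print(tb.on_vsync())      # screen shows frame 3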

>how the FUCK would sending only the most recent completed frame for the screen to draw result in screen tearing?
It doesn't matter if the GPU draws 1 image or 500; the problem is that it has to wait for the monitor's cycle to send that frame, and until that happens it sits idle. And if your card isn't fast enough to draw a frame before the cycle ends, it will wait until the next cycle to finish the frame and send it. Maybe one of those scenarios is causing the lag you notice.
Or possibly the monitor's response time could affect this. I'm not sure, because I never noticed any lag at 60 Hz with vsync.

>Does Vsync capture input, render the frame, and then wait?
If it renders the frame before the monitor cycle ends, yes, it will wait until the next cycle to continue working.

Small correction:

With double buffering the buffers are only swapped IF the GPU is done rendering the next frame before the vsync signal occurs.
If not, it does nothing, and the screen simply re-draws the last frame.

Cool. Except I said TRIPLE buffering

And you didn't read the 2nd half of that post?

With triple buffering the GPU isn't blocked at any stage, output can happen as soon as possible, and there won't be any tearing because the displayed frame is always complete.

/thread

"vsync" commonly refers to "double-buffered vertical syncronization", which means you have two framebuffers (blocks of memory which hold the screen contents)
the idea is that you have a "front" buffer, which is what is being sent to the screen, and a "back" buffer, which is what the program is drawing to. the program draws to the hidden backbuffer, while the frontbuffer is being scanned out to the display, once the frontbuffer has been sent out completely, there's a pause, called the vblank period, before the next frame gets sent out
during this vblank period, the buffers are "flipped", meaning they switch places, making the newly-drawn backbuffer contents the new frontbuffer, which gets scanned out next
this whole process gives software time and assurance that each visible frame is exactly what they want it to be, with no tearing (tearing being caused by the frontbuffer (or only buffer, as the case probably is) being updated by software /while/ being scanned out to the display)
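For concreteness, this is roughly what opting into double-buffered vsync looks like from the application side, assuming the pyGLFW bindings (pip install glfw); a swap interval of 1 asks the driver to hold each buffer swap until the next vblank:

import glfw

if not glfw.init():
    raise RuntimeError("GLFW init failed")
window = glfw.create_window(640, 480, "vsync demo", None, None)
if not window:
    glfw.terminate()
    raise RuntimeError("window creation failed")
glfw.make_context_current(window)
glfw.swap_interval(1)            # 1 = wait for vblank (vsync on), 0 = don't wait

while not glfw.window_should_close(window):
    # ... draw into the back buffer here ...
    glfw.swap_buffers(window)    # with vsync on, this blocks until the flip
    glfw.poll_events()

glfw.terminate()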

Most of the input lag problems come from poor implementations of vsyncing.

PS: it's also possible to do vsync with only one buffer, but it requires the software to be written in a strictly timed manner and to run with realtime priority.
Old consoles and the like did this. Since the games had total control over the hardware, the software knew what would happen and when, as it was doing everything itself.
Modern software on multi-tasking OSes with varied hardware configurations doesn't have this luxury and must be written to tolerate timing inaccuracy.
Double buffering, for example, gives software one frame's worth of time to draw the next frame; the only timing requirement is that it's done before the next vblank.

If a game renders at 200 fps (5 ms/frame) and you enforce vsync at 60 Hz (16.6 ms/frame), then there obviously has to be 11.6 ms of wait time.
You would have to first poll the input and then start rendering. But do you wait 11.6 ms, thus adding 11.6 ms of input lag, or do you delay rendering so that the input the frame is rendered from is more recent?

The naïve approach would be for the software to render as quickly as possible, which makes the next frame "older" than it needs to be, but with a higher chance of being ready in time.
If the software knows how long it will take to render, it can try to do its rendering closer to the end of the scanout, which results in a more "recent" image, but it then risks being too late if there are unexpected delays during rendering.
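Putting rough numbers on that trade-off in Python (5 ms render time on a 60 Hz display; the 2 ms margin is just an example):

refresh_ms = 1000 / 60    # ~16.7 ms per refresh
render_ms = 5.0

# Naive: poll input and render immediately after the vblank, then wait.
naive_wait = refresh_ms - render_ms
naive_input_age = render_ms + naive_wait        # input is one full refresh old

# Delayed: start rendering so the frame finishes just before the vblank,
# keeping a small safety margin.
margin_ms = 2.0
delayed_start = refresh_ms - render_ms - margin_ms
delayed_input_age = render_ms + margin_ms

print(f"naive:   wait {naive_wait:.1f} ms, input ~{naive_input_age:.1f} ms old at scanout")
print(f"delayed: start at +{delayed_start:.1f} ms, input ~{delayed_input_age:.1f} ms old at scanout")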

Give a good example of a game where vsync is done well.
I'm genuinely curious, because I've run into it in a lot of games and it's absolute trash.

If I remember correctly, Unreal's was perfect.
I also remember that in Dead Space, if you enabled Vsync in-game the input lag was horrible, but if you enabled it in the driver it was fine (or vice versa, I can't remember which now).

It causes input lag

>Dead Space, if you enabled Vsync
Dead Space forced a 30 Hz rate when using vsync, which is why it was horrible.

I suppose a reasonable solution would be to keep track of recent rendering times, add a little on top of that for error, and use that as the time to begin rendering the next frame.
So, say, if the program rendered the last frame in 8 ms and your display is 16.6 ms/frame (60 Hz), the program could wait 6.6 ms after a vblank before beginning the next frame, which gives it 2 ms of error margin instead of 8.6 ms, but with the advantage of the frame being 6.6 ms "newer" than otherwise.
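A sketch of that pacing idea in Python (the margin, history size and numbers are illustrative, not from any real engine):

from collections import deque

REFRESH_MS = 1000 / 60

class FramePacer:
    """Track recent render times and start the next frame just early enough
    to finish before the vblank, plus a small error margin."""
    def __init__(self, margin_ms=2.0, history=30):
        self.margin_ms = margin_ms
        self.render_times = deque(maxlen=history)

    def record(self, render_ms):
        self.render_times.append(render_ms)

    def start_offset(self):
        """How long after a vblank to wait before starting the next frame."""
        if not self.render_times:
            return 0.0                        # no data yet: render immediately
        expected = max(self.render_times)     # pessimistic estimate
        return max(0.0, REFRESH_MS - expected - self.margin_ms)

pacer = FramePacer()
for t in (8.0, 7.5, 8.2):                     # the last few frames took ~8 ms
    pacer.record(t)
print(f"wait {pacer.start_offset():.1f} ms after the vblank before rendering")
# prints ~6.5 ms, close to the 6.6 ms example above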

That is exactly how double buffering works. You know, the thing that Microsoft and the GPU manufacturers collectively haven't been able to get right, even though they've known about it for 30 years.

Why is double buffering still a thing? It chokes the GPU for a minuscule memory saving.

I'm too used to very little input lag. Playing anything with vsync feels like everything is in slow motion

The thing you described is triple buffering, FYI:
- one buffer holds the frame being sent to the display
- a second buffer holds the last complete frame
- a third holds the frame being rendered right now
That's not how it works out in practice, though. Such a game renders 2 to 4 frames in each 16.6 ms period and only the newest image is output, making the resulting wait 8.3 ms at most and the total latency 24.9 ms max and 16.6 ms min. The smaller the difference, the bigger the jitter.
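Rough Python numbers behind that, for an uncapped frame rate with only the newest frame shown at each vblank:

# Latency measured from when the shown frame was finished to when the bottom
# of the screen displays it: between one refresh period and refresh + render.
refresh_ms = 1000 / 60

for fps in (120, 240):
    render_ms = 1000 / fps
    lat_min = refresh_ms
    lat_max = refresh_ms + render_ms
    print(f"{fps} fps: latency {lat_min:.1f}-{lat_max:.1f} ms, jitter {lat_max - lat_min:.1f} ms")
# 120 fps gives roughly the 16.6-24.9 ms range quoted above; a higher frame
# rate shrinks both the worst case and the jitter.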

Basically, triple buffering would already be ideal even without FreeSync, if games were able to render several times more fps than the refresh rate.

>The smaller the difference, the bigger the jitter.
I mean the difference between the display refresh rate and the FPS, and the jitter of the output lag.

Unless you're using a controller, lower input lag is always preferable to less screen tearing. Cap your frame rate at a multiple of your refresh rate if the tearing is really that bad, but playing with keyboard and mouse with vsync on is fucking awful.