/mpv/ - the /g/reatest media player

Installation:
mpv.io/installation/

Wiki:
github.com/mpv-player/mpv/wiki

Manual:
mpv.io/manual/master/

User Scripts (including opengl shaders):
github.com/mpv-player/mpv/wiki/User-Scripts

High quality video output profile (goes into mpv.conf):
profile=opengl-hq
mpv.io/manual/master/#configuration-files

Other urls found in this thread:

youtube.com/watch?v=ur8D-kWvsss
youtube.com/watch?v=Wjb6CSe4708
wiki.archlinux.org/index.php/Mpv#SVP_4_Linux_.28SmoothVideoProject.29
sniklaus.com/publications/sepconv
svp-team.com/wiki/Manual:SVPflow
bluesky23.yukishigure.com/en/BlueskyFRC.html
pastebin.com/raw/bYCVbDFd
github.com/igv/FSRCNN-TensorFlow/releases
github.com/mpv-player/mpv/issues/2685
youtube.com/watch?v=aT-oXygGf24
github.com/mpv-player/mpv/issues/2685#issuecomment-250985264

*mpv.net

Haasn is not a .NET retard but a Haskell one!

Hi, mpv people!

On Windows there is a player called Splash Pro EX. It somehow "adds" more framerate to the video artificially; I don't know how.

Does MPV have a similar feature to that?
youtube.com/watch?v=ur8D-kWvsss

Who? Does he work on mpv.net?

you mean interpolation, in which case, yes.
video-sync=display-resample
interpolation
tscale=oversample

Okay, I got this now
x11-bypass-compositor=no
profile=opengl-hq
scale=ewa_lanczossharp
cscale=ewa_lanczossoft
dscale=mitchell
scale-antiring=0.7
cscale-antiring=0.7
dither-depth=auto
correct-downscaling=yes
sigmoid-upscaling=yes
deband=no
volume-max=100
hwdec=auto

volume=35

video-sync=display-resample
interpolation
tscale=oversample

dscale=mitchell
scale-antiring=0.7
cscale-antiring=0.7
dither-depth=auto
correct-downscaling=yes
sigmoid-upscaling=yes
delete this

who the fuck cares about mpv.net? it has no features or improvements over the default cplayer and uses the shitty broken wid embed mechanism.

antiringing is currently disabled for ewa scalers

Done

Good. They're already on profile=opengl-hq and

Are there any config files that have all these maximum tweaks turned on, so that I can download them into the scripts directory and forget about it?

shiggy diggy

vo=opengl-hq:scale=ginseng:cscale=ginseng:swapinterval=1:temporal-dither:dither-depth=auto:fbo-format=rgba16f:scaler-resizes-only:sigmoid-upscaling:pbo:interpolation

upgrade your mpv to something from this year fucking hell

Hello 2015.

mpv does not have a feature like that built in but mpv can be configured to use SVP as input. SVP implements that feature as well as black bar lighting.

Here's a video.
youtube.com/watch?v=Wjb6CSe4708

You can read more about it here:
wiki.archlinux.org/index.php/Mpv#SVP_4_Linux_.28SmoothVideoProject.29

Note, creating a profile is not the best way to use it and the mpv arch wiki page currently contains a lot of errors (the profile section in particular is wrong).

There's a lot of confusion in terminology, since a lot of it gets used for different things in different circles. The sort of interpolation that Splash Pro's Motion2 and SVP implement is called "motion interpolation". The techniques they use, however, tend to produce a lot of bizarre artifacts that look like garbled limbs and wobbly movements. For instance, in the youtube video you posted, pay very close attention to the human during the 0:19-0:21 period. For SVP you can notice the artifacts in the video I linked by looking closely at the credits text as the video plays. That isn't to say that good motion interpolation is impossible, but it is probably still a few years away from being practical. Here is an example of a cutting-edge technique using deep neural nets on high-end graphics hardware that runs very far from real time.
sniklaus.com/publications/sepconv

Whether or not SVP is a good idea to use may depend on the content you are viewing (eg. action vs drama).

The normal behavior of video players is 3:2 pulldown, which uses frame repetition to go from a lower framerate to your display's refresh rate (e.g. 60 Hz).

mpv's "smoothmotion" technique is different from motion interpolation. Instead it is an improvement over 3:2 pulldown that interpolates some frames (the actual frames themselves, not motion within the frame) so that the repeated frames are played a more even number of times (e.g. 2.5:2.5 pulldown).
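For reference, this smoothmotion-style frame blending is what the options quoted earlier in the thread enable; in mpv.conf that's:

```
# enable mpv's frame blending ("smoothmotion")
video-sync=display-resample
interpolation=yes
# oversample keeps frames sharp; mitchell blends more smoothly
tscale=oversample
```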

>SVP is now closed source
dropped

svp-team.com/wiki/Manual:SVPflow

SVP has always been closed source

It seems pretty easy to decompile, but who could like that soap opera effect anyway!

>mpdn was/is a true shit
>madvr destroyed mpdn
>mpv destroyed madvr

>The sort of interpolation that Splash Pro's Motion2 and SVP implement is called "motion interpolation".
How do you know? I checked their website and found no explanation of what they are actually doing.

Also keep in mind that SVP is a fork of MVTools and while it's slower, it's definitely usable.
Could also be they are using something like bluesky23.yukishigure.com/en/BlueskyFRC.html (that is, AMD's FluidMotion), because their website only lists very specific conversions (24,30 -> 60), which is also the case for this AMD shit, while SVP/MVTools can do arbitrary conversions.

Damn why is nnedi3 still that slow with madvr?
And damn why has he still not optimized his ngu-aa for animes to compete with ravu?

Isn't madVR way older than MPDN??

>mpff will destroy mpv

Maybe, but madVR totally destroyed MPDN, though mostly because its dev gave up. Now mpv beats everything; its EWA scalers are 4x faster than MPDN's, and that's truly awesome for an OpenGL-based program!

There are 50 billion motion interpolation techniques and on average, not a single one of them is going to be identical - or perhaps even comparable - to any other one.

What MVTools and SVP are doing is based on block-by-block motion vector analysis. They use a block search algorithm (similar to x264 etc.) to find similar blocks in both frames in order to determine the local motion vector for that block. This motion vector is then interpolated per-pixel, while also accounting for things like occlusion masking; the pixels are then reflowed along the interpolated motion vectors to form the intermediate image.

SVP also implements some heuristics to turn off the pixel-based interpolation for blocks that are too dissimilar; which you can tune via the settings. (Relaxing this heuristic makes the interpolation more aggressive but results in way more artifacts). Not sure if SVP does this but it would also be possible in principle to aggregate the per-block motion into a global motion vector to detect pans and only smooth out those; without affecting “action” too much.
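The per-block search described above can be sketched in a few lines. This is illustrative only: the function name is made up, it's a brute-force SAD search, while real MVTools uses hierarchical/diamond search patterns and then does the per-pixel reflow step on top.

```python
def block_motion_vector(prev, curr, by, bx, bsize=8, radius=4):
    """Find the motion vector for one block of `curr` by exhaustive
    SAD (sum of absolute differences) search over `prev`."""
    h, w = len(prev), len(prev[0])
    best_sad, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            # skip candidate blocks that fall outside the previous frame
            if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
                continue
            sad = sum(abs(curr[by + i][bx + j] - prev[y + i][x + j])
                      for i in range(bsize) for j in range(bsize))
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```

With a bright square that moves 2 pixels up-left between two frames, the search recovers the offset (-2, -1) for the block covering the square.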

Wait and see...

Natively inside mpv when?

> mostly because his dev gave up
Sad to hear; their devs did more good than madVR ever did. Their shaders are open source, and bjin and igv based their work on them, further optimizing the algorithms for much better performance.

I think .net was a bad idea...

Did you reply to the wrong person?
Or maybe I worded that badly. I wasn't asking what SVP is doing, I know that already (well, not really, but I do know MVTools and just assume they didn't change too much). I wondered how you can tell that this mentioned media player is doing what SVP is doing. They spout a few fancy-sounding buzzwords but never mention what they actually do.
For all we know, they could do some cheap stuff like mpv's interpolation and use incorrect terminology for marketing purposes.

Would be easier if there were a decent API for MVTools. Haasn declines to touch it without one. But he also doesn't want to make one himself so...

Shiandow should give up too or join the mpv commiters.

Good? The whole project was cancer.
MadVR at least had the decency to have it all proprietary. The MPDN guy tried to have a proprietary player and at the same time leech work from the open source community without giving anything back.
Yeah, no thanks.

>Decent C API.
Does it even exist?

Whenever somebody gets bored enough to implement it, I guess. Block motion search wouldn't actually be too difficult, the difficult part would probably be pixel re-flowing while accounting for occlusion.

There was also this recent discussion about a paper that implements motion interpolation by using a convolutional neural network to generate a convolution mask directly, rather than using it to try and detect optical flow / motion vectors. This allows it to innately learn how to deal with masking/occlusion, blurry regions, artifact avoidance, etc - while also being easier to train blindly on regular video files rather than specially labelled optical flow training sets. The downside of this technique is that it would be pretty slow to implement without some major breakthrough; although the separable convolution approach might have some merits here.
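The separable-convolution idea can be illustrated with a toy sketch. Everything here is assumed for illustration: the function name, the tiny 2x2 patches, and the hand-picked kernels; the actual technique predicts large per-pixel 1D kernels with a CNN.

```python
def sepconv_pixel(patch1, patch2, kv1, kh1, kv2, kh2):
    # One output pixel: each input frame's patch is filtered by a predicted
    # vertical (kv) and horizontal (kh) 1D kernel, and the two filtered
    # results are summed. The kernel masses of the two branches together
    # determine how the frames are blended and where detail comes from.
    def apply(patch, kv, kh):
        n = len(kv)
        return sum(kv[i] * kh[j] * patch[i][j]
                   for i in range(n) for j in range(n))
    return apply(patch1, kv1, kh1) + apply(patch2, kv2, kh2)

# With uniform kernels whose masses are 0.5 per branch, this degenerates
# to a plain 50/50 frame blend:
mid = sepconv_pixel([[10, 10], [10, 10]], [[20, 20], [20, 20]],
                    [0.5, 0.5], [0.25, 0.25], [0.5, 0.5], [0.25, 0.25])
# mid is 15.0, halfway between the two frames
```

The interesting part is that non-uniform kernels let the network shift and sharpen content along motion, not just cross-fade, which is why it handles occlusion better than naive blending.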

>Did you reply to the wrong person?
I was replying to the discussion as a whole. My point was that just because two things both advertise “motion interpolation” doesn't mean they're even remotely doing the same thing.

Doable in one week by motivated haasn! ;)

And I'm claiming they might not even do motion interpolation, so once again I ask you: how do you know?

Know what??

yes, much more good. mpdn ported nnedi3 from opencl to shaders, implemented ssim shaders and gave them back. madshi leeched much more foss and did the bare minimum to uphold the licenses.

Are you a retard?

>decide to look through the mpv source code
>look for main function to start reading the code
>see this:
pastebin.com/raw/bYCVbDFd

what the FUCK guys, why are there so many?

which one is the correct main function???

best shaders for chinese cartoons pls?
I like em sharp and edgy, like my personality

What's the decoder+vo framedrop mode for? The manual only says it's not recommended. Is it simply less efficient than decoder or vo mode alone?

osdep/main-fn-unix.c, osdep/main-fn-win.c and osdep/win32-console-wrapper.c for windows, and osdep/main-fn-cocoa.c and osdep/macosx_application.m for mac os
why different ones for windows and mac os? the source code explains better than i can here; windows does funny stuff with regards to the differences between console and gui applications, and mac os needs the NSApplication singleton to use cocoa features.
the real cross platform entry point is mpv_main() in player/main.c

FSRCNN.

further clarification: mpv uses cocoa to get access to mac os features, and cocoa requires the start of execution to be on the main thread, and on windows, main-fn-win.c fixes stdout, stdin and stderr, fixes some windows nonsense (loading DLLs from %PATH%, heap corruption checking, removing some error boxes) and converts windows' UTF-16 command line to UTF-8.

I see, thank you for the explanation.

>FSRCNN
>Prescaler based on layered convolutional networks. Pretty slow, so mostly suitable for still images rather than realtime video playback.
user why

anime watchers have vouched for it in this thread for realtime playback before

I have a 750 Ti; anything you'd suggest adding/changing, or is this fine?
profile=opengl-hq
scale=ewa_lanczossharp
cscale-ewa_lanczossharp
video-sync=display-resample
interpolation
tscale=oversample

>cscale-ewa_lanczossharp

profile=opengl-hq
opengl-backend=dxinterop
opengl-pbo=yes
opengl-early-flush=no
scale=haasnsoft
scale-clamp=0.5
cscale=ewa_lanczossharp
sigmoid-slope=10.0
vd-lavc-threads=16
vd-lavc-dr=yes
opengl-shader=~~/shaders/ravu-r4-smoothtest1.hook

r8

cscale=bilinear

Hey, this is pretty autistic and retarded, but I messily articulated why trying to get mpv to loop by default was giving me grey hairs. A few threads ago someone helped me get looping by default, so when the topic came up I shared my awkward experience. Not trying to make a personal army request or anything, but I figured the more tech-aware can cringe at how retarded I was, I guess.

interpolation

video-sync=display-resample
tscale=robidouxsharp
interpolation=yes
deband-iterations=2
deband-range=12
temporal-dither=yes
blend-subtitles=no

>cscale-ewa_lanczossharp
typo'd, but don't use ewa_lanczossharp for cscale, just increases ringing and looks worse than spline36 or haasnsoft

>vd-lavc-threads
>opengl-early-flush
unnecessary
>scale-clamp
>sigmoid-slope
placebo
>cscale=ewa_lanczossharp
use bilinear
>scale=haasnsoft
what the fuck
>tscale=robidouxsharp
>deband-iterations
>deband-range
>blend-subtitles=no
why

opengl-backend=dxinterop
scale=ewa_hanning
scale-radius=3.2383154841662362
scale-blur=1.055
cscale=haasnsoft
dscale=mitchell
correct-downscaling=yes
sigmoid-upscaling=yes
sigmoid-slope=10
deband=yes
deband-grain=32
deband-iterations=2
deband-range=12
deband-threshold=48
dither-depth=auto
temporal-dither=yes
video-sync=display-resample
vd-lavc-dr=yes
opengl-shader="~~/shaders/ravu-r4-smoothtest1.hook"
r8

>vd-lavc-threads
actually helps
>placebo
that's the point
>use bilinear
only when the source video is 1080p
>what the fuck
looks better than lanczos
>tscale=robidouxsharp
oversample is too juddery, mitchell is too blurry
>blend-subtitles=no
because i don't want my subs being affected by interpolation or other filters. there's literally no reason for it.

watch out nerds
profile=opengl-hq
opengl-backend=dxinterop
scale=ewa_lanczossharp
cscale=bilinear
video-sync=display-resample
interpolation
tscale=oversample
screenshot-directory='~/Desktop'
msg-color
msg-module
no-osc
no-taskbar-progress
quiet
hr-seek=yes

That's the old version.

Here you go user.
github.com/igv/FSRCNN-TensorFlow/releases
They are very fast.

RAVU > FSRCNN

>oversample is too juddery
Video with oversample can't have any judder, unless it's in the source of course. Yours is just blurrier and therefore looks smoother.

Is it faster than NGU Sharp (Low)?

>Video with oversample cant have any judder
oversample doesn't mask the imperfect interpolation so you can see it skip or 'judder'

Yes, or on par.
They're very different. Both great at what they do.

>Both great at what they do.
can you elaborate on this? they are both attempting to do the same thing so comparing them should be valid

tscale=mitchell :)

Madshi seems pretty quiet lately.
Is he concocting something crazy for us?

fuck off kike

Does he usually interact with people?
I don't follow madvr development.

Why are so many video player devs from Germany?

autismus

>How do you know? I checked their website and found no explanation of what they are actually doing.

You are right to doubt since their website is vague as fuck. The main way I "knew" was by looking at forum posts comparing it with other approaches and watching video samples. Which is to say I don't know for sure. As another user pointed out there are a lot of approaches to this (many newer TVs even have one built in that can be enabled in the settings).

Also, I'd been away from this thread all this time; there's no need to be rude to other people. My understanding is that SVP is based on mvtools2 (or at least it was in earlier versions), so I'm not really sure there's much to be gained from native mpv support, since it produces quite a lot of artifacts as is. I am looking forward to newer techniques, but as you said, it's probably not gonna happen any time soon.

Personally I like the black bar lighting that SVP does far more than the motion interpolation. Now that is something I would like to see getting native support in mpv.

There's some interesting discussion on tscale settings here:
github.com/mpv-player/mpv/issues/2685

ffmpeg's new motion interpolation filter "minterpolate" isn't half bad.
youtube.com/watch?v=aT-oXygGf24

eta till we have the hardware to do it in realtime

just tried it and it was running with like 0.5 fps

Add that to mpv pls!

How do i enable it?

mpv --vf=lavfi=[minterpolate=fps=60000/1001:mi_mode=mci]

Looks pretty good. Way better than mpv native.

Dumbass.

Looks like arse on anime.
mpv doesn't do motion interpolation, only frames blending.

Why is it so slow? I have very low frametimes and CPU load yet it stutters!

Because it's not made for realtime.

Can i convert my videos through mpv with it?

It has no SIMD; completely unoptimized

Probably, but it'd be easier to use ffmpeg directly. It's going to take forever though.
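If you do use ffmpeg directly, the same lavfi filter applies for an offline conversion (filenames are placeholders, and expect it to run far from realtime):

```
ffmpeg -i input.mkv -vf "minterpolate=fps=60000/1001:mi_mode=mci" -c:a copy output.mkv
```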

Can i force big cache, play video at 0.5 fps and then rewind back and replay at full speed since it got cached?

yes if you wait a few hours until the cache is filled...

Even if you could (which I'm not sure) it wouldn't be worth it, be reasonable here.

forget what i just said, it won't work at all

how new is it

...

Not very, some guy was mentioning it a year ago
github.com/mpv-player/mpv/issues/2685#issuecomment-250985264