/mpv/ - open-source, cross-platform media player

Last thread:

Installation:
mpv.io/installation/

Wiki:
github.com/mpv-player/mpv/wiki

Manual:
mpv.io/manual/stable/

User Scripts (including OpenGL shaders):
github.com/mpv-player/mpv/wiki/User-Scripts

input.conf:
github.com/mpv-player/mpv/blob/master/etc/input.conf

Vulkan (Linux only for now):
github.com/atomnuker/mpv

Test Vulkan and post logs if it gives you any kind of problem.

So is there a script or something to drag-and-drop external audio tracks onto the currently playing video?

Not sure, but why though? mpv can load external audio tracks automatically — look up the audio-file-auto option in the manual.
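For reference, enabling it is a single line in mpv.conf — a minimal sketch with an illustrative value (see the manual for what each matching mode does):

# mpv.conf
audio-file-auto=fuzzy   # load external audio files whose name roughly matches the video
# other accepted values: no, exact, all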

Linked MKV support, volume control, a context menu, making your own wiki so you don't have to spam Sup Forums with these threads, and an apology to madshi for sitting in his thread 24/7 stealing his ideas... when?

>madshi
Who's that?

the main developer of VLC

Is it that one, or the other player pushing a billion downloads?

Thanks.

I don't know, but mpv runs on 3 billion devices

bjin, gib RAVU.

github.com/bjin/mpv-prescalers/commits/ravu
what's there to give?

Most software will run on billions of devices. That says nothing.

madVR will only run on 4.66% of desktop PCs though (Windows XP)

I need a ready to use shader file.

mpv won't run on XP, which is a bigger install base than it has downloads.

Raaaaaaaaaaaavuuuuuuuuuuuu

Why should I use this instead of KCP?

Do you have something against volume controls and sensible keybinds, and a burning desire to spend 12 hours in a Sup Forums help thread each and every time you want to watch a video?

Are there any Windows builds with vapoursynth support yet?

mpv gives me better colors in anime than mpc-hc+madvr for some reason. Truly made by weebs for weebs.

Reset the saturation.

Epic.

So download it and generate the GLSL file?
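Once you have a generated hook file, loading it is one config line — a sketch, assuming a current build where the option is still called opengl-shaders, and with an illustrative file name:

# mpv.conf
opengl-shaders="~~/shaders/ravu-r3.hook"   # ~~/ expands to the mpv config directory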

Do you have an Nvidia GPU?

There are the jenkins builds...

No. AMD.
Literally who is that?

There is "jenkins" in the URL, but I don't think it's a name.

Is RAVU intended to be faster than NNEDI3?
Is RAVU intended to be better than NNEDI3?
Is RAVU a competitor to FSRCNN?

The SVP authors are providing mpv + vapoursynth binaries for their Pro customers.

Yes to all three. Plus it's supposed to BTFO NGU.

Does that mean they paid to get FOSS software?

MPV is master race

>inb4 public tracker rip

Why are the controls over the picture?

Only when you move your cursor.

it's only there if I move my mouse. I like it that way
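For what it's worth, that behaviour is configurable — a minimal mpv.conf sketch, assuming the built-in OSC script:

# mpv.conf
script-opts=osc-visibility=auto   # auto = show on mouse move; other values: always, never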

What is that? 2/2 Blackberrys? Two pagers?

two subtitles

Why wouldn't the speech bubble be subs?

are you trolling? because that's the audio track

>speech bubble
>speech
Good question, user.

A speech bubble is the visual representation of spoken language, i.e. subs.

What would you make subs look like?

...

Pretty much.
The difference between SVP Free and Pro is the ability to fine tune some options (not really important) and being able to use SVP with VLC and mpv/Plex/IINA/etc.

In the case of mpv, they provide you with mpv+vapoursynth builds.

Why do you know this?

It's all public information, listed on their wiki...?

Wanted to quote of course

Why'd you read their wiki?

To not stay an uninformed fuck?

Very suspicious...

Yes, looks like a Russian secret agent...

shinchiro, I'm waiting for the new build tomorrow, thanks! ;)

The ones I found appear to be missing the necessary DLLs.
Why do I have to pay for something that I am supposed to be able to obtain for free?

Yes, SVP also sells stolen code from the GPL'd MVtools

>Why do I have to pay for something that I am supposed to be able to obtain for free?
wat? SVP has always been paid proprietary garbage

Why is the UI baked into the frame? Such a bad decision.
Also, does mpv contribute anything to decoders and backend stuff, or does it just repackage them into a lackluster UI and slap its branding on it?

>Why is the UI baked into the frame?
It isn't?

>Also, does mpv contribute anything to decoders and backend stuff, or does it just repackage them into a lackluster UI and slap its branding on it?
lol this guy

>it isn't?

It fucking is, you idiot. The UI pops up and lies on top of the video frame, covering it up and disrupting the video. It's like having a music player that plays a little fucking jingle on top of the audio every time you change songs.

Don't worry, I'm sure somebody will come along and write a script to send the OSD messages to your lineprinter so they don't get in the way of your anime

This is "disrupting the video" ahaha

Doesn't mpv+vapoursynth+mvtools handle real time interpolation just like SVP?

Good job avoiding the issues I was bringing up, user... Wikipedia doesn't even go so far as to explain why anyone would bother using this piece of shit. It says it's a media player:
>Amedia playeris acomputer programfor playingmultimediafileslike videos movies and music
Is my web browser a media player? Is YouTube a media player? Is the driver for my sound card a media player? Is Windows a media player? Is the program in my amp a media player?
I see that mpv has been repackaged into...Baka MPlayer, aqt5-based front-end[15]GNOME MPV, a simpleGTK+front-end[16]SMPlayer, can be built with mpv instead of MPlayerbomi (formerly CMPlayer) aqt5-based front-end[17]IINA, a modern and feature-rich player based on mpv for macOS.

Why doesn't mpv read ebooks? Printed media isn't dead to me!
Why can't I make VoIP calls through mpv? I bet I can't even surf the net with mpv.
It's awfully cumbersome to cook in a UI built with a low-level graphics programming language.

If you have a powerful enough CPU? In principle, I suppose.

VapourSynth is sort of weird, though. Not really ideal for this sort of thing. If you could convince the MVTools people to come up with a better API than the VS horseshit, then it could be done better, perhaps even directly in mpv / GPU-accelerated.
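For anyone wondering how the mpv + VapourSynth + MVTools route is usually wired up: a .vpy script is passed to mpv's vapoursynth filter. A rough sketch, assuming the MVTools VapourSynth plugin is installed; block size and target fps are purely illustrative:

# motion-interp.vpy — used as: mpv --vf=vapoursynth=motion-interp.vpy video.mkv
import vapoursynth as vs
core = vs.core  # on older VapourSynth builds: core = vs.get_core()

clip = video_in  # mpv injects the source clip into the script as video_in

sup  = core.mv.Super(clip, pel=2)                    # multi-resolution "super" clip for analysis
bvec = core.mv.Analyse(sup, isb=True,  blksize=16)   # backward motion vectors
fvec = core.mv.Analyse(sup, isb=False, blksize=16)   # forward motion vectors
clip = core.mv.FlowFPS(clip, sup, bvec, fvec, num=60, den=1)  # motion-compensated 60 fps

clip.set_output()

Whether it keeps up in real time is another question — that's the CPU-bound part people complain about.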

and he also outs himself as a stupid phoneposter that can't copy/paste correctly

this just keeps getting better

Looks like the paste went through on my tablet.
I deal with mediocre software every day, so try not to let it get to you.
(we're on 4 c h a n)

Looking into their forums, it seems like they reached an agreement with the original MVTools author but didn't disclose it. So it should be impossible to sue them.

SVP runs on the GPU and can do this with a low-voltage dual-core i5's iGPU (my laptop can do it) at high quality settings, no problem, while MVTools runs on the CPU and even an i7 4770K struggles with 720p and mediocre settings.

Jesus Christ, user, learn English, then learn to type, then come back.

I just downloaded the latest build from mpv.srsfckn.biz/

When I go fullscreen, I can't Alt+Tab like I used to. The fullscreen video stays on top. How do I disable this?

Also, the seekbar/toolbar at the bottom of the video keeps blocking the subtitles. Is it possible to shift the bottom bar to the top?

haasn, have you tried training FSRCNN using bicubic instead of Lanczos (ANTIALIAS in utils.py)?

Is there a Windows installer that will update itself? I don't like standalone exes.

FSRCNN trained with scale 2 (10 epochs, PSNR)

That's a nice mosaic picture!
Good collage, kid!

New build is out boys! :O Thank you shinchiro!

The SourceForge build has an updater script, but you have to unzip a build first (once) into a non-restrictive folder. You can also create your own Inno Setup installer; there are HTTP/download functions included, so it's easy to auto-download the latest SourceForge build and create a scheduled task for it.

Sounds like too much hassle. I will stay with VLC.

Lazy and fatty kids are not allowed here!

foozoor is not allowed here

Back to Doom9 I go.

I installed madVR and now everything OpenGL plays at like 10 frames per second.

How do I fix this?

>everything OpenGL
Huh?

Do not run madVR at the same time!

I have perfect playback on a T420 with Windows 8.1 and stutter on Linux Mint. Can someone recommend a GNU/Linux-friendly config?

That's unlikely to have anything to do with the madVR installation, because madVR doesn't even become active unless you do something with DirectShow.

Try reinstalling your GPU drivers.

Where is the NGU shader? It's not on the user-scripts page!

what NGU shader?

You can change the OSC layout to topbar or increase the subtitle margin. Check the manual on mpv.io.
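Concretely, either of these in mpv.conf should cover it — a sketch assuming the built-in OSC; option names as in the manual, margin value illustrative:

# mpv.conf
script-opts=osc-layout=topbar   # move the on-screen controller to the top edge
# or keep the bottom bar and push the subtitles up instead:
#sub-margin-y=60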

What about the video being always on top when fullscreen? Do you have this problem too?

FSRCNN 2x scaling works like 3x now.
I think I know how to make it work completely correctly, but I don't know how to fix this error:
>Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 6, computed = 10

Not sure if it helps you, but this diff seems to fix upscaling with different strides/scales (for me):

diff --git a/utils.py b/utils.py
index a9a6541..098e875 100644
--- a/utils.py
+++ b/utils.py
@@ -221,8 +221,8 @@ def train_input_setup(config):
         else:
             h, w = input_.shape

-        for x in range(0, h - image_size - padding + 1, stride):
-            for y in range(0, w - image_size - padding + 1, stride):
+        for x in range(0, h - label_size + 1, stride):
+            for y in range(0, w - label_size + 1, stride):
                 sub_input = input_[x + padding : x + padding + image_size, y + padding : y + padding + image_size]
                 x_loc, y_loc = x + label_padding, y + label_padding
                 sub_label = label_[x_loc * scale : x_loc * scale + label_size, y_loc * scale : y_loc * scale + label_size]


Here is the output of my partially trained 2x FSRCNN (MS-SSIM)

Shiandow's experimental deband. Yay or nay?

My changes:
@@ -224,8 +224,8 @@
         for x in range(0, h - image_size - padding + 1, stride):
             for y in range(0, w - image_size - padding + 1, stride):
                 sub_input = input_[x + padding : x + padding + image_size, y + padding : y + padding + image_size]
-                x_loc, y_loc = x + label_padding, y + label_padding
-                sub_label = label_[x_loc * scale : x_loc * scale + label_size, y_loc * scale : y_loc * scale + label_size]
+                x_loc, y_loc = x * scale + label_padding, y * scale + label_padding
+                sub_label = label_[x_loc : x_loc + label_size, y_loc : y_loc + label_size]

                 sub_input = sub_input.reshape([image_size, image_size, 1])
                 sub_label = sub_label.reshape([label_size, label_size, 1])
@@ -268,8 +268,8 @@
             for y in range(0, w - image_size - padding + 1, stride):
                 ny += 1
                 sub_input = input_[x + padding : x + padding + image_size, y + padding : y + padding + image_size]
-                x_loc, y_loc = x + label_padding, y + label_padding
-                sub_label = label_[x_loc * scale : x_loc * scale + label_size, y_loc * scale : y_loc * scale + label_size]
+                x_loc, y_loc = x * scale + label_padding, y * scale + label_padding
+                sub_label = label_[x_loc : x_loc + label_size, y_loc : y_loc + label_size]

                 sub_input = sub_input.reshape([image_size, image_size, 1])
                 sub_label = sub_label.reshape([label_size, label_size, 1])
@@ -361,7 +361,7 @@
     return g / tf.reduce_sum(g)


-def tf_ssim(img1, img2, cs_map=False, mean_metric=True, size=11, sigma=1.5):
+def tf_ssim(img1, img2, cs_map=False, mean_metric=True, size=10, sigma=1.5):
     window = _tf_fspecial_gauss(size, sigma) # window shape [size, size]
     K1 = 0.01
     K2 = 0.03

But to work correctly, it should only be used with scale_factors in model.py set to [10, 20] instead of [14, 20].

bjin, is that you? I'm a huge fan!

It also looks better with my patch

>freetards will settle for worse quality

Also, I used BICUBIC instead of ANTIALIAS.
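For context, that's a one-argument change in the downsampling step that generates the low-res training inputs — a hypothetical sketch of that line, assuming PIL is doing the resize in utils.py:

# illustrative: making the low-res input from the ground-truth image
from PIL import Image

scale = 2
hr = Image.open("ground_truth.png")
w, h = hr.size
lr = hr.resize((w // scale, h // scale), Image.BICUBIC)   # was Image.ANTIALIAS (i.e. Lanczos)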

who r u?

"igv"

Why did KCP have to die? I hate using mpv.

Digital video is a nightmare, from my cursory understanding. My points about the shit GUI aren't really relevant if the project has technical merit and anons get an understanding of how to untangle and consolidate the myriad of different technologies involved in interpreting moving pictures and audio files. In my opinion mpv ought to try to be a sort of Rosetta Stone for non-interactive media.
KCP is the comfiest software! It's clunky but I love it and still use it by default on my desktop.

Damn smart phone LMAO more like dumb phone xd mm FCK autocorrect UnU