Why do so many people recompress 24-bit (8bpc) video as 30-bit (10bpc) video? Do they not realize you can't magically add more information by making the bit depth larger than the source's? It's particularly virulent on , which is known for this sort of thing.

10-bit anime is the same as lossless audio: placebo bullshit.

reduces banding

It also breaks hardware compatibility. Honestly, the best argument against pirating anime is the pirates themselves, since everything is either terrible quality or literally unplayable on most devices.

Every shitty PC will be able to play it. If you want to watch stuff on your smartphone, get a different encode; you wouldn't see the benefits on that screen anyway.

The reason it caught on is that it reduces or eliminates banding and reduces file size if used properly.
Like most things adopted with good intentions, it has become a checkbox that bad encoders tick because they assume bigger numbers = better, so it often gets used improperly instead.

It was never intended to be playable on hardware devices. Most hardware players don't even support Matroska containers and require you to repackage into MP4, killing soft subtitles and multiple audio tracks, not to mention linked chapters. That means you'd need a re-encode regardless of whether it's 10-bit or not.

I can't tell the difference between 256 colors and "millions of colors". I just pretend I see the difference when I'm around others so they don't think I'm stupid. I also can't tell the difference between high bitrate audio and low bitrate mp3 files. I just say I can totally hear it when someone confronts me about it. Sometimes I wonder if everyone is just like me and we're all just pretending to notice differences that we can't really discern.

>it reduces or eliminates banding and reduces file size if used properly
What? That sounds wrong, but please explain.

Not everyone lies about it. Some people really can distinguish more colors and/or hear a wider range of frequencies. Most people just want whatever has the biggest numbers and act like it's better so they can feel superior, because they have nothing else in their lives to feel good about.
Personally, if I can't see or hear a difference I say so, or I just grab the version that's good enough, like 720p instead of 1080p for most anime, since almost all of it is produced natively at 720p anyway and the 1080p is just a shitty upscale.

That's what Daiz, the original proponent of 10-bit encoding, showed when he first presented it as an option on Sup Forums.
I can't explain it too well myself, but the gist is that the 10-bit color data can't be displayed directly on your 8-bit display, so when decoding, your video player dithers it down and effectively averages the extra precision out, which gives smoother gradients and color transitions. The practical upshot is that a 10-bit encode can hold similar visual quality at a lower bitrate than an 8-bit encode.
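To make that concrete, here's a minimal toy sketch (plain NumPy, my own illustration, nothing to do with how any real decoder is implemented): a smooth 10-bit ramp truncated straight to 8 bits shows hard steps, while adding a little sub-LSB noise before rounding (a crude stand-in for the dithering a player does) spreads the error into fine grain that the eye averages away.

import numpy as np

# A slow, smooth brightness ramp stored with 10-bit precision (0..1023).
# It spans only ~30 8-bit code values across a 1920-pixel width, the kind
# of shallow gradient that bands badly.
ramp10 = np.round(np.linspace(200.0, 230.0, 1920) * 1023 / 255).astype(np.int64)

# Naive conversion to 8 bits: drop the two extra bits -> long flat bands.
truncated8 = ramp10 >> 2

# Dithered conversion: add sub-LSB noise before rounding so the error is
# spread out as fine grain instead of hard steps.
rng = np.random.default_rng(0)
dithered8 = np.clip(np.round(ramp10 / 4.0 + rng.uniform(-0.5, 0.5, ramp10.shape)), 0, 255)

# Approximate the eye averaging neighbouring pixels with an 8-pixel box blur,
# then measure how far each 8-bit version drifts from the true ramp.
target = ramp10 / 4.0
blur = np.ones(8) / 8.0
err_trunc = np.abs(np.convolve(truncated8, blur, "same") - target).mean()
err_dith = np.abs(np.convolve(dithered8, blur, "same") - target).mean()
print(f"perceived error, truncated: {err_trunc:.3f}  dithered: {err_dith:.3f}")

Once neighbouring pixels get averaged, the dithered version tracks the original ramp several times more closely than the truncated one, which is why the gradient looks smooth instead of stepped.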

If you can see this, you need more bits.

Reminds me of when I had a PC with a 40GB drive.
90% of it was MP3s.
Instead of deleting older MP3s, I decided it would be a good idea to re-encode them at a lower bitrate.
At around 96-128kbps I could hear the difference versus a 192/320kbps file.
I couldn't hear the difference between 192 and 320, though.

Increasing the bit depth minimizes rounding errors while editing. I think what Daiz is talking about is using post-processing to smooth out banding in the raws (which is an artifact of compression at the TV station). I guess after that there's no reason to go back to 8-bit, since the encoder and players support 10-bit.

Disclaimer: I have no idea what I'm talking about.
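For what it's worth, here's a toy version of the kind of post-processing being described (my own NumPy sketch, not Daiz's actual filter chain): a banded 8-bit gradient gets smoothed at higher precision, and keeping the result at 10 bits preserves the in-between values that rounding back to 8 bits would throw away.

import numpy as np

# An 8-bit gradient that spans only 8 code values over ~1000 pixels,
# i.e. the kind of shallow sky/shadow gradient that bands badly.
banded8 = np.repeat(np.arange(100, 108), 128).astype(np.float64)

# Smooth it with a wide box blur computed at full precision
# (a stand-in for a real deband filter).
k = np.ones(65) / 65.0
smooth = np.convolve(banded8, k, mode="valid")

# Store the smoothed result back at 8-bit vs 10-bit precision.
back_to_8 = np.round(smooth)            # steps of 1.0: the bands reappear
back_to_10 = np.round(smooth * 4) / 4   # steps of 0.25: the ramp survives

print("distinct levels kept at 8 bits :", len(np.unique(back_to_8)))
print("distinct levels kept at 10 bits:", len(np.unique(back_to_10)))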

Because it turns out that doing that saves bandwidth.
x264.nl/x264/10bit_02-ateme-why_does_10bit_save_bandwidth.pdf

>Do they not realize you can't magically add more information by making the bit depth larger than the source's?
Do you realize lossy encoding introduces artifacts that were not present in the original source?

>Some people really can distinguish more colors and/or hear a wider range of frequencies.
Spotting compression artifacts actually has little to do with raw ability and much more to do with experience. Kids have better hearing than adults, but you can play them a 96kbps MP3 and they won't even notice that everything above 12kHz is missing. If you don't know what differences to listen for, you won't find them.

Last quote was meant for

Re-encoding dramatically increases the chance of compression artifacts popping up; that's why FLAC exists. 192kbps MP3 is right at the limit of transparency, though.

>tl;dr
Using a 10-bit encoder increases color precision, which results in better compression efficiency and color reproduction even on 8-bit monitors.

8-bit encoder: 256 levels per channel
10-bit encoder: 1024 levels per channel
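Rough sketch of those numbers, and of the OP's complaint (plain Python, illustration only): upconverting an 8-bit source to 10 bits creates no new shades by itself, but it does give intermediate math finer steps to land on.

# Code values per channel at each bit depth.
print(2 ** 8, 2 ** 10)   # 256 vs 1024

# Upconverting an 8-bit value to 10 bits (roughly value * 4) adds no
# information: only 256 of the 1024 codes are ever used.
up10 = {v * 4 for v in range(256)}
print(len(up10))         # still 256 distinct values

# But intermediate processing now has finer steps to round to.
# Example: a 50/50 blend of two adjacent 8-bit shades.
a, b = 100, 101
print(round((a + b) / 2))           # 8-bit pipeline: forced to 100 (or 101)
print(round((a * 4 + b * 4) / 2))   # 10-bit pipeline: 402, i.e. 100.5 exists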

It's bits per channel per pixel, not bits per pixel.

8 bits × 3 channels (RGB) = 24 bits
10 bits × 3 channels = 30 bits
Remake your image with that and literally nobody will be able to tell the difference, because we all have 8bpc screens.

>Remake your image with that and literally nobody will be able to tell the difference, because we all have 8bpc screens
No shit. We're talking about lossy video encoding here. 8-bit encoders leave color banding rampant throughout the video as a side effect of low color precision during encoding. 10-bit encoders leave significantly less banding, especially at the low bitrates everyone uses for YIFY-ish file sizes.

This guy gets it

to have everything in the same bitrate/format

How about 12-bit? I can't seem to encode to it; HandBrake gives me an error and refuses to encode.

Stop using x265. It's still woefully incomplete. It has major issues, such as shifting the colour of things between frames, and it's generally slow.

I spent a lot of time researching the differences between encoding techniques as part of my job, and the best explanation I can give you is this quote:
"So why does a AVC/H.264 10-bit encoder perform better than 8-bit?
When encoding with the 10-bit tool, the compression process is performed with at least 10-bit
accuracy compared to only 8-bit otherwise. So there is less truncation errors, especially in the motion
compensation stage, increasing the efficiency of compression tools. As a consequence, there is less
need to quantize to achieve a given bit-rate.
The net result is a better quality for the same bit-rate or conversely less bit-rate for the same quality:
between 5% and 20% on typical sources."

In other words, when you have to truncate to 8 bits you get a lot of rounding errors. Rounding errors behave like noise, so they hurt the compression ratio and picture quality. When you work at 10 bits you get smaller rounding errors (i.e. less of that noise), and the later stages of the encoder can do a better job with the material they're handed.
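Here's a rough simulation of that effect (NumPy, my own toy model, not a real codec): run the same chain of blend/filter steps twice, once rounding every intermediate result to 8-bit steps and once to 10-bit steps, and compare the drift from a full-precision reference.

import numpy as np

rng = np.random.default_rng(1)
frame = rng.uniform(0, 255, size=10_000)   # one "frame" of 8-bit-range samples

ref = frame.copy()                 # full-precision reference
pix8 = np.round(frame)             # intermediate values kept in steps of 1.0
pix10 = np.round(frame * 4) / 4    # intermediate values kept in steps of 0.25

# 20 passes of a crude motion-compensation-like blend with a shifted copy.
for _ in range(20):
    ref = 0.5 * ref + 0.5 * np.roll(ref, 1)
    pix8 = np.round(0.5 * pix8 + 0.5 * np.roll(pix8, 1))
    pix10 = np.round((0.5 * pix10 + 0.5 * np.roll(pix10, 1)) * 4) / 4

print("accumulated drift, 8-bit pipeline :", np.abs(pix8 - ref).mean())
print("accumulated drift, 10-bit pipeline:", np.abs(pix10 - ref).mean())

The 8-bit pipeline drifts several times further from the reference, and that drift is exactly the noise-like error the quote says the quantizer then has to spend bits on.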

Try compressing a video of TV static in 8-bit H.264 and you will see that it's nearly unwatchable at any bitrate.
Noise kills compression because it's not predictable, and a lot of video compression is about prediction.

Source:
x264.nl/x264/10bit_02-ateme-why_does_10bit_save_bandwidth.pdf
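You can see the same thing with a generic compressor, no video codec needed (quick toy example): unpredictable noise barely shrinks at all, while a predictable ramp collapses to almost nothing.

import os
import zlib

noise = os.urandom(100_000)                               # pure noise
ramp = bytes((i // 400) & 0xFF for i in range(100_000))   # slow, predictable ramp

print("noise:", len(zlib.compress(noise, 9)), "bytes")    # ~100 KB, i.e. no gain
print("ramp: ", len(zlib.compress(ramp, 9)), "bytes")     # a few hundred bytes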

Use the latest x265 encoder with StaxRip.

builds.x265.eu

Someone explain to me why x264 10-bit compresses better than x264 8-bit, but with x265 that isn't necessarily the case.

But the JPG is only 8-bit and it can still show the image on the left???

>Someone explain to me why x264 10-bit compresses better than x264 8-bit, but with x265 that isn't necessarily the case.
Actually, it's pretty much the same case. A 10-bit encoder means better accuracy and fewer artifacts.

The point of the image is to simulate what you would see if you used a 10-bit video encoder vs. an 8-bit one.

In fact, you don't even need a 10-bit monitor to get the benefit of a 10-bit encode. 10-bit video dithered down to 8 bits on an 8-bit display will still show significantly less banding and fewer artifacts.