STOP. If the source video is not 10-bit, do not encode it as 10-bit. Contrary to internet rumors, you gain nothing by encoding 8-bit video into 10-bit. It's like encoding an 8-bit MP3 as a 16-bit MP3: you gain NOTHING. It doesn't smooth out gradients, and it doesn't give you better compression. If the gradients are fucked, then either they're fucked at the source or your monitor is not calibrated. If they're fucked at the source, they'll still be fucked at 10-bit. If you dither them you're actually harming the video quality by adding noise. 10-bit does not compress better. If you're going to quote that brain-dead first result on Google (x264.nl/x264/10bit_02-ateme-why_does_10bit_save_bandwidth.pdf), please note they're comparing MPEG-2 to H.264, not H.264 8-bit to H.264 10-bit.
Please, please, please stop being stupid and releasing badly encoded videos.
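If anyone wants to check it instead of taking my word for it, here is a rough numpy sketch (a toy I threw together, nothing like a real encoder): it builds a banded 8-bit gradient, "converts" it to 10-bit the only way an 8-bit source can be converted, and counts the distinct levels.
```python
import numpy as np

# Quick sanity check (a toy, not a real encoder): build a banded 8-bit gradient,
# "convert" it to 10-bit the only way an 8-bit source can be converted (scale by 4),
# and count the distinct levels.
ramp8 = np.repeat(np.arange(0, 256, 64, dtype=np.uint8), 64)   # 8-bit gradient with 4 flat bands
ramp10 = ramp8.astype(np.uint16) * 4                            # the same source stored as 10-bit

print(np.unique(ramp8))    # [  0  64 128 192] -> 4 distinct levels
print(np.unique(ramp10))   # [  0 256 512 768] -> still 4 distinct levels, just relabelled
```
Same four bands either way; the extra two bits just relabel them.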
That's it. Truth is, you can't summon him any more. He summons himself. He got too powerful.
Liam Brown
It's shit. AV1 can't come soon enough.
Jace Gray
Behead the unbelievers 10-bit is real
Landon Reyes
HEVC 10bit x265 4:4:4 is the way of the future.
Joshua Miller
Shut up nerd. Here's my dongle stick. Give me a copy of Naruto or I'll give you a wedgie.
Juan Garcia
based Mongol
Gabriel Rivera
this
Robert Lopez
>stop being stupid
Practice what you preach.
Jayden Sanchez
Hopefully H.264 10-bit will die soon, but the damage is done. So many anime releases that can't be decoded by any smart TV until more powerful chips come out that can handle them in software.
Carter King
>If the source video is not 10-bit, do not encode it as 10-bit.
ITT, let's summon DAIZ:
DAAAAAIIIIZZZZZZZ YOU MOTHERFUCKER I HAVE TERABYTES OF ANIME THATS NOW USELESS THANKS TO YOU!
Joseph Roberts
It doesn't matter whether they're powerful enough for software decoding; it's a question of whether it will ever be supported at all.
Jacob Green
It's not a matter of power nowadays, but of manufacturers shipping the appropriate decoding software on their TVs.
Caleb Robinson
DAAAAAAAAAAAAIIIIIIZZZZZZZZZZZZZZZZZZZZ
Luis Green
Yeah, that's just what I said.
Also, DAAAAAAAIZZZZZZ
Ayden Adams
h264 10bit anime was a mistake
Blake Sanchez
>10-bit
>not 16-bit
It's like you enjoy looking at floating squares.
Christopher Gomez
>You can't read
>They're comparing H.264 8 bit to H.264 10 bit
From the company's website: "Pierre commented: 'With our 10-bit 4:2:2 solution, HD content can be compressed to just 30Mbps – compared to 50-60Mbps with current MPEG-2 solutions – offering 2 HD streams over DVB-S2 link rather than just one, and with improved quality. Ironically 10-bit 4:2:2 solution when applied to H.264 offers greater bandwidth efficiency than 8 bits.'" They're comparing H.264 to MPEG-2.
What's even more laughable is that everyone who supports 10-bit compression of 8-bit sources either quotes an advertisement published by a company that sells 10-bit encoders (this PDF) or a guy who compressed MPEG-2, dithered it, and found that the HEVC file is smaller than the MPEG-2 one (the second Google result).
Jose Cooper
That's great and all, but it doesn't improve the video quality.
James Sanchez
The guy who designed the Raspberry Pi GPU totally shit all over 10bit h264, and that's all I need to know.
Angel Hernandez
Android TV has players capable of decoding it in software, but the hardware just can't handle it yet.
Andrew Wilson
>underpowered gpu
>hurr why can't it decode 10bit
I can't believe it's 2018 and people still care about garbage like DXVA2 hardware decoding when making "good" encodes. Next thing you know people will be following the Blu-ray encode spec that limits the shit out of H.264 even though the encode isn't even meant for a disc.
Christopher Reed
Why can't my DVD player play UHD discs?
Juan Anderson
Because you're a poorfag who still has a dvd player in 2k+18.
Jack Williams
>Most monitors are 8-bit
>Hurr durr, let's encode everything in 10-bit
Just do 8-bit and enable debanding in your media player.
Anthony Jenkins
Newer Snapdragon chips can decode 10-bit HEVC, just not 10-bit H.264, which is kind of weird but whatever.
Most software, however, doesn't support it. MX Player's HW+ mode can leverage hardware decoding of 10-bit HEVC.
Oliver Long
10-bit is bigger than 8-bit, you dunce. Of course it's better. Once again OP is wrong.
Daniel Rivera
Videos are 24 bit. Duh. Maybe in the 90s there were some 8 bit.
Juan Lopez
Smaller bitrates mean less placebo effect.
Michael Green
It's 10 bits per component (RGB), so 30-bit total.
16 bits per component would be 48-bit.
Gavin Harris
>If they're fucked at the source, they'll still be fucked at 10-bit.
>what is debanding
Dominic Miller
You're fucking dumb. The reason for using 10-bit over 8-bit video encoding is that most of the time encoding is done to reduce file size (i.e. Blu-ray mux to YIFY-sized rip).
Let's say we're encoding an 8-bit H.264 30Mbps 1080p Blu-ray mux to a ~2GB 1080p HEVC YIFY-sized rip. A 10-bit encoder has 1024 levels of precision per channel, while an 8-bit encoder has only 256.
Lossy video encoding is not perfect; errors are made at low bitrates, which results in heavy artifacts and color banding. That's the trade-off we make for smaller file sizes, but a significant portion of these errors can be corrected when you give the encoder more values to choose from.
Let's express colors as percentages; encoding would look something like this:
>showing the effects of 10 bit colors with a png image
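And here is a toy numpy sketch of the rounding part (my own simplification, nothing like real x264/x265 internals): predict a pixel as the average of two 8-bit neighbours, then store the prediction at 8-bit versus 10-bit precision.
```python
import numpy as np

# Toy model of the rounding argument (NOT real encoder internals): a half-sample
# prediction averages two 8-bit pixels, then gets stored at the pipeline's bit depth.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=10_000).astype(np.float64)   # 8-bit pixels
b = rng.integers(0, 256, size=10_000).astype(np.float64)   # their neighbours

exact = (a + b) / 2                                  # ideal prediction (can land on x.5)
pred_8bit = np.round(exact)                          # rounded to 8-bit steps
pred_10bit = np.round(exact * 4) / 4                 # rounded to 10-bit steps, shown in 8-bit units

print(np.abs(pred_8bit - exact).mean())              # ~0.25: half the predictions land on .5 and get rounded
print(np.abs(pred_10bit - exact).mean())             # 0.0: the half-steps are representable in 10-bit
```
That is all "more values to choose from" means: the intermediate math rounds less.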
Joseph Long
Well, yeah. 10-bit HEVC can be decoded in hardware; it's standard.
Bentley Diaz
Bat with golden ears here listening to 24-bit 96 kHz FLAC vinyl rips and watching 10 bit encodes of 8 bit sources feeling incredibly smug 24/7 :)) AMA
Cameron Carter
>what is lossy encoder color precision
stupid little shit
Ryan Cruz
His argument was that it's doable to make even the Raspberry Pi do 10-bit H.264 hardware decoding, alongside the standard H.264 decoding it already does. But it's horribly inefficient, and one of the worst forms of H.264 to decode in hardware. This is why nobody bothers to do it, and nobody except Chinese cartoon subbers adopted the format, while everyone else skipped straight to H.265.
>No, the Raspi does not support 10 bit colour depth. Mainly because it's a waste of time - there are no perceptible advantages to it, and it requires HW that runs twice as fast for just a 2 bit increase in depth.
>And they are all wrong. It's a waste of time. They come under the same heading as people who buy Monster Audio cables - no perceptible difference.
>Bitrate, fps and other quality settings massively override any colour depth 'improvements'. As does the quality and bit depth of the device you are viewing on.
>Hi10P will never be supported. For one, hardly anyone uses it, and it requires double the bitrate of normal H264 (because 10 bits needs to be bumped up to 16 to get it on a byte boundary to make hardware decode efficient). I really don't know why people use 10bit, they seem to be under the misconception it provides a better viewing experience - up the bit rate instead, much more effective. It's not supported on any TVs, Blu-ray doesn't feel the need to use it, no broadcast HD uses it.
>inb4 hurrdurr raspberry pi
I'll take the word of an actual GPU engineer over anyone from Sup Forums.
Caleb Adams
Oh hey literal retard.
Jeremiah Foster
holy shit I had no idea pi users were such mongs
Dylan Cook
>8 bit source
>10 bit encode
Carter Cruz
I think the biggest point you're all missing in this thread is where 10-bit encoding is actually beneficial. At high bitrates you won't be able to tell 10-bit from 8-bit video. However, low-bitrate rips will benefit from 10-bit because the encoder is more color "accurate", which will reduce some artifacts and significantly reduce color banding.
Yes, even if the source is 8-bit you will still get better video quality at the same bitrate, BUT only if you're encoding from a high-bitrate 8-bit encode to a low-bitrate 10-bit encode.
Jose Price
Yet hardware HEVC decoding supports 10/12-bit, so by the same logic why support anything other than 8-bit if it's at a byte boundary? HEVC saw the benefit of additional bits for more accurate color preservation at the same bandwidth (even Blu-ray has bitrate limits compared to master files), even though 10-bit was already available in H.264. I guarantee you that 10+ bit HEVC hardware decode is much more taxing than a 10-bit H.264 hardware decode implementation would have been.
Robert Howard
Daily reminder that the ultimate experience is a 4K file on a 1080p panel. Only patricians know why this is the case.
Kayden Evans
This smart pupper. He knows you can't add 2 extra bits that aren't in the source, but at least the quality won't degrade, which is what accounts for people's perception of improved graphics.
Charles Peterson
Then why do 10-bit rips have less color banding than 8-bit rips?
Jordan Jones
That only matters IF THE SOURCE IS 10-BIT.
Encoding an 8-bit source as 10-bit won't magically make the gradients smoother, you absolute buttfuck.
Connor Brown
What is a debanding filter? [spoiler]Hint: it's something used in every half-way decent anime encode[/spoiler]
Jayden Diaz
>That only matters IF THE SOURCE IS 10-BIT.
nope
>Encoding an 8-bit source as 10-bit won't magically make the gradients smoother, you absolute buttfuck.
I never claimed that, you cum drizzle. When encoding a high-bitrate 8-bit source to a low-bitrate rip you have 2 options: an 8-bit or a 10-bit rip.
When you use the 8-bit encoder you will have higher rounding errors (i.e. less accurate colors, motion estimation) because the lossy encoder can only choose from 256 values per color channel. Since the encoder is literally guesstimating the color to reduce file size, this means it will almost always choose a value higher or lower than it was supposed to.
When you use a 10-bit encoder you will have lower rounding errors (i.e. more accurate colors, motion estimation) because the lossy encoder can now suddenly choose from 1024 values per color channel. Since the encoder is again guesstimating the color to reduce file size, this means it will choose a color closer to the original one.
HOWEVER, this means jack shit when you do high-bitrate encodes from high-bitrate 8-bit sources; in that case the 8-bit video will literally look the same. But that's not how the real world fucking works: it's always a high-bitrate 8-bit source being encoded into a low-bitrate 8/10-bit rip.
Tyler Baker
They don't. The image to the left is a 16-bit image of an 8-bit gradient (I recommend playing with your display's brightness until you properly see the four bands). As long as the source is 8-bit, no matter how many bits you add, the result will remain banded in the same intervals. Encoding it as 16-bit doesn't suddenly add 65280 bands to a full gradient, or in this case, split every band into 256 bands.
Noah Ramirez
>When you use a 10-bit encoder you will have lower rounding errors (i.e. more accurate colors, motion estimation) because the lossy encoder can now suddenly choose from 1024 values per color channel.
No, you dunce, because the source is 8-bit, so there are only 256 colors in the source and there are no intermediate colors to round to.
Henry Anderson
Except they do. Why the fuck are you spamming a PNG image to compare LOSSY VIDEO ENCODING?
Why not get a Blu-ray mux and encode it to 1 Mbps with an 8-bit and a 10-bit x265 encoder, you stubborn piece of shit?
Hunter White
It's like upscaling with interpolation and then saying "just look how smooth the pixels are now".
Dominic Cooper
>What is a debanding filter?
>[spoiler]Hint: it's something used in every half-way decent anime encode[/spoiler]
A debanding filter simply adds noise to the video to mask the banding. It only makes the video grainier. If you have any source video available, I highly recommend you compare it to a "debanding filter" encoded video and see that the "debanded" version just has noise (grain) all over it.
Dithering and filters can't fix the source.
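If you want to see what I mean, here is a crude numpy stand-in for a debanding filter (my own toy, not f3kdb or any real plugin): smooth the banded region, sprinkle grain on top, and count how many pixels no longer match the source.
```python
import numpy as np

# Crude stand-in for a "debanding filter" (a toy, NOT f3kdb or any real plugin):
# smooth the banded area, then add grain to hide what's left of the steps.
rng = np.random.default_rng(1)

banded = np.repeat(np.arange(100.0, 108.0), 32)                  # 8 flat bands, one code value apart
smoothed = np.convolve(banded, np.ones(17) / 17, mode="same")    # local averaging blurs the steps
debanded = np.clip(np.round(smoothed + rng.normal(0.0, 1.0, banded.shape)), 0, 255)

core = slice(17, -17)                                            # ignore the convolution edges
print("pixels that no longer match the source:", (debanded[core] != banded[core]).mean())
print("stddev of what got added:", (debanded[core] - banded[core]).std().round(2))
```
The banding gets masked, but every changed pixel is noise that was never in the source, which is exactly my point.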
Angel Baker
You can't read.
>With our 10-bit 4:2:2 solution, HD content can be compressed to just 30Mbps – compared to 50-60Mbps with current MPEG-2 solutions – offering 2 HD streams over DVB-S2 link rather than just one, and with improved quality
This is comparing 10-bit H.264 to MPEG-2.
>Ironically 10-bit 4:2:2 solution when applied to H.264 offers greater bandwidth efficiency than 8 bits.
This is comparing 10-bit H.264 to 8-bit H.264.
Luis Barnes
i don't know
Ian Rivera
Listen, when color dithering is performed on an 8-bit display from a 10-bit source you will literally get a closer representation of the original color, because the encoder had higher color precision.
If the original red color value in a pixel was, say, 200, 8-bit low-bitrate encoding would guesstimate it as something like 197 or 205. BUT when 10-bit low-bitrate encoding is used that color gets turned into a 10-bit value of ~795-805, which when dithered back to 8-bit on the display results in a value of 199-201, THUS you now have less color banding.
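The arithmetic, if anyone wants to check it (the ranges are my own made-up illustration of encoder error, not numbers measured from any real encoder):
```python
# Just the arithmetic from the post above. The +/- ranges are made-up illustrations
# of encoder error, not numbers measured from any real encoder.
true_value = 200
guess_8bit = [197, 205]                 # the claimed error of a starved 8-bit encode
guess_10bit = range(795, 806)           # the claimed error band around 200 * 4 = 800

back_on_display = sorted({round(v / 4) for v in guess_10bit})
print(back_on_display)                               # [199, 200, 201] -> within 1 of the original
print([abs(v - true_value) for v in guess_8bit])     # [3, 5] -> the 8-bit guesses are further off
```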
Daniel Wood
>Why not get a Blu-ray mux and encode it to 1 Mbps with an 8-bit and a 10-bit x265 encoder, you stubborn piece of shit?
I have, actually. With the same settings (except bit depth) you get exactly the same video quality, since the algorithm doesn't magically change from 8-bit to 10-bit. If it's the same algorithm it will work exactly the same on the same input.
Zachary Ramirez
>on an 8-bit display from a 10-bit source
I am talking about the opposite, dummy.
Dithering from high bit-depth to low bit-depth: good, preserves information.
Dithering from low bit-depth to high bit-depth: bad, removes information (by adding noise)
Pic related is an 8-bit image reduced to 1-bit, dithered. The dither preserves information. However, converting a 1-bit image to 8-bit and adding dither only removes information.
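Here is the same idea as a quick numpy toy (random dither instead of proper error diffusion, but the point survives): take a ramp down to 1-bit with and without dither, then look at what the local averages keep.
```python
import numpy as np

# Toy version of the pic: a smooth ramp quantised to 1-bit with and without dither
# (random dither here instead of proper error diffusion, but the point survives).
rng = np.random.default_rng(2)
ramp = np.linspace(0, 255, 4096)

hard_1bit = np.where(ramp >= 128, 255.0, 0.0)                               # plain threshold
dith_1bit = np.where(ramp >= rng.uniform(0, 255, ramp.shape), 255.0, 0.0)   # dithered threshold

def block_means(x):
    # what your eye (or a later encoder) effectively averages over a small area
    return x.reshape(-1, 256).mean(axis=1)

print(np.abs(block_means(hard_1bit) - block_means(ramp)).mean())   # ~64: the ramp is gone
print(np.abs(block_means(dith_1bit) - block_means(ramp)).mean())   # a few code values: the ramp survives
# Going the other way (1-bit "up" to 8-bit plus added dither) can't bring the ramp
# back; it only piles noise on top of what's already lost.
```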
Liam Nguyen
>I have, actually. With the same settings (except bit depth)
Then why are you even fucking talking to me, you absolute clown? Way to shit up this thread with info of use to no one.
Ayden Carter
why i need all that shit in my primary colors only chinese cartoon?
Hunter Clark
Because people like you keep spreading disinformation that 10-bit improves the image quality of 8-bit sources. You're wrong, and you need to stop spreading your wrongness, cockmongler.
Brayden Gomez
>Dithering from low bit-depth to high bit-depth: bad, removes information (by adding noise)
But it doesn't add enough noise to matter when you convert an 8-bit source to 10-bit. The whole point is making the lossy encoder more precise, so that in the end it gives you a closer representation of the original color.
Higher encoder precision, that's literally what this is all about.
Noah Butler
Isn't the internet great. It allows shitheads like yourself to say shit that would, in real life, get your head cracked open. Hopefully one day you run your mouth like this to the wrong person, fucking cunt.
Jeremiah Gomez
Absolutely.
>It's like encoding an 8-bit MP3 as a 16-bit MP3
Everyone knows you can only improve quality by encoding to FLAC.
Blake Rivera
>If the original red color value in a pixel was, say, 200, 8-bit low-bitrate encoding would guesstimate it as something like 197 or 205. BUT when 10-bit low-bitrate encoding is used that color gets turned into a 10-bit value of ~795-805, which when dithered back to 8-bit on the display results in a value of 199-201, THUS you now have less color banding.
Are you dense? That's not how it works.
If the source info is [0, 63, 127, 255] and you encode it, losslessly or not, no matter how much rounding you do, your best (lossless) output is [0, 63, 127, 255]. If you multiply these numbers by 4 to make them 10-bit you get [0, 252, 508, 1020], which is exactly as accurate as the original. If your bitrate is limited and you can only estimate the colors, then both the 8-bit estimate and the 10-bit estimate will be off by the same amount, because you're using the same algorithm. The algorithm doesn't get any more accurate at 10-bit if the source is 8-bit. This is why that idiotic PDF is comparing MPEG-2 to H.264: obviously H.264 is more accurate at lower bitrates than MPEG-2, but that's because it's a different algorithm.
If you compare the same algorithm at different bit depths, it will produce exactly the same output. If that were the end of it I would be fine with 10-bit encodes of 8-bit content, but as genius #2 here shows, if you're dumb enough to encode 8-bit content at 10-bit depth, you're also dumb enough to run filters and dithers on it that reduce the video quality.
Caleb Stewart
>you convert an 8-bit source to 10-bit
You can't convert an 8-bit source to a 10-bit source. An 8-bit source is forever 8-bit, like an 8-bit-depth MP3 is forever 8-bit-depth even if you encode it as a 16-bit-depth MP3.
Nathan Rivera
>Isn't the internet great. It allows shitheads like yourself to say shit that would, in real life, get your head cracked open.
It's great and it's a problem, because it allows people like you and everyone else who encodes 8-bit sources in 10 bits to keep spreading their bullshittery with no repercussions.
Ryan Hughes
Except it does: having more values to choose from means lower rounding errors. This is why supercomputers do FP64 ops for complex math instead of the FP32 ops that GPUs can do quickly.
>multiply by 4
Gosh, that was hard...
Isaiah Phillips
Rather than posting screenshots, use some video files. Here is all you need to make the comparison (requires AviSynth to be installed): my.mixtape.moe/yhkrsl.zip
With this, I converted 180 duplicate PNG image files into a 10-second still-image clip in lossless 8-bit x264, then converted that to x265 at 8 kb/s in 8-bit and 12-bit. I left the sample clips in the archive, have fun.
Asher Hill
>Except it does: having more values to choose from means lower rounding errors
No, you fucking dope, you fail at basic math.
You have 4 bytes per second to represent this 1-second 8-bit video: [0, 63, 127, 255] (each number is a frame). Ignoring the container and only looking at the frame data:
00000000 00111111 01111111 11111111
Now you have 4 bytes per second to represent this 1-second 10-bit video: [0, 252, 508, 1020]. Again, ignoring the container and only looking at the data: 0000000000 0011111100 0111111100 1111111100
But that's more than 4 bytes, so our smart algorithm keeps the 8 most significant bits of each value and gets: 00000000 00111111 01111111 11111111
Which is exactly the same as the 8-bit encoder. Now, if you had only 3 bytes, you'd get, with either encoder: 000000 001111 011111 111111
And so on. Rounding DOES NOT GET ANY MORE ACCURATE WITH HIGHER BIT DEPTH. Rounding gets more accurate with a higher bit RATE.
I don't know how you got it in your head that this explanation is correct. The "guesstimate" part is completely wrong.
I like ducks_take_off. Feel free to encode it with the same algorithm, simply changing the bit depth from 8 to 10.
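And since walls of binary are hard to read, here is the same toy bit-budget example as code. It is just the MSB truncation from above, nothing resembling real rate control, and the helper name is mine.
```python
# The same toy bit-budget example as above, as code. This is just MSB truncation,
# nothing resembling real rate control.
frames_8bit = [0, 63, 127, 255]
frames_10bit = [v * 4 for v in frames_8bit]        # the 8-bit source "converted" to 10-bit

def crammed(values, in_bits, out_bits):
    # keep only the out_bits most significant bits of each sample
    return [v >> (in_bits - out_bits) for v in values]

print(crammed(frames_8bit, 8, 8), crammed(frames_10bit, 10, 8))   # identical with 4 bytes
print(crammed(frames_8bit, 8, 6), crammed(frames_10bit, 10, 6))   # identical with 3 bytes
```
Same output with 4 bytes, same output with 3 bytes.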
Nicholas Jones
>600+ MB
I'm on capped internet, I can't download any of those. Why 10-bit when x265 already supports 12-bit, though?
Benjamin Hernandez
Because anime encoders love encoding 8-bit sources in 10-bit even though at best it adds nothing and at worst they completely fuck up the encode with filters and dithers.
Joshua Hughes
It's lossy encoding, so you end up averaging across a block. Source dither (the appearance of a smooth gradient) gets reduced more and more to the closest solid 8-bit value across the whole block, introducing obvious banding at the transitions from a source that had none.
Owen Hill
Again, this has to do with bit RATE, not bit DEPTH. Let's take the same data and pretend it's a single block: [0, 63, 127, 255]. Let's say our bitrate is 1 byte (8 bits) per block. The average is 111.25, and of course our 8-bit encoder crams it into 1 byte as: 01101111.
Now let's take a look at our 8-bit-source-converted-to-10-bit. We still have the same bitRATE, 1 byte per block. The average of our 10-bit block [0, 252, 508, 1020] is 445. Our 10-bit encoder crams 0110111101 into 1 byte as... wait for it...: 01101111 ...which is EXACTLY THE SAME SINCE THE SOURCE WAS 8-BIT.
Josiah Nguyen
"But what if it had more than one byte?" You ask. In that case, instead of averaging four pixels into one, it would maybe average two and two and split the data between them, so the 8-bit and 10-bit would still produce the exact same numbers. We keep adding more and more bitRATE untill we get to lossless compression and then it's trivial to see that an 8-bit source encoded as a 10-bit output will have NO BENEFIT at all.
Dominic Ortiz
Take five 8x8 blocks. The first is solid 255. In the second, every 4th pixel has been replaced by 254. In the third, every 2nd pixel is 254. In the fourth, 3 out of 4 pixels are 254. The fifth is solid 254. You have one value with which to describe each block. Go.
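The block averages, if you don't want to do them by hand (toy arithmetic only, not an encoder):
```python
import numpy as np

# The five blocks described above; for the averages it doesn't matter where the 254s sit.
blocks = []
for n_254 in (0, 16, 32, 48, 64):          # how many of the 64 pixels are 254
    b = np.full(64, 255.0)
    b[:n_254] = 254
    blocks.append(b)

for b in blocks:
    m = b.mean()
    print(m, round(m), round(m * 4))
# One 8-bit value per block snaps to 255 or 254; the same value kept at 10-bit
# precision (x4) lands on 1020, 1019, 1018, 1017, 1016.
```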
Dylan Flores
And of course, encoders don't compress raw pixel data; they compress a quantised transform of the pixel data (and other data such as motion estimates), but the principle is exactly the same.
Anthony Walker
>You have one value with which to describe each block. Go.
You didn't specify the bitrate or the algorithm or anything:
Spoiler: for the same algorithm with the same bitrate, the bit depth doesn't matter as long as it's equal to or higher than the original.
Andrew Ramirez
HEVC is supported by the latest computer GPUs and CPUs (Intel's, anyway) and has been standard in mobile SoCs for so long that it's a given it's supported by a phone or a tablet.
I have two devices that don't: a Huawei Y360 4" phone from 2015 and an AMD Athlon 5350 HTPC. Everything else, and everything sold today, has HEVC hardware decoding.
This is why it matters.
...but my VP9? VC1? Don't care, come back when there's hardware support. There are some mobile SoCs with VP9 now, but it's not standard. The stupid part is that desktop GPUs don't support VP9 hardware decoding. That's odd; supporting it seems like the obvious choice.
Levi Phillips
>Our 10-bit encoder crams 0110111101 into 1 byte
You didn't account for the passive lossless compression stage though, won't that make a difference? For example, for every 10 strings of 10 bits you have 20 extra bits compared to an 8-bit encoder; several instances of this make it possible for the encoder to retain more information, using compression that works across several runs of 10-bit strings. This of course works well only when you're reducing the bitrate significantly, which is the reason we had the 2011 mini-MKV revolution that made 10-bit the standard, in the anime scene at least.
Jackson Roberts
>the bit depth doesn't matter
Yet here you are, stuck with the fact that you can't compress those intermediate blocks above to anything but 255 or 254 in 8-bit, while 10-bit has 1019, 1018, and 1017.
Jason Rogers
>you gain nothing by encoding 8-bit video into 10-bit
That's where you're wrong, kid. Not only does the file size get significantly smaller at the same CRF, but you can also apply dithering to the video in VapourSynth at up to 16-bit precision. You also get less banding.
Lincoln Jackson
>using discs
>"poorfag"
lol, what kind of stupidity is this. Never had a DVD player, don't have a Blu-ray player, and I never will.
Do feel free to also call people who don't have a fax machine "poorfags".
Luis Morgan
>You didn't account for the passive lossless compression stage though, won't that make a difference?
See the example here. Into 3 bytes: 000000 001111 011111 111111
Banding, if it doesn't exist in the source, depends on the bit RATE. If it DOES exist in the source, then you cannot remove it by increasing the bit depth or rate.
>Yet here you are, stuck with the fact that you can't compress those intermediate blocks above to anything but 255 or 254 in 8-bit, while 10-bit has 1019, 1018, and 1017.
Not if the source is 8-bit, you stubborn mule. Let's explore an example like yours. We have a very convenient [252, 253, 254, 255] block that needs to be compressed into 1 byte. You convert it to 10-bit for some reason and get a block that is [1008, 1012, 1016, 1020]. That averages to 1014, which compresses to 1 byte as 11111101. Incidentally, the 8-bit average compresses to... 11111101. "A-ha, but what if you had MORE bytes?" Then I would not average four pixels into one; I would average the four pixels into two, or three, or four values, and... surprise, I would always get the same result if I use the same algorithm, no matter the bit depth. Because if the bit depth is capped at 8 from the very beginning (because the source is 8-bit), I cannot gain any information by converting it to 10-bit.
I really encourage you to use the same algorithm with the same bit rate to encode an 8-bit source as an 8-bit output and a 10-bit output. Maybe you'll have an "a-ha" moment.
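Here is that block run both ways as a tiny toy, if it helps: the same average-then-truncate step, with only the working precision changed. Obviously nothing like a real encoder.
```python
# The [252, 253, 254, 255] block from above, run both ways: the same
# average-then-truncate step, differing only in working precision. Toy arithmetic only.
block = [252, 253, 254, 255]
avg_8 = sum(block) // len(block)                     # 253 (truncating, as in the post)
avg_10 = sum(v * 4 for v in block) // len(block)     # 1014
print(format(avg_8, "08b"), format(avg_10 >> 2, "08b"))   # 11111101 11111101 -> the same byte
```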
>apply dithering
Get lost. Dithering up to a higher bit depth is NOISE. It REDUCES the signal-to-noise ratio, making the video quality WORSE.
Jordan Garcia
>changing it from dither to a random assortment that just happens to average at an 8 bit level.
Dither is the approximation of higher bit depth. Guess what you didn't do? Dither. Now try again with the example as asked, where an 8-bit source is approximating a 10-bit gradient. Guess what we can do in 10-bit? Describe that gradient without pixel-level precision. If we try the same in 8-bit, guess what we get? Banding.
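Here is a toy numpy version of what I am describing (my own sketch; the "encoder" is just a block average, which no real encoder is): an 8-bit source that is a dithered approximation of a subtle 10-bit ramp, compressed to one value per block at 8-bit versus 10-bit output precision.
```python
import numpy as np

# Toy sketch: the 8-bit source is a dithered approximation of a subtle 10-bit ramp;
# "compress" it to one value per block, once at 8-bit and once at 10-bit output
# precision. The "encoder" here is just a block average.
rng = np.random.default_rng(3)
ramp10 = np.linspace(1016, 1020, 64 * 64)                        # the gradient being approximated
want = ramp10 / 4                                                 # i.e. between 254 and 255
source8 = 254 + (rng.uniform(size=want.shape) < (want - 254))     # the dithered 8-bit source pixels

means = source8.reshape(-1, 64).mean(axis=1)                      # one value per 64-pixel block
print(np.unique(np.round(means)))        # [254. 255.]          -> two bands
print(np.unique(np.round(means * 4)))    # roughly 1016...1020  -> the in-between steps survive
```
Within this toy at least, only the 10-bit output keeps the gradient.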
Andrew Robinson
You are wrong.
John Butler
It isn't on Android, though. I only know of Kodi and MX Player doing it, thanks to their own decoders, and you need a fairly new SoC.