>Well, other than the performance benefits mentioned above, the real difference is that lzip uses a "lossy" compression scheme. Most other file compression utilities use a "lossless" compression scheme, mostly because the lossless algorithms are better understood and simpler mathematically (most programmers take shortcuts, particularly in areas that involve a lot of math).
>This has two side effects. The first is that files compressed with lzip cannot be restored to their original state -- this is the "lossy" in lossy compression. The second is that the performance is vastly improved. Why don't you go back up to question number one and read that second paragraph again? We're talking about a constant-time algorithm that can reduce a file down to 0% of its original size. What's not to like?
>3. What do you mean I can't restore my files? Ha! A common misconception. You can restore your files after they have been compressed with lzip. They just won't be exactly the same as they were before. This makes sense when you think about it; if you lose a lot of weight suddenly, and then put the same weight back on suddenly, you wouldn't expect to be in exactly the same health that you were when you started, would you? Compression is a dramatic process, and dramatic processes often change people. It's no different for your files.
>On the reassuring side, it is important to note that the compression algorithm used by lzip only discards the unimportant data. And if it was unimportant before, what makes it so important now? Huh? In fact, many users may find that compressing their entire file system and then restoring it will be a good way to learn what is truly important.
>It utilizes a two-pass bit-sieve to first remove all unimportant data from the data set. Lzip implements this quite effectively by eliminating all of the 0's. It then sorts the remaining bits into increasing order, and begins searching for patterns. The number of passes in this search is set to (10-N) in lzip, where N is the numeric command-line argument we've been telling you about.
>For every pattern of length (10/N) found in the data set, the algorithm makes a mark in its hash table. By keeping the hash table small, we can reduce memory overhead. Lzip uses a two-entry hash table. The data in this table is then plotted in three dimensions, and a discrete cosine transform transforms it into frequency and amplitude data. This data is filtered for sounds that are beyond the range of the human ear, and the result is transformed back (via an indiscrete cosine) into the hash table, in random order.
>Take each pattern in the original data set, XOR it with the log of its entry in the new hash table, then shuffle each byte two positions to the left and you're done!
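For the morbidly curious, the first pass of the bit-sieve described above can be sketched in a few lines of Python. This is a toy reconstruction of the joke as stated (throw away the 0 bits, sort what's left), not anything you should run on files you care about; the function names are made up:

```python
def lzip_compress(data: bytes) -> bytes:
    # Pass 1 of the "bit-sieve": discard every 0 bit (the unimportant data).
    ones = sum(bin(b).count("1") for b in data)
    # Pass 2: sort the surviving bits into increasing order. They are all 1s,
    # so the sorted sequence is fully described by its length -- hence the
    # spectacular compression ratios.
    return ones.to_bytes((ones.bit_length() + 7) // 8 or 1, "big")

def lzip_decompress(blob: bytes) -> bytes:
    # "Restore" the file: the 1 bits all come back, just not where they were.
    ones = int.from_bytes(blob, "big")
    bits = ("1" * ones).ljust(-(-ones // 8) * 8, "0")
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

Note that a file of pure zeros compresses to a single byte, and round-tripping anything else gives you back a file that is, in the FAQ's sense, "not exactly the same as it was before."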
Adam Rogers
>lossy file compression top kek, no thanks.
Mason Perry
>lossy
Xavier Cook
>lossy compression. I don't get it.
Ian Nelson
Ironically the logo is in PNG format.
Samuel Garcia
...
Ethan Allen
I accidentally made a lossy compressor. It could compress Finnegans Wake to 10KiB. Decompression resulted in a file that was totally indistinguishable from the input.
Brody Cox
What would happen if I were to compress a lossy format, say, an mp3? Would the mp3 lose data twice as fast? Will I be left with mp3s with 1 b/s bit rate?
Ayden Garcia
Well, that sounds like an extremely efficient way to fuck a file's shit up.
Owen Hill
>not taking into consideration rotational velocidensity
Heh, scrubs. They should leave lossy compression to audiophiles.
>This software may be distributed, in whole or in part, in sickness or in health, for better or for worse, forsaking all others, for as long as we both shall live. Upon such occasion as deemed appropriate by the laws of this state and the Attorney General, or the General Attorney, or the King of Pop, you will be transferred to a correctional facility until such time as you will be put to death.
>Should the creators of this product fail to uphold their duties as spelled out in this contract, they shall forfeit the right to their crown, and the contestant ranked as First Alternate shall be named creator. Should no first alternate exist, the position will be determined by a two-thirds majority vote in the Senate. All disputes will be settled by force.
Jayden Kelly
And someone will definitely fall for the meme this time too.
Design Principles
>Achieve maximal, lossless compression ratios for test images.
>Maintain backward and forward compatible support for all forms of inferior image compression algorithms.
Analysis
>Applying the standard test of the effectiveness of an image compression algorithm (i.e. Lenna), LenPEG compresses the image into a file of minimal size: one bit. The compression ratio is a staggering 6,291,456 to one. Note also that this is completely lossless compression. This far surpasses compression ratios achieved by any other algorithm.
>On any other images (which is really moot, since we have already demonstrated LenPEG's efficiency on the accepted standard test for image compression), LenPEG is only a single bit worse than the best competing algorithm. This is a piddling difference in image files which are often megabytes in size, so is not even worth mentioning.
>The result is clear: LenPEG is by a huge margin the most efficient image compression algorithm ever invented, and cannot possibly be exceeded by any future algorithm.
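Taking the analysis at face value, the entire codec fits in a few lines. The digest below is a placeholder rather than the real Lenna hash, and `fallback` stands in for whatever "best competing algorithm" you fancy (the identity function by default):

```python
import hashlib

# Placeholder -- substitute the hash of the actual 512x512 Lenna test image.
LENNA_SHA256 = "0" * 64

def lenpeg_compress(image: bytes, fallback=bytes) -> bytes:
    # The accepted standard test image compresses to a single bit.
    if hashlib.sha256(image).hexdigest() == LENNA_SHA256:
        return b"\x00"  # one bit (rounded up to a byte for the filesystem's sake)
    # Everything else: a one-bit flag, then the best competing codec's output,
    # leaving LenPEG exactly one bit behind on images nobody benchmarks anyway.
    return b"\x01" + fallback(image)
```

Lossless by construction, and unbeatable on the only benchmark that matters.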
Landon King
how do we beat the pigeonhole principle to achieve real compression without collisions?
Ryan Perez
XZ my friends.
Hunter Roberts
Hey, check out my ultra-fast, high-performance lossy compression algorithm:
>mkdir
Cooper Sanchez
This sounds like a sophisticated way to troll people into basically ruining their files.
Camden Cook
Check out mine:
>sudo mv * /dev/null
Nicholas Martinez
what is humor
Asher Martinez
A good fart joke
Tyler Campbell
I like your style
William Williams
>not storing your data in pi so your data takes no space
filthy casuals
github.com/philipl/pifs