Can you make a file infinitely small by compressing it several times in winrar?
>can you make something disappear by squeezing it several times?
Maybe if your compression system was a black hole
You can compress a file of several gigabytes into only a single byte if you have the right decryption key
Yes.
Undoing the compression, no.
You can try, but a guy named Shannon will come to your house and club you in the head to stop you from squeezing below his entropy limit.
If you use a compression algorithm based on pi or e or another irrational number you can reduce any file to an incredibly small size. Literally just two pointers, one to where the file begins in pi/e/whatever and one to where it ends.
check out the hadron collider OP
is this what you're referring to?
>youtube.com
That's retarded. You have to demonstrate that the pointer will not be larger than the file itself.
Encryption/decryption does not change filesize.
Only if your key is as big as your file (on average)
Was sorta wondering something like this since I was trying to email myself a 37mb recording I took of an informal document as insurance, but hotmail only lets me email myself 5mb max or something. No point in really using mega since if I'm out of internet for 3 months they'll wipe everything from inactivity IIRC. I thought I read of a .rar that is small but contains petabytes of data, but compressing the video in 7-zip saved maybe 2mb if that. It's looking hopeless brehs.
A few years ago I saw a program with an incredible compression algorithm which could compress a few GB of software down to a couple hundred MB... It took a long time to decompress though, we're talking hours here... I think some Russian guy made it...
Anyone know what I'm talking about?
It depends on how information-dense your file is. If your file consists of just one trillion 1s, I could compress it by stating "one trillion 1s", and you can decompress it by generating one trillion 1s.
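That's run-length encoding in a nutshell. A minimal sketch in Python (my own toy illustration, not how any real archiver stores runs):
[code]
# Minimal run-length coder for the "one trillion 1s" idea above:
# store (count, symbol) pairs instead of the raw run.
from itertools import groupby

def rle_encode(s: str):
    return [(len(list(group)), ch) for ch, group in groupby(s)]

def rle_decode(pairs) -> str:
    return "".join(ch * count for count, ch in pairs)

print(rle_encode("1111111110000"))        # [(9, '1'), (4, '0')]
print(rle_decode([(9, "1"), (4, "0")]))   # 1111111110000
[/code]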
uharc?
what the fuck are you talking about
Why are you here
The first pass of compression already removes most of the redundancy. If there's no redundancy left to remove, compressing it again won't help; it just adds overhead in the form of another set of headers.
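Easy to verify yourself. A quick sketch with Python's zlib (exact sizes vary by input, but the pattern holds):
[code]
# The first pass removes the redundancy; a second pass has nothing left
# to work with and mostly just adds container overhead.
import os, zlib

random_data = os.urandom(1_000_000)              # incompressible from the start
once = zlib.compress(random_data)
twice = zlib.compress(once)
print(len(random_data), len(once), len(twice))   # the second pass comes out larger

text = b"hello world " * 100_000                 # highly redundant input
once = zlib.compress(text)
twice = zlib.compress(once)
print(len(text), len(once), len(twice))          # huge win once, then little to nothing
[/code]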
This should have been the first answer in the thread.
You should all feel ashamed for not knowing something as basic as this.
>the entire universe gets squeezed into a singularity
>can't fucking compress a file to shit
You retarded humans are dumb as shit.
Multi-part archives were invented for problems like this, back when the 1.44mb floppy was the height of removable disk technology.
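Still works fine for the 5mb mail cap mentioned earlier. A hedged sketch of a splitter/joiner in Python (the chunk size and part-naming scheme are just my assumptions):
[code]
# Split a file into numbered parts small enough to mail, then rejoin them.
import pathlib

CHUNK = 5 * 1024 * 1024   # assumed 5 MB attachment cap

def split(path: str) -> int:
    data = pathlib.Path(path).read_bytes()    # fine for files of a few dozen MB
    parts = 0
    for i in range(0, len(data), CHUNK):
        pathlib.Path(f"{path}.part{parts:03d}").write_bytes(data[i:i + CHUNK])
        parts += 1
    return parts

def join(path: str, parts: int) -> None:
    with open(path, "wb") as out:
        for i in range(parts):
            out.write(pathlib.Path(f"{path}.part{i:03d}").read_bytes())
[/code]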
Oh, wow! This will definitely be useful to me, thanks user.
think about what you said
That does not make sense if you know what a byte is
rm $yourfile
Compresses your file to literally zero bytes by sending it to a special memory section.
just use scientific notation nigga
google 42.zip
If you could get a compression standard adopted that compresses, say, 256 games into 256 one-byte words, then every computer that supported that standard would be able to recognize the word and decompress it to the correct game. Of course, this would imply that the computer would need to have every game stored somewhere to be able to support the compression standard, but that's beside the point.
What you described is just a library that's referenced by a one-byte index and needs the games themselves stored somewhere anyway.
yeah, but if the files are included within the compression program itself, they'd be part of the file compression protocol and it would still essentially be compression. I mean, it would definitely be possible to compress something several GBs large down to a single byte, but a lot of other files would have to be made larger by that program for it to be of any use, and then what's the point?
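In code the whole "standard" collapses to a table lookup, which is exactly why it isn't compression. Everything in this sketch (names, payloads) is hypothetical:
[code]
# The "256 games in one byte" scheme: the byte is just an index, and the
# actual data has to ship inside the codec itself.
GAME_LIBRARY = {0x00: b"<all of game A>", 0x01: b"<all of game B>"}  # hypothetical payloads

def compress(game: bytes) -> bytes:
    # fails for any input that isn't already in the library
    index = next(k for k, v in GAME_LIBRARY.items() if v == game)
    return bytes([index])

def decompress(code: bytes) -> bytes:
    return GAME_LIBRARY[code[0]]
[/code]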
Infinity is imaginary. It is an abstract concept. It is neither a natural nor a real number, therefore your proposition is false.
Then you have to keep a large expansion of pi around, possibly bigger than the original file
Nigga it's just 0s and 1s why can't we compress that shit better
I made a pi-based compression tool, so far I could theoretically compress anything up to ~2GB to 8 bytes, but even "compressing" 2 bytes to 8 bytes takes minutes to hours unless you're really lucky. I might port this to CUDA.
jej
>archive a jpeg
>it gets bigger
You're an idiot.
I know
>winrar = proprietary babies
>7zip = foss chads
7zip also gets far better compression ratios
That's not possible, you probably just read the pifs github readme and thought you knew how it worked.
b-b-b-buh muh steam Sup Forums script kitties group told me...
After the first compression, no matter how many times you try to compress it with the same algorithm, the size won't decrease (it may even increase)
you are either a retard or the most clever person to ever exist. we are talking 100x einstein
Even if infinity wasn't what you're describing it as, file size is a discrete value, and infinitely small is not applicable to it.
That's definitely false, depends on the algorithm. With LZSS, it is entirely possible to think of a buffer of data that can be compressed twice, each time resulting in smaller size.
He basically said: have a 1-byte key and a several-gigabyte file, but call the file the key and the key the file.
Encryption does not change file size...
Oh look a Jan Sloot wannabe
This reminds me of some compression algorithm that was supposed to change the world in 2001. They managed to compress a full rip of The Matrix (about 800mb) small enough to fit on a single 1.44mb floppy. Then some shit about it being stolen, and lawsuits, etc and it never happened. I once contacted the guys back in 06 and they claim to have actually *lost* it. Like "oops, i totally misplaced this thing that can compress almost 1gb into just over 1mb, but hey it's cool"... If it were me, I'd have had hundreds of backups stashed.
If I hadn't seen a demo of it with my own eyes, I'd have assumed it was all just snake oil bullshit, but this thing was fucking real. Now it's just gone. Sounds like CIA niggers to me.
First of all, yes, my implementation is based on pifs. I stole an implementation of the BBP algorithm with a simple function that returns a byte according to an index in the digits of pi. Here's how the entire thing works: the program takes a string and iterates over all the bytes of pi, trying to find a byte that matches the first byte of the string. If the string has a length of 1, the program is done. If it's longer, it checks if all the following bytes match as well. It starts checking for the start byte again if a byte doesn't match. It completes once all bytes match. The 8 bytes of compressed output data are 4 bytes for an int representing the start index and another 4 for an int representing the length of the input string.
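For anyone who wants to poke at the idea, here's a stripped-down sketch of that search loop in Python. It works on decimal digit strings instead of raw bytes and pulls its digits from mpmath rather than a BBP routine, so the interface is my invention, not the anon's actual code:
[code]
# Toy version of the pifs-style search described above.
from mpmath import mp

def pi_digits(n: int) -> str:
    """First n decimal digits of pi after the point."""
    mp.dps = n + 10               # guard digits so display rounding can't reach us
    return str(mp.pi)[2:2 + n]    # drop the leading "3."

def pi_compress(data: str, depth: int = 100_000):
    """Return (start_index, length) of data inside pi's digits, or None."""
    idx = pi_digits(depth).find(data)
    return (idx, len(data)) if idx >= 0 else None

def pi_decompress(start: int, length: int, depth: int = 100_000) -> str:
    return pi_digits(depth)[start:start + length]

print(pi_compress("592"))      # (3, 3): pi = 3.141[592]...
print(pi_decompress(3, 3))     # 592
[/code]
Even the toy makes the problem obvious: anything longer than a handful of digits needs an absurdly deep search, which is exactly the objection raised below.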
Shut the fuck up. You can't go around telling people about the "rotational memory" (rm) command. If people found out that GNU+Linux has a tool that uses the speed of rotational storage combined with the speed of volatile storage like RAM to compress files then the CIA will take away our GNU. You're putting saint IGNUtius in danger you fuck.
So your plan is to calculate pi (or e) with a lot of digits, then match your data as well as possible to some encoding of a digit sequence, then store the pointer to the start / end of the digit sequence as a compression tool?
Even if you could calculate pi really fast, you still have to store this somewhere, either on memory or on disk.
Obviously you can trash this data on the machine that unpacks it, but you are setting up huge requirements on the machine that compresses it.
And even then, you have no idea when the pattern you search for will be matched, so you can't even make a standard for the pointer size.
As for calculating pi, time it for your own system and see how long it takes.
Let's just take 10000 digits:
time echo "scale=10000; 4*a(1)" | bc -l -q
I hope your data chunk is within some of the first billion digits.
zpaq?
maybe they didn't have enough storage space to take backups
Oh fuck you. I laughed.
precomp + uharc or freearc. I used to compress games from GBs down to MBs when I was a kid, because I was a poorfag and couldn't afford more storage space. Oh man, those were great times.
>put my hard drives near a black hole's orbit to compress shitloads of doujinshi and hentai
yes it does
BBP doesn't need to store the previous digits to calculate the next one (it extracts hexadecimal digits of pi directly). Its time requirement goes up roughly linearly for each digit, but its memory requirement only goes up logarithmically, so the only real requirement to make this happen is a really fast processor, or multiple, since this process is easily parallelized. Volta pls
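That checks out for hexadecimal digits. A minimal Python rendition of the BBP digit-extraction trick (my own transcription of the published formula, not the anon's code; double-precision floats limit it to modest positions):
[code]
# Return the (n+1)-th hex digit of pi after the point without computing
# any of the preceding digits: time grows roughly linearly with n,
# memory stays near constant.
def bbp_pi_hex_digit(n: int) -> int:
    def series(j: int) -> float:
        # fractional part of sum over k of 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while True:                       # tail terms, where 16^(n-k) < 1
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                return s
            s = (s + term) % 1.0
            k += 1
    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)

print("".join("%x" % bbp_pi_hex_digit(i) for i in range(8)))  # 243f6a88
[/code]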
Explain.
yeah but the index might (most likely will) be more than 4 bytes, quite possibly even larger than the original file
It's impossible to find 2GB of specific data in 8 bytes of pi.
No but you can store all your data in the filenames and save disk space.
>thinking black holes disobey thermodynamics
are you retarded or illiterate?
>The 8 bytes of compressed output data are 4 bytes for an int representing the start index and another 4 for an int representing the length of the input string.
kek
Unfortunately the decryption key and necessary environment would have to equal the size of the intended file. Unless, of course, it was a file consisting of a single byte repeated billions of times.
Are YOU retarded? Do you honestly think you can find 2GB of data within a field of 4 bytes? I'd be surprised if you could find this post in UTF-8 with 4 bytes.
>within a field of 4 bytes
>4 bytes for an int representing the start index
>4 for an int representing the length of the input string
You're doing a good job convincing me you are illiterate. Or do you not know what the term "length" means?
Let x be the 4 byte decimal expression of the index.
Let y be the 4 byte decimal expression of the length.
x = 0
y = 2,147,483,647
Now tell me how much data is stored in this range, user?
Take 2,147,483,647 digits of pi and find for me this sentence in UTF-8. I'm waiting.
Good job immediately moving the goalposts once it was demonstrated that you're incapable of basic reading comprehension.
You can't, that's what I thought. Thanks, I think we're done here.
you can compress anything to 0 bytes with a lossy compression.
Truly impressive level of delusion. How does it feel being so insecure you can't admit when you were wrong even through the veil of anonymity?
>It's impossible to find 2GB of specific data in 8 bytes of pi.
Lmao.
>he still can't prove it (because it's impossible)
Just give up, kid.
To show how absurd it is to look for an [index, length] pair of a matching substring in pi or e or any other irrational expansion, consider a more convenient irrational number than pi or e: one that consists only of 1s and 0s in its (infinite) base 10 (or base 2, or base whatever) expansion, in which you can find any substring made of 0s and 1s.
Such a number can be constructed by concatenating every possible string of one character, two characters, three characters, etc.
So it could look like this binary irrational number (I put ` to separate substrings): 0.0`1`00`01`10`11`000`001`010`011`100`101`110`111`0000`... and so on.
As you can see, to encode a 3-bit number you need between log2(1*2^1 + 2*2^2 + 1) and log2(1*2^1 + 2*2^2 + 3*2^3) bits for the index, and at least log2(3) bits for the length. So you need at least ~3.46 + ~1.58 ≈ 5.04 bits to encode a 3-bit number. You could actually manage without the length in the case of this number, but that's still at least ~3.46 bits to encode 3 bits.
To encode your index you need at least (you need more, but it's enough to show you need at least this much):
log2( (n-1)*2^(n-1) ) = log2(n-1) + log2( 2^(n-1) ) = log2(n-1) + n - 1
bits, where n is the number of bits you want to encode. Which means:
log2(n-1) + n - 1 > n, always for n>3
so you will always need more bits to encode the index than the data itself.
Sure, you can pick some irrational number which contains the specific substring you want to encode close enough to the decimal point that the index takes fewer bits to encode than the information itself, but then you'll need to store that specific irrational number somewhere, which again will make it take more space than the information itself.
Actually, IIRC there is a genuine compression algorithm working on a vaguely similar principle, called arithmetic coding.
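The bound is easy to sanity-check in a few lines; the construction follows the post above, the code is just my illustration:
[code]
# Build the "every 1-, 2- and 3-bit string concatenated" expansion and see
# what the pointer to each 3-bit substring costs. Overlaps let a few
# strings turn up early, but only one of the eight beats its 3 data bits,
# and the bound above shows it only gets worse as n grows.
from itertools import product

expansion = "".join(
    "".join(bits) for n in (1, 2, 3) for bits in product("01", repeat=n)
)

LENGTH_BITS = 2                        # enough to say "length 3" here
for bits in product("01", repeat=3):
    target = "".join(bits)
    idx = expansion.find(target)
    cost = idx.bit_length() + LENGTH_BITS
    print(target, "at index", idx, "-> pointer costs ~", cost, "bits for 3 bits of data")
[/code]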
It'll become bigger actually.
Theoretically, some combination of file and key COULD produce a well-compressible output, but that simply can't be done reliably.
Why doesn't Sup Forums talk more often about compression and encoding? It's such an interesting topic.
What do you think compression is user?
Well put user
Start the threads yourself
Pigeonhole principle
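Two words that settle the thread. Counted out (trivial arithmetic, but it's the entire argument):
[code]
# 2**n distinct n-bit inputs, but only 2**n - 1 outputs shorter than
# n bits, so no lossless compressor can shrink every input.
n = 8
print(2 ** n, "inputs vs", sum(2 ** k for k in range(n)), "shorter outputs")  # 256 vs 255
[/code]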
I remember something called KGB Archiver that was used for high compression like 10 years ago. There was also NanoZip, FreeArc, and PAQ.
Why take pi and not a purposely equalized irrational number?
/thread
Well, if you have the right key, you can get it down to 1 byte. But then the whole file must be contained in the key
For example, you generate byte sequence after byte sequence, starting with 0, then 01, then 10, etc...
The problem is that your pointer will need to be about as big as the byte sequence it points to. If you see all those sequences as one irrational number, 0.00110... (for the digits I just generated: 0, 01 and 10), the same argument translates to pi, because pi's digits are likewise non-repeating.
So, this essentially means the key, i.e. the pointer to the index, is on average exactly as long as your byte sequence in pi, which disproves your concept.
Your efforts are worthless.
Anyone remember that gba movie thing?
Your post has ~170 characters, which translates to only ~510 decimal digits (three digits per byte, keeping leading zeros). If those digits started at the second digit of an irrational number, the post could be represented by 0x00000001 0x000001FE. Is it likely that the exact sequence of digits will appear within the first 4294967295 digits of pi? Probably not, but it is POSSIBLE.
We would only need an infinitely accurate analog pointer device to build a dedicated compression processor! Also a quantum computer to find the start of the sequence real fast!
Too bad the reality is quantized.
Something that does not use decryption keys.
Well you're wrong, because compression does use a public key, you can google public key encryption if you'd like to learn more.
You are confusing compression and encryption.
Sonny, I've been doing this longer than it sounds like you're alive. Go play some videogames.
Empty words.
Nope. You can only compress something once. Google "entropy" and "information theory"
Try and compress a jpeg and watch the file size
Compression doesn't use a public key lool
Yes, but you have to change the extension to .txt every time before you compress it, to trick winrar into thinking that it is a text file, which can be compressed much better than anything else, because basically you just make the font size smaller and smaller, thus taking up less and less area on the virtual paper.
Well, if it is not, then it's not possible. That's the problem. It kind of works the other way around: you have to browse through the first digits of pi or some other irrational number and check whether the digits decode to anything meaningful. I won't bother calculating the probability of finding anything, but it is surely low.
That kind of reminds me of how I understood the ending of 2001: A Space Odyssey (the book).
The positions of all the stars and planets in the universe are the information that encodes the mind and consciousness of the superior alien. The positions of the objects change and interact with each other, and that's the computation from which the mind emerges. Analogous to neurons and the human mind.
If you encoded a state of a human mind in bytes, and if the universe were infinite (which it's not), you might find an area of space whose objects' locations could be decoded into a file that represents your state of mind. You'd find your mind backed up as a compressed file containing just the pointer and the area boundaries.
Did I go a bit overboard?
>You can only compress something once.
That's definitely false, depends on the algorithm. With LZSS, it is entirely possible to think of a buffer of data that can be compressed twice, each time resulting in smaller size.
It is up to you to demonstrate that for files we use, text, images, video, executable code, the size of resulting pointer into pi/e will be smaller than the original file itself.
>It is up to you to demonstrate that for files we use, text, images, video, executable code, the size of resulting pointer into pi/e will be smaller than the original file itself.
That's not a problem, user. We'll just find the chunk of pi that describes a program that performs a lookup from small natural numbers into offsets in pi that contain images of anime girls.