Googolplex - How far can you get?


Just found this little file on one of my storage drives, and I remember having a bit of fun with it years ago.

The file is a tiny 78kB. Extract it using 7zip and you'll find the extracted file is now 21MB; extract that file and you'll see that it is now 2.8GB.

How many times can you extract Googolplex before you or your machine gives up? :D

Apparently there is a file hidden at the very end of all of the extractions, but I have never managed to get that far. From what I remember, it is a text file containing the entire numerical value of a Googolplex.

Download (Extract each file with 7zip)

http://localhostr.co...G/googolplex.gz


Read the wiki; it will show why he did it.

Most antivirus software should not allow you to do this, as it "should" pick it up as a zip bomb because of the way it exponentially increases in size.


Read the wiki; it will show why he did it.

Most antivirus software should not allow you to do this, as it "should" pick it up as a zip bomb because of the way it exponentially increases in size.

I did read it. It's not a zip bomb.

You don't suddenly get 100 petabytes when you try to decompress it; each time you extract the file it gets bigger, but it poses no harm to the machine at all and doesn't try to crash your system whatsoever.


I did read it. It's not a zip bomb.

You don't suddenly get 100 petabytes when you try to decompress it; each time you extract the file it gets bigger, but it poses no harm to the machine at all and doesn't try to crash your system whatsoever.

Yes, if you manually extract it the file doesn't expand until you initiate the next nested level, but the same wouldn't be true for an AV program. An AV would initiate the next step automatically in order to ensure the contents of the file are safe. As a result, it would keep going down a level until it exploded.


Assuming a text file with 1 byte per character, the numerical value of a Googolplex would be (1e100) + 1 bytes, which is ten duotrigintillion bytes, or 9.095e87 tebibytes (binary terabytes). There isn't enough digital storage on Earth to hold this value.
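As a quick sanity check of that figure: a googolplex is 10^(10^100), so writing it out in decimal is a "1" followed by 10^100 zeros. The 9.095e87 number falls out if a terabyte is counted as 2^40 bytes (a tebibyte):

```python
# Digits of a googolplex: a "1" followed by 10**100 zeros,
# so 10**100 + 1 characters in total.
digits = 10**100 + 1          # bytes, at 1 byte per character

# Convert to binary terabytes (1 TiB = 2**40 bytes).
tib = digits / 2**40
print(f"{tib:.3e} TiB")       # -> 9.095e+87 TiB
```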


Yes, if you manually extract it the file doesn't expand until you initiate the next nested level, but the same wouldn't be true for an AV program. An AV would initiate the next step automatically in order to ensure the contents of the file are safe. As a result, it would keep going down a level until it exploded.

There are no file extensions on any of the nested levels, probably to prevent that from happening.


You don't understand what's being said.

The AV will auto-scan the next level, see that it's also a zip, and scan the next level; see that it's a zip and scan the next level; see that it's a zip and scan the next level...

It will continue to do this, and is thus scanning a bigger and bigger file each time.


You don't understand what's being said.

The AV will auto-scan the next level, see that it's also a zip, and scan the next level; see that it's a zip and scan the next level; see that it's a zip and scan the next level...

It will continue to do this, and is thus scanning a bigger and bigger file each time.

Funny, every time I extracted this, on many machines, with many other people also having a go, not once with different AVs did it cause any problems.

If you don't like it, don't download it. I wouldn't post any file that would cause problems.


Funny, every time I extracted this, on many machines, with many other people also having a go, not once with different AVs did it cause any problems.

If you don't like it, don't download it. I wouldn't post any file that would cause problems.

Then you guys had AV software that's actually worth a cent.

A good AV scanner should always stop after a certain time so it doesn't get tricked by zip bombs.

Apparently this isn't standard by now? Lord in heaven...

Glassed Silver:mac
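That "stop after a certain time" behaviour is easy to sketch. Below is a minimal, hypothetical depth-and-size-budgeted scanner in Python (gzip-only; the function name and limits are made up for illustration), showing how a scanner can refuse to be led arbitrarily deep into a nested archive:

```python
import gzip
import io

MAX_DEPTH = 16          # give up rather than recurse forever
MAX_BYTES = 50 * 2**20  # refuse to inflate more than 50 MiB per layer

def safe_peek(data: bytes, depth: int = 0) -> int:
    """Descend into nested gzip layers, but bail out at a fixed
    depth/size budget the way a cautious scanner would.
    Returns how many layers were actually opened."""
    if depth >= MAX_DEPTH:
        return depth
    if data[:2] != b"\x1f\x8b":          # not gzip: nothing to descend into
        return depth
    out = io.BytesIO()
    with gzip.GzipFile(fileobj=io.BytesIO(data)) as f:
        while True:
            chunk = f.read(2**20)
            if not chunk:
                break
            out.write(chunk)
            if out.tell() > MAX_BYTES:   # budget blown: stop here
                return depth + 1
    return safe_peek(out.getvalue(), depth + 1)
```

On a doubly-gzipped payload this reports two layers; on a deeply nested bomb it simply stops at its budget instead of exploding.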


I just tried expanding the first level (which produced several .zip files) and Windows (SearchFilterHost.exe) started using all my 8GB of RAM.

I had to terminate the Search Indexer process and then delete the .zip files via the command line.

Pretty scary stuff!


Interesting... I've never seen this type of thing before. If I have time later tonight, maybe I'll write a little Java program to extract it. It should be easy to extract each layer in memory and just discard each byte as it's read; that way, it wouldn't matter how large the file is. You could just let your CPU churn away. Has anyone tried this just for giggles?

Edit: haha, what am I talking about? This would never work. I'd run out of memory during the extraction. Ignore my entire idiotic comment.


Then you guys had AV software that's actually worth a cent.

A good AV scanner should always stop after a certain time so it doesn't get tricked by zip bombs.

Apparently this isn't standard by now? Lord in heaven...

Glassed Silver:mac

I'm using NOD32 and it gives the all clear, and as you saw in VirusTotal's scan, 0% of all scanners found any issues / zip bomb.

Previous AVs I've used include Avast and Avira <<< high false-positive detection rate too, but nothing.

I just tried expanding the first level (which produced several .zip files) and Windows (SearchFilterHost.exe) started using all my 8GB of RAM. I had to terminate the Search Indexer process and then delete the .zip files via the command line. Pretty scary stuff!

There is only 1 file per extraction, the first = 21MB only

Interesting... I've never seen this type of thing before. If I have time later tonight, maybe I'll write a little Java program to extract it. It should be easy to extract each layer in memory and just discard each byte as it's read; that way, it wouldn't matter how large the file is. You could just let your CPU churn away. Has anyone tried this just for giggles?

Edit: haha, what am I talking about? This would never work. I'd run out of memory during the extraction. Ignore my entire idiotic comment.

I remember when I first found this, someone did something similar to what you are thinking: they managed to partially extract all of the layers and found the txt file at the end, which is how I know what is in there.


Assuming a text file with 1 byte per character, the numerical value of a Googolplex would be (1e100) + 1 bytes, which is ten duotrigintillion bytes, or 9.095e87 tebibytes (binary terabytes). There isn't enough digital storage on Earth to hold this value.

To expand on this, the volume of the Earth is roughly equivalent to the volume occupied by 2.87e24 3.5" hard drives.

Let's say you stored the .txt file across 1-petabyte 3.5" hard drives (they don't exist, but pretend they do); you would still need 8.882e84 hard drives, which would take up a volume of space approximately 3e60 times that of the Earth itself, if my math is correct.
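Redoing that arithmetic in Python (standard 3.5" drive dimensions of roughly 101.6 × 146 × 25.4 mm, and a hypothetical 1 PiB per drive) gives about 3e60 Earth volumes; either way, the conclusion stands:

```python
earth_volume = 1.083e21                      # volume of the Earth, m^3
drive_volume = 0.1016 * 0.146 * 0.0254       # 3.5" HDD in m^3, ~3.77e-4

drives_per_earth = earth_volume / drive_volume    # ~2.87e24 drives
bytes_needed = 10**100 + 1                        # the googolplex text file
drives_needed = bytes_needed / 2**50              # ~8.88e84 at 1 PiB each
earth_volumes = drives_needed / drives_per_earth  # ~3.1e60
print(f"{drives_needed:.3e} drives, {earth_volumes:.1e} Earth volumes")
```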


I remember when I first found this, someone did something similar to what you are thinking: they managed to partially extract all of the layers and found the txt file at the end, which is how I know what is in there.

Yeah, actually, it looks like it should be possible to do that. Java has a class called ZipInputStream (http://docs.oracle.com/javase/1.4.2/docs/api/java/util/zip/ZipInputStream.html) that can read a stream of bytes and inflate it. He probably had a method to inflate a few bytes into a buffer and then make a recursive call with that buffer. You'd have to make sure there aren't so many recursive calls that you'd get a stack overflow, but eventually you'd get to the innermost level.


How is the actual file created since it is so big, and how can it actually be compressed so small?

Does sound interesting. I want to try, but I don't have very much drive space left, so I wouldn't get too far.
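On the "how can it be compressed so small" question: DEFLATE (the algorithm behind gzip and zip) is extremely good at repetitive data, and a file like this is presumably generated as highly repetitive content at every level. A quick demonstration of the kind of ratio involved:

```python
import gzip

# A wildly repetitive file compresses at roughly 1000:1 with gzip;
# a nested archive exploits this kind of redundancy at every level.
layer = b"0" * 10_000_000              # 10 MB of identical bytes
packed = gzip.compress(layer)
print(len(layer), "->", len(packed))   # shrinks to a few kB
```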


This topic is now closed to further replies.