
Googolplex - How far can you get?


22 replies to this topic

#16 psyko_x

psyko_x

    Neowinian

  • 731 posts
  • Joined: 13-July 02

Posted 26 November 2012 - 21:10

Interesting... I've never seen this type of thing before. If I have time later tonight, maybe I'll write a little Java program to extract it. It should be easy to extract each layer in memory and just discard each byte as it's read; that way, it wouldn't matter how large the file is. You could just let your CPU churn away. Has anyone tried this just for giggles?

Edit: haha, what am I talking about? This would never work. I'd run out of memory during the extraction. Ignore my entire idiotic comment.


#17 OP Detection

Detection

    Detecting stuff...

  • 8,369 posts
  • Joined: 30-October 10
  • Location: UK
  • OS: 7 SP1 x64

Posted 26 November 2012 - 21:14

Then you guys had AV software that's actually worth a cent.

A good AV scanner should always stop after a certain time so it doesn't get tricked into unpacking zip bombs.


Apparently this isn't standard by now? Lord in heaven...

Glassed Silver:mac


I'm using NOD32 and it gives the all clear, and as you saw in the VirusTotal scan, 0% of all scanners found any issues / zip bomb.

Previous AVs I've used include Avast and Avira <<< high FP detection rate too, but nothing.

I just tried expanding the first level (which produced several .zip files) and Windows (SearchFilterHost.exe) started using all my 8GB of RAM. I had to terminate the Search Indexer process and then delete the .zip files via the command line. Pretty scary stuff!


There is only 1 file per extraction; the first is only 21 MB.

Interesting... I've never seen this type of thing before. If I have time later tonight, maybe I'll write a little Java program to extract it. It should be easy to extract each layer in memory and just discard each byte as it's read; that way, it wouldn't matter how large the file is. You could just let your CPU churn away. Has anyone tried this just for giggles?

Edit: haha, what am I talking about? This would never work. I'd run out of memory during the extraction. Ignore my entire idiotic comment.


I remember when I first found this, someone did something similar to what you are thinking, managed to partially extract all of the layers and found the txt file at the end - which is how I know what is in there

#18 Lord Method Man

Lord Method Man

    Banned

  • 3,758 posts
  • Joined: 18-September 12

Posted 26 November 2012 - 21:32

Assuming a text file with 1 byte per character, writing out the numerical value of a googolplex would take (1e100) + 1 bytes, which is ten duotrigintillion bytes, or about 9.095e87 terabytes. There isn't enough digital storage on Earth to hold this value.


To expand on this, the volume of the Earth is roughly equivalent to the volume occupied by 2.87e24 3.5" hard drives.

Let's say you stored the .txt file across 1-petabyte 3.5" hard drives (they don't exist, but pretend they do): you would still need 8.882e84 hard drives, which would occupy a volume roughly 3e60 times that of the Earth itself, if my math is correct.
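The arithmetic above can be checked in a few lines of Java. Binary units (1 TB = 2^40 bytes, 1 PB = 2^50 bytes) are used to match the quoted figures, and the 2.87e24 drives-per-Earth number is taken from the post:

```java
// Sanity check of the storage figures above, using binary units.
public class GoogolplexStorage {
    // Bytes needed to write out a googolplex as text:
    // one digit per byte, a 1 followed by 1e100 zeros.
    static final double BYTES = 1e100;

    public static void main(String[] args) {
        double terabytes = BYTES / Math.pow(2, 40);       // ~9.095e87 TB
        double pbDrives = BYTES / Math.pow(2, 50);        // ~8.882e84 one-petabyte drives
        double drivesPerEarth = 2.87e24;                  // figure quoted in the post
        double earthVolumes = pbDrives / drivesPerEarth;  // ~3.1e60 Earth volumes
        System.out.printf("%.3e TB, %.3e drives, %.3e Earth volumes%n",
                terabytes, pbDrives, earthVolumes);
    }
}
```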

#19 psyko_x

psyko_x

    Neowinian

  • 731 posts
  • Joined: 13-July 02

Posted 26 November 2012 - 21:55

I remember when I first found this, someone did something similar to what you are thinking, managed to partially extract all of the layers and found the txt file at the end - which is how I know what is in there


Yeah actually, it looks like it should be possible to do that. Java has a class called ZipInputStream (http://docs.oracle.c...nputStream.html) that can read a stream of bytes and inflate it. He probably had a method to inflate a few bytes into a buffer and then make a recursive call with that buffer. You'd have to make sure there aren't so many recursive calls that you'd get a stack overflow, but eventually, you'd get to the innermost level.
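A minimal sketch of that approach: each inner .zip entry is fed straight into another ZipInputStream, so the inflated bytes are counted and discarded without ever being held whole in memory or written to disk. The depth limit and buffer size here are arbitrary choices, not from the original program:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// Walks nested zips as one continuous stream; memory use stays constant
// no matter how large the expanded contents are.
public class NestedZipProbe {
    static final int MAX_DEPTH = 1000; // guard against runaway recursion

    static long probe(InputStream in, int depth) throws IOException {
        if (depth > MAX_DEPTH) throw new IOException("nesting too deep");
        ZipInputStream zin = new ZipInputStream(in);
        long total = 0;
        ZipEntry entry;
        while ((entry = zin.getNextEntry()) != null) {
            if (entry.getName().endsWith(".zip")) {
                total += probe(zin, depth + 1); // recurse into the inner archive
            } else {
                byte[] buf = new byte[8192];
                long n;
                while ((n = zin.read(buf)) > 0) total += n; // discard as we go
            }
        }
        return total; // bytes found at the innermost, non-zip levels
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream(args[0])) {
            System.out.println(probe(in, 0) + " bytes at the innermost levels");
        }
    }
}
```

Note that the inner ZipInputStream is deliberately not closed, since closing it would close the parent stream mid-iteration; and since this file reportedly holds one file per layer, the recursion could just as well be rewritten as a loop to sidestep the stack-overflow worry entirely.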

#20 LUTZIFER

LUTZIFER

    Resident Evil

  • 2,669 posts
  • Joined: 09-January 02
  • Location: Vancouver Island, BC CANADA
  • OS: Windows 8.1 Pro
  • Phone: Google Nexus 4

Posted 26 November 2012 - 22:13

How is the actual file created, since it is so big, and how can it actually be compressed so small?
It does sound interesting. I want to try it, but I don't have much drive space left, so I wouldn't get too far.
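On the first question: Deflate encodes long runs of identical bytes almost for free, which is what makes the innermost file so compressible. A small illustration (the 10 MB size and file name are arbitrary, not from the actual file):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// A run of zero bytes compresses by roughly three orders of magnitude,
// since Deflate's ratio tops out at about 1032:1 per layer.
public class ZeroZip {
    static byte[] zipZeros(int size) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ZipOutputStream zout = new ZipOutputStream(out)) {
            zout.putNextEntry(new ZipEntry("zeros.txt"));
            zout.write(new byte[size]); // 'size' zero bytes
            zout.closeEntry();
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        int size = 10_000_000; // 10 MB of zeros
        byte[] zipped = zipZeros(size);
        System.out.println(size + " bytes -> " + zipped.length + " bytes zipped");
    }
}
```

Re-compressing already-compressed data gains little, though, which is why classic zip bombs like 42.zip put multiple copies of the inner archive at each layer to keep multiplying the expansion.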

#21 n_K

n_K

    Neowinian Senior

  • 5,376 posts
  • Joined: 19-March 06
  • Location: here.
  • OS: FreeDOS
  • Phone: Nokia 3315

Posted 26 November 2012 - 23:52

From what I've read and whatnot, there is no inner file; it uses hacks and tricks to make it look like there are inner files, but there aren't any.

#22 MrA

MrA

    b47d2b5288e3c77

  • 2,694 posts
  • Joined: 09-November 03
  • Location: Oz.

Posted 30 November 2012 - 08:06

You might be interested in reading Russ Cox's page on infinite zip files http://research.swtch.com/zip. Basically, he treats the LZ algorithm as a virtual machine, and writes "machine code" to create zip files that do interesting things.

#23 shall_i_cut

shall_i_cut

    Touch me if you dare

  • 455 posts
  • Joined: 27-August 12
  • Location: Philippines
  • OS: Windows 8.1, Windows 7, Android 4.2.1
  • Phone: O+ 8.91

Posted 30 November 2012 - 08:21

Gotta send that file to some random person.... :p