Google Releases New Compression Algorithm


The Zopfli Compression Algorithm is a new open-source, general-purpose data compression library that got its name from a Swiss bread recipe. It is an implementation of the Deflate compression algorithm that creates a smaller output size compared to previous techniques. The smaller compressed size allows for better space utilization, faster data transmission, and lower web page load latencies. Furthermore, the smaller compressed size has additional benefits in mobile use, such as lower data transfer fees and reduced battery use. The higher data density is achieved by using more exhaustive compression techniques, which make the compression a lot slower, but do not affect the decompression speed. The exhaustive method is based on iterating entropy modeling and a shortest path search algorithm to find a low bit cost path through the graph of all possible deflate representations.
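For the curious, the library exposes a small C API. Here is a minimal sketch of compressing a buffer, assuming the zopfli.h header from the source tree is on the include path (the include path and the tiny sample input are my own choices, not from the announcement):

```c
#include <stdio.h>
#include <stdlib.h>
#include "zopfli/zopfli.h"   /* header path as laid out in the Zopfli source tree */

int main(void) {
    const unsigned char in[] = "hello hello hello hello";
    unsigned char* out = NULL;   /* Zopfli allocates the output buffer itself */
    size_t outsize = 0;

    ZopfliOptions options;
    ZopfliInitOptions(&options);     /* fill in the default settings */
    options.numiterations = 15;      /* more iterations: denser output, more CPU time */

    /* ZOPFLI_FORMAT_ZLIB and ZOPFLI_FORMAT_DEFLATE are the other containers. */
    ZopfliCompress(&options, ZOPFLI_FORMAT_GZIP,
                   in, sizeof(in) - 1, &out, &outsize);

    printf("compressed %zu bytes down to %zu\n", sizeof(in) - 1, outsize);
    free(out);
    return 0;
}
```

The numiterations option is where the "more exhaustive" trade-off lives: raising it buys a slightly smaller output at the cost of more CPU time.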

The output generated by Zopfli is typically 3–8% smaller (PDF warning) compared to zlib at maximum compression, and we believe that Zopfli represents the state of the art in Deflate-compatible compression. Zopfli is written in C for portability. It is a compression-only library; existing software can decompress the data. Zopfli is bit-stream compatible with compression used in gzip, Zip, PNG, HTTP requests, and others.
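To see the compatibility claim in practice, here is a hedged round-trip sketch: Zopfli writes a zlib-container stream, and stock zlib's uncompress() inflates it. It assumes both zopfli.h and zlib.h are available (link against the Zopfli sources plus -lz); Zopfli itself ships no decompressor:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>            /* stock zlib provides the decompressor */
#include "zopfli/zopfli.h"

int main(void) {
    const unsigned char in[] = "the quick brown fox jumps over the lazy dog";
    size_t insize = sizeof(in) - 1;

    /* Compress with Zopfli into the zlib container format. */
    ZopfliOptions options;
    ZopfliInitOptions(&options);
    unsigned char* packed = NULL;
    size_t packedsize = 0;
    ZopfliCompress(&options, ZOPFLI_FORMAT_ZLIB, in, insize, &packed, &packedsize);

    /* Inflate with plain zlib; Zopfli itself has no decompression code. */
    unsigned char unpacked[sizeof(in)];
    uLongf unpackedsize = (uLongf)insize;
    int rc = uncompress(unpacked, &unpackedsize, packed, (uLong)packedsize);

    assert(rc == Z_OK);
    assert(unpackedsize == insize && memcmp(unpacked, in, insize) == 0);

    free(packed);
    return 0;
}
```

The same holds for the gzip container: a .gz file produced by Zopfli can be unpacked by any existing gunzip.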

Due to the amount of CPU time required (2 to 3 orders of magnitude more than zlib at maximum quality), Zopfli is best suited for applications where data is compressed once and sent over a network many times, for example, static content for the web. By open sourcing Zopfli, thus allowing webmasters to better optimize the size of frequently accessed static content, we hope to make the Internet a bit faster for all of us.

Source: http://googledevelopers.blogspot.com/2013/02/compress-data-more-densely-with-zopfli.html

Interesting. I wonder how it does against LZMA and LZMA2, considering both speed and compression ratio.


I wonder how it does against LZMA and LZMA2, considering both speed and compression ratio.
This is about DEFLATE-compatible compression, so LZMA/LZMA2 don't enter into the picture.

But yes, LZMA/LZMA2 traditionally have a superior compression ratio. Too bad it's still not used as an HTTP transfer encoding.


I read the PDF. It's decent, but it has a very specific use case: one-time compression and many decompressions of static data (they do acknowledge this). Compressing files with this algorithm is painfully slow compared to gzip (we're talking 5 seconds for gzip versus 7 minutes for the same file with Zopfli). It'll be very good for conserving bandwidth on, say, file distribution sites, though, where CPU time is inconsequential compared to bandwidth requirements.

