Data Compression Comparison

This is a follow-up on my last post about data compression. After encoding my numerical data in a compact CSV format, I apply data compression before storing it on disk. I have done a quick study of the two algorithms available in the standard Python library, gzip and bzip2. The results are shown below. The original message is 537,776 bytes.

Gzip Compression Results

Compression Level   Compressed Size   Compress Time   Decompress Time
9                   183,019 bytes     179 ms          5.51 ms
6                   184,532 bytes     125 ms          5.48 ms
3                   203,105 bytes     38.2 ms         5.54 ms

Bzip2 Compression Results

Compression Level   Compressed Size   Compress Time   Decompress Time
9                   152,283 bytes     84.3 ms         29 ms
6                   152,283 bytes     84.9 ms         29 ms
3                   157,065 bytes     80.6 ms         26.9 ms
1                   166,949 bytes     79.8 ms         26.7 ms

Surprisingly, bzip2 compresses faster than gzip at level 9. Unfortunately, compression speed is the least important factor for me; compression ratio and decompression speed matter far more. Compression is done only once, but fetching and decompressing the data will happen many times. It is hard to choose between the better compression ratio of bzip2 and the faster decompression of gzip. For now I will stick with gzip.
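
For reference, a benchmark along these lines can be written with the gzip and bz2 modules from the standard library (Python 3 shown). This is only a sketch: the payload below is a placeholder, not the actual 537,776-byte message behind the numbers above.

    import bz2
    import gzip
    import time

    # Stand-in payload; substitute the real ~538 KB CSV message here.
    message = b"108,-2,-10,37,4096,-118\n" * 20000

    def benchmark(name, compress, decompress, levels):
        """Time one compress/decompress round trip per level and report sizes."""
        for level in levels:
            start = time.perf_counter()
            packed = compress(message, level)
            compress_ms = (time.perf_counter() - start) * 1000

            start = time.perf_counter()
            decompress(packed)
            decompress_ms = (time.perf_counter() - start) * 1000

            print(f"{name} level {level}: {len(packed):,} bytes, "
                  f"compress {compress_ms:.1f} ms, decompress {decompress_ms:.2f} ms")

    benchmark("gzip", lambda data, lvl: gzip.compress(data, compresslevel=lvl),
              gzip.decompress, levels=(9, 6, 3))
    benchmark("bzip2", lambda data, lvl: bz2.compress(data, compresslevel=lvl),
              bz2.decompress, levels=(9, 6, 3, 1))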

2010.04.21

The Power of Gzip

I know Information Theory says that good data compression will shrink a message down to its entropy. So for application developers, it is not productive to design our own space-saving encoding scheme if we plan to apply data compression at the end anyway. Because the original message and the encoded message contain the same amount of information, the compressed data will end up at approximately the same size.

I didn't realize how true this is until I actually tried it. I am working with a CSV file of mostly integer data, and I am very keen on reducing its size to save storage and network bandwidth. So I tried several schemes. They all failed to make a significant saving once gzipped.

The first attempt targets the minus sign. I notice there are a lot of negative numbers. The '-' sign occupies one byte, but it carries only one bit of information. What if I apply a simple encoding, e.g. using 'A' to stand for '-1', 'B' to stand for '-2', and so on? Trimming the negative sign with this encoding cuts down the storage by 6%.

  e.g.
    "108,-2,-10"  ->  "108,B,A0"

What about the result after gzipping? Gzip shrinks the original data down to 34% of its size, while the encoded message compresses to 36% of its already smaller size. The difference between the two compressed outputs? A negligible 0.1%.

For the next attempt: it seems wasteful to store an integer as a string when each character expresses only one of ten decimal digits. What if we use a hexadecimal representation? The conversion is trivial and it should cut down the string length a bit; if this is fruitful we might even try a higher base. The hexadecimal scheme reduces storage by 7%, but once gzipped, the saving again evaporates.
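
The conversion is simple enough to show; this sketch assumes each field is a plain signed decimal integer.

    def to_hex_field(field):
        """Render a decimal integer field in hexadecimal, keeping the sign."""
        value = int(field)
        return format(value, "x") if value >= 0 else "-" + format(-value, "x")

    def to_hex_row(row):
        return ",".join(to_hex_field(f) for f in row.split(","))

    print(to_hex_row("108,-2,-10"))  # -> "6c,-2,-a"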

A far more lucrative approach is to abandon the text format altogether and use a binary encoding for the numbers. Since the magnitudes of the numbers differ a lot, I use a kind of variable-length integer encoding that is economical for both small and large numbers. The binary encoding delivers the most significant saving, cutting the storage by 44%. The text data and the binary-encoded data look very different at first, and the binary version is nearly half the size of the original. But once gzipped, the binary data is only 4% smaller. Despite the big difference in representation, the compressed size still tracks the entropy. The 4% gain is hardly enough to justify using a binary format over text.
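
To give an idea of the approach, here is a sketch of one common variable-length scheme, a zigzag map followed by a base-128 varint. It is an illustration only, not the exact encoding used for the 44% figure above.

    def zigzag(n):
        """Map signed integers to unsigned so small magnitudes stay small."""
        return n * 2 if n >= 0 else -n * 2 - 1

    def encode_varint(value):
        """Base-128 varint: 7 data bits per byte, high bit set on all but the last."""
        out = bytearray()
        while True:
            byte = value & 0x7F
            value >>= 7
            if value:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                return bytes(out)

    def encode_numbers(numbers):
        return b"".join(encode_varint(zigzag(n)) for n in numbers)

    print(encode_numbers([108, -2, -10]).hex())  # -> "d8010313"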

The lesson learned? Don't be too concerned about the inefficiency of storing numbers in a text format like CSV. Data compression will take out the inefficiency in one easy step.

Finally, I would like to mention some encodings that do work for me. The data is initially available in XML format; dropping the XML baggage and storing it as CSV saves a lot. Secondly, storing only the deltas between consecutive numbers works very well in my application. Furthermore, slightly reducing the precision of the numbers, a sort of lossy compression, also delivers a meaningful saving. More importantly, these savings are still present after compression.
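
As an illustration of the last two points, here is a sketch of delta encoding combined with a mild precision cut. The single decimal place of precision is an arbitrary choice for the example, not the setting used here.

    def delta_encode(values, precision=1):
        """Round to the given number of decimal places, then store differences."""
        rounded = [round(v, precision) for v in values]
        return [rounded[0]] + [
            round(b - a, precision) for a, b in zip(rounded, rounded[1:])
        ]

    print(delta_encode([1024.37, 1024.91, 1025.02, 1024.66]))
    # -> [1024.4, 0.5, 0.1, -0.3]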

2010.04.19