CMIX

cmix is a lossless data compression program aimed at optimizing compression ratio at the cost of high CPU/memory usage. It achieves state-of-the-art results on several compression benchmarks. cmix is free software distributed under the GNU General Public License.

cmix runs on Linux, Windows, and Mac OS X. At least 32GB of RAM is recommended to run cmix. Feel free to contact me at byron@byronknoll.com if you have any questions.

GitHub repository: https://github.com/byronknoll/cmix

Downloads

Source Code     Release Date        Windows Executable
cmix-v18.zip    August 1, 2019      cmix-v18-windows.zip
cmix-v17.zip    March 24, 2019      cmix-v17-windows.zip
cmix-v16.zip    October 3, 2018     cmix-v16-windows.zip
cmix-v15.zip    May 5, 2018         cmix-v15-windows.zip
cmix-v14.zip    October 20, 2017    cmix-v14-windows.zip
cmix-v13.zip    April 24, 2017      cmix-v13-windows.zip
cmix-v12.zip    November 7, 2016    cmix-v12-windows.zip
cmix-v11.zip    July 3, 2016        cmix-v11-windows.zip
cmix-v10.zip    May 30, 2016        cmix-v10-windows.zip
cmix-v9.zip     April 8, 2016       cmix-v9-windows.zip
cmix-v8.zip     November 10, 2015
cmix-v7.zip     February 4, 2015
cmix-v6.zip     September 2, 2014
cmix-v5.zip     August 13, 2014
cmix-v4.zip     July 23, 2014
cmix-v3.zip     June 27, 2014
cmix-v2.zip     May 29, 2014
cmix-v1.zip     April 13, 2014

Benchmarks

Corpus       Original size (bytes)   Compressed size (bytes)   Compression time (seconds)   Memory usage (KiB)
calgary.tar  3152896                 538838                    2293.72                      22655312
silesia      211938580               28437634
enwik6       1000000                 176377                    677.24                       20399432
enwik8       100000000               14838332                  57035.44                     23480864
enwik9       1000000000              115714367                 601569.89                    25738196

Compression and decompression time are symmetric. The compressed size can vary slightly depending on the compiler settings used to build the executable.

External Benchmarks

Silesia Open Source Compression Benchmark

File      Original size (bytes)   Compressed size (bytes)
dickens   10192446                1813095
mozilla   51220480                6717412
mr        9970564                 1829883
nci       33553445                792994
ooffice   6152192                 1226244
osdb      10085684                1962336
reymont   6627202                 712062
samba     21606400                1614935
sao       7251944                 3727061
webster   41458703                4297002
xml       5345280                 236101
x-ray     8474240                 3508509

Calgary Corpus

File     Original size (bytes)   Compressed size (bytes)
BIB      111261                  17339
BOOK1    768771                  174895
BOOK2    610856                  106931
GEO      102400                  42833
NEWS     377109                  77582
OBJ1     21504                   7065
OBJ2     246814                  40410
PAPER1   53161                   10908
PAPER2   82199                   17273
PIC      513216                  21904
PROGC    39611                   8377
PROGL    71646                   8991
PROGP    49379                   6269
TRANS    93695                   10097

Canterbury Corpus

File          Original size (bytes)   Compressed size (bytes)
alice29.txt   152089                  31292
asyoulik.txt  125179                  29636
cp.html       24603                   4793
fields.c      11150                   1959
grammar.lsp   3721                    785
kennedy.xls   1029744                 8039
lcet10.txt    426754                  73886
plrabn12.txt  481861                  112824
ptt5          513216                  21904
sum           38240                   6880
xargs.1       4227                    1131

Description

I started working on cmix in December 2013. Most of the ideas I implemented came from the book Data Compression Explained by Matt Mahoney.

cmix uses three main components:

1. Preprocessing
2. Model prediction
3. Context mixing

The preprocessing stage transforms the input data into a form which is more easily compressible. The transformed data is then compressed in a single pass, one bit at a time: cmix generates a probabilistic prediction for each bit, and the probability is encoded using arithmetic coding.
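As an illustration of the coding step, here is a minimal sketch of a bitwise binary arithmetic encoder in the style common to PAQ-like compressors. It is a generic example of the technique, not cmix's actual coder, and the 16-bit probability scale is an assumption made for the sketch:

```cpp
#include <cstdint>
#include <vector>

// Sketch of a bitwise binary arithmetic encoder (PAQ-style).
// "p" is the model's probability that the next bit is 1, in [0, 65536).
class ArithmeticEncoder {
 public:
  void Encode(int bit, uint32_t p) {
    // Split the current interval [low_, high_] in proportion to p.
    uint32_t mid = low_ + static_cast<uint32_t>(
        (static_cast<uint64_t>(high_ - low_) * p) >> 16);
    if (bit) high_ = mid; else low_ = mid + 1;
    // Renormalize: emit leading bytes once low_ and high_ agree on them.
    while ((low_ ^ high_) < (1u << 24)) {
      output_.push_back(static_cast<uint8_t>(low_ >> 24));
      low_ <<= 8;
      high_ = (high_ << 8) | 0xFF;
    }
  }
  void Flush() {
    // Write enough trailing bytes to pin down the final interval.
    for (int i = 0; i < 4; ++i) {
      output_.push_back(static_cast<uint8_t>(low_ >> 24));
      low_ <<= 8;
    }
  }
  const std::vector<uint8_t>& output() const { return output_; }

 private:
  uint32_t low_ = 0;
  uint32_t high_ = 0xFFFFFFFF;
  std::vector<uint8_t> output_;
};
```

A decoder mirrors these steps, narrowing the same interval using the identical sequence of predictions, which is why compression and decompression cost roughly the same.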

cmix uses an ensemble of independent models to predict the probability of each bit in the input stream. The model predictions are combined into a single probability using a context mixing algorithm. The output of the context mixer is refined using an algorithm called secondary symbol estimation (SSE).
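SSE refines a probability by quantizing it within a small context and learning a corrected probability for each bucket. Below is a minimal sketch of such a stage (also known as an adaptive probability map, or APM); the bucket count, number of contexts, and update rate are illustrative assumptions, not cmix's parameters:

```cpp
#include <cmath>
#include <vector>

// Sketch of secondary symbol estimation (SSE) / adaptive probability map.
class SSE {
 public:
  SSE() : table_(kContexts * kBuckets, 0.5f) {}

  // Refine a probability p (assumed in (0,1)) given a small context id.
  float Refine(float p, int context) {
    // Quantize p on a stretched (logit) scale for finer resolution near 0/1.
    float s = std::log(p / (1.0f - p));
    float pos = (s + 8.0f) / 16.0f * (kBuckets - 1);
    if (pos < 0.0f) pos = 0.0f;
    if (pos > kBuckets - 1.001f) pos = kBuckets - 1.001f;
    lo_ = context * kBuckets + static_cast<int>(pos);
    frac_ = pos - static_cast<int>(pos);
    // Interpolate between the two nearest bucket estimates.
    return table_[lo_] * (1.0f - frac_) + table_[lo_ + 1] * frac_;
  }

  // Once the true bit is known, move the two buckets toward it.
  void Update(int bit) {
    const float rate = 0.02f;  // illustrative learning rate
    table_[lo_] += (bit - table_[lo_]) * rate * (1.0f - frac_);
    table_[lo_ + 1] += (bit - table_[lo_ + 1]) * rate * frac_;
  }

 private:
  static constexpr int kBuckets = 33;
  static constexpr int kContexts = 1024;
  std::vector<float> table_;
  int lo_ = 0;
  float frac_ = 0.0f;
};
```

In use, Refine is called on the mixer's output before coding, and Update is called afterwards with the actual bit.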

Architecture

Preprocessing

cmix applies transformations to three types of data:

1. Binary executables
2. Natural language text
3. Images

The preprocessor uses separate components for detecting the type of data and for performing the transformation.

For images and binary executables, I used code for detection and transformation from the open source paq8pxd program.

I wrote my own code for detecting and transforming natural language text. It uses an English dictionary and a word-replacing transform (sketched below). The dictionary comes from the phda Hutter Prize entry and is 415,377 bytes.
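To illustrate the general idea, here is a minimal sketch of an invertible word-replacing transform. The escape byte and two-byte code layout are assumptions made for this example, not the encoding cmix actually uses:

```cpp
#include <cctype>
#include <string>
#include <unordered_map>

// kEscape marks a dictionary code; assumed not to occur in the input text.
const char kEscape = 0x06;

std::string EncodeWords(const std::string& text,
                        const std::unordered_map<std::string, int>& dict) {
  std::string out, word;
  auto flush_word = [&]() {
    auto it = dict.find(word);
    if (it != dict.end() && it->second < 128 * 128) {
      // Replace the word with an escape byte plus a two-byte code. Code
      // bytes are kept in 0x80..0xFF so they cannot collide with kEscape.
      out += kEscape;
      out += static_cast<char>(0x80 + (it->second >> 7));
      out += static_cast<char>(0x80 + (it->second & 0x7F));
    } else {
      out += word;  // unknown word: copied through unchanged
    }
    word.clear();
  };
  for (char c : text) {
    if (std::isalpha(static_cast<unsigned char>(c))) {
      word += c;
    } else {
      flush_word();
      out += c;  // non-letter bytes pass through
    }
  }
  flush_word();
  return out;
}
```

A matching decoder replaces each escape sequence with the corresponding dictionary word, making the transform lossless. The payoff is that frequent words become short, regular codes that downstream models predict more easily.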

As seen on the Silesia benchmark, additional preprocessing using the precomp program can improve cmix compression on some files.

Model Prediction

cmix v18 uses a total of 2,122 independent models of various types, some specialized for certain kinds of data such as text, executables, or images. For each bit of input data, each model outputs a single floating point number: the probability that the next bit will be a 1. The majority of the models come from other open source compression programs: paq8l, paq8pxd, and paq8hp12any.
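The ensemble interface this implies can be sketched as follows; the class and method names are hypothetical, not cmix's actual API:

```cpp
#include <memory>
#include <vector>

// Every model emits one probability per bit; all predictions go to the mixer.
class Model {
 public:
  virtual ~Model() = default;
  // Probability that the next bit is 1, given this model's internal state.
  virtual float Predict() = 0;
  // Advance internal state after the true bit is revealed.
  virtual void Perceive(int bit) = 0;
};

std::vector<float> GatherPredictions(
    const std::vector<std::unique_ptr<Model>>& models) {
  std::vector<float> p;
  p.reserve(models.size());
  for (const auto& m : models) p.push_back(m->Predict());
  return p;  // in cmix v18, this would hold 2,122 entries
}
```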

LSTM Mixer

The byte-level mixer uses long short-term memory (LSTM) trained using backpropagation through time. It uses Adam optimization with layer normalization and learning rate decay. The LSTM forget and input gates are coupled. I created two other projects which compress data using only LSTM: lstm-compress and tensorflow-compress. Their results are posted on the Large Text Compression Benchmark.
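For illustration, here is one step of an LSTM cell with coupled forget and input gates. The structure is deliberately simplified (a single cell, no output gate, no layer normalization or Adam updates), so it is a sketch of the gating idea rather than the mixer's real implementation:

```cpp
#include <cmath>
#include <vector>

// One step of a coupled-gate LSTM cell: the input gate is tied to
// 1 - forget gate, so one set of gate weights is saved.
struct LstmCell {
  std::vector<float> wf, wu;  // weights for forget gate and candidate update
  float bf = 0, bu = 0;       // biases
  float c = 0, h = 0;         // cell state and output

  static float Sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

  // x concatenates the current input with the previous output h;
  // wf and wu must have the same length as x.
  float Step(const std::vector<float>& x) {
    float f_in = bf, u_in = bu;
    for (size_t i = 0; i < x.size(); ++i) {
      f_in += wf[i] * x[i];
      u_in += wu[i] * x[i];
    }
    float f = Sigmoid(f_in);     // forget gate
    float u = std::tanh(u_in);   // candidate value
    c = f * c + (1.0f - f) * u;  // coupled gates: input gate = 1 - f
    h = std::tanh(c);            // output (no separate output gate here)
    return h;
  }
};
```

Coupling the gates halves the number of gate parameters and keeps the cell state a convex combination of its old value and the new candidate.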

Context Mixing

cmix uses a neural network architecture similar to PAQ8's. This architecture is also known as a gated linear network. cmix uses three layers of weights.

Loss function: cross entropy + L2 regularizer
Activation function: logistic
Optimization procedure: stochastic gradient descent

Every neuron in the network directly tries to minimize cross entropy, so there is no backpropagation of gradients between layers.

The inputs to each neuron (values between 0 and 1) are transformed using the logit function.

Only a small subset of neurons is activated for each prediction. The activations are based on manually defined contexts (i.e. functions of the recent input history). One neuron is activated for each context. The context-dependent activations improve prediction and reduce computational complexity.

Instead of using a global learning rate, each context set has its own learning rate parameter. There is also learning rate decay.
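Putting these pieces together, here is a minimal sketch of one mixing step: model probabilities are stretched with the logit, mixed by the weight vector of the context-selected neuron, squashed with the logistic function, and updated by gradient descent on cross entropy with a per-context decaying learning rate. All constants are illustrative, and the L2 regularization term is omitted for brevity:

```cpp
#include <cmath>
#include <vector>

struct ContextMixer {
  std::vector<std::vector<float>> weights;  // one weight vector per context
  std::vector<long long> n;                 // per-context update counts
  float base_rate = 0.002f;

  ContextMixer(int num_contexts, int num_models)
      : weights(num_contexts, std::vector<float>(num_models, 0.0f)),
        n(num_contexts, 0) {}

  static float Logit(float p) { return std::log(p / (1.0f - p)); }
  static float Logistic(float x) { return 1.0f / (1.0f + std::exp(-x)); }

  // Mix the model predictions using the neuron selected by "context".
  float Mix(const std::vector<float>& probs, int context,
            std::vector<float>* stretched) {
    stretched->clear();
    float dot = 0.0f;
    for (float p : probs) stretched->push_back(Logit(p));  // logit inputs
    for (size_t i = 0; i < stretched->size(); ++i) {
      dot += weights[context][i] * (*stretched)[i];
    }
    return Logistic(dot);
  }

  // Each neuron minimizes cross entropy directly (no backpropagation
  // between layers): d(loss)/d(w_i) = (p - bit) * stretched_i.
  void Update(int bit, float p, const std::vector<float>& stretched,
              int context) {
    // Per-context learning rate with decay (schedule is illustrative).
    float rate = base_rate / std::sqrt(1.0f + n[context]++ / 100.0f);
    for (size_t i = 0; i < stretched.size(); ++i) {
      weights[context][i] -= rate * (p - bit) * stretched[i];
    }
  }
};
```

A full prediction chains this mixing across the three weight layers, with every neuron updated independently against the same cross-entropy loss.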

Comparison to PAQ8

cmix shares many similarities with the PAQ8 family of compression programs. There are many different branches of PAQ8. Here are some of the major differences between cmix and other PAQ8 variants:

1. Performance: cmix typically achieves a better compression ratio, but is slower and uses more memory.

2. cmix uses more predictive models than most PAQ8 variants.

3. cmix uses a larger context mixing network. Several implementation details also differ (e.g. floating point arithmetic, learning rate decay, number of layers).

4. cmix uses two separate PAQ8 branches as internal models. One branch is paq8hp12any (an early Hutter Prize submission). The other is a hybrid of several PAQ8 programs (i.e. a custom version of PAQ8 unique to cmix).

5. The LSTM component is unique to cmix.

6. The use of mod_ppmd, a PPM implementation that produces byte-level predictions. The predictions are used as input to the LSTM.

Acknowledgements

Thanks to AI Grant for funding cmix.

cmix uses ideas and source code from many people in the data compression community. Here are some of the major contributors: