The Teradata DBMS has long supported compression in a growing number of forms.

Version 5.0 introduced Multi-Value Compression (MVC), version 13.10 introduced Algorithmic (ALC) and Block Level (BLC) compression, and version 14.0 introduced multiple automatic compression options for Columnar tables.
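For reference, MVC and ALC are both declared at the column level in the table DDL, while BLC is applied at the data-block level (via the table-level BLOCKCOMPRESSION option or system settings). The sketch below shows the general shape of each; the table and column names are hypothetical, and the ALC function pair shown is one of the standard Teradata-supplied ones:

```sql
CREATE TABLE sales_demo
( sale_id  INTEGER
, region   VARCHAR(10)
           COMPRESS ('NORTH','SOUTH','EAST','WEST')       -- MVC: the listed values are stored once, not per row
, notes    VARCHAR(1000) CHARACTER SET UNICODE
           COMPRESS USING TD_SYSFNLIB.TransUnicodeToUTF8  -- ALC: a compress/decompress function pair
           DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode
)
PRIMARY INDEX (sale_id);
```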

These can produce disk-space savings and associated performance gains; however, implementing Teradata compression with MVC or ALC takes analyst time, which has a cost associated with it.

It should also be remembered that data demographics change over time, so the analysis behind either of these types of compression needs to be revisited regularly (even if only once a year).

Yes, some tools will help with this, but the point of this article is really to ask whether the time you are spending on compression is worthwhile. Is it giving you benefits commensurate with the costs? Like many things, this is a question of “cost vs. benefit”.

On one recent customer engagement, I produced the following analysis for them:

They have about 98,000 tables.

About 26,000 tables have MVC implemented.

On their system, only 4,000 tables are big enough that a full table scan is estimated to take longer than 1 second – even allowing for a reduced scan rate because of concurrent queries.
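As a rough sketch of how such an estimate can be produced, per-table sizes can be pulled from the DBC.TableSizeV dictionary view and divided by an assumed effective scan rate. The 2 GB/s raw rate and 50% concurrency discount below are illustrative assumptions, not measurements from any particular system:

```sql
-- Tables whose estimated full-table-scan time exceeds 1 second.
-- Assumptions: 2e9 bytes/sec raw scan rate, halved for concurrent workload.
SELECT  DatabaseName
,       TableName
,       SUM(CurrentPerm)               AS total_perm_bytes
,       SUM(CurrentPerm) / (2e9 * 0.5) AS est_scan_seconds
FROM    DBC.TableSizeV
GROUP BY DatabaseName, TableName
HAVING  SUM(CurrentPerm) / (2e9 * 0.5) > 1
ORDER BY est_scan_seconds DESC;
```

Plugging in your own platform's measured scan rate and typical concurrency gives a quick first cut at which tables are worth compression analysis at all.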

I am not recommending that you remove existing MVC compression from small tables. MVC imposes no real CPU overhead, and eliminating compression on many tables would be a lot of effort for negligible, if any, gain. If you have ALC applied to a small table, however, removing it might yield some savings, because ALC does incur CPU overhead on access.

I would recommend that when designing new tables or possibly when making changes to existing tables, the compression setting should be carefully considered.

If that table is likely to be small, I would consider not applying any compression. This is more a change in process than any particular remedial action that needs to be taken.