That will not last very long if you have a busy database doing many writes over the day. MVCC keeps the new and the old version of a row in the table, and the TXID increases with every transaction. At some point the roughly 4 billion transactions are reached, the TXID overflows, and starts again at the beginning. The way transactions work in PostgreSQL, suddenly all data in your database becomes invisible. No one wants that!
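To see why a wraparound is so dangerous, here is a simplified Python sketch of circular transaction id comparison on a 32 bit counter. This is only an illustration of the modulo arithmetic, not PostgreSQL's actual C implementation; the function name is made up:

```python
# Sketch: comparing transaction ids on a 32 bit circle.
# Ids within half the circle (2^31) of each other compare correctly;
# beyond that, the comparison flips - that is the wraparound hazard.
XID_SPACE = 2**32

def xid_precedes(a: int, b: int) -> bool:
    """True if transaction id a counts as "older" than b on the circle."""
    diff = (a - b) % XID_SPACE
    # Interpreting the difference as a signed 32 bit value:
    # a is older if the wrapped difference lands in the upper half.
    return diff >= XID_SPACE // 2

print(xid_precedes(100, 200))            # True: 100 is older than 200
print(xid_precedes(XID_SPACE - 10, 5))   # True: correct even across the wrap
print(xid_precedes(5, 2**31 + 100))      # False: more than 2^31 apart, order flips
```

The last call shows the failure mode: once a row's transaction id falls more than half the circle behind the current counter, it suddenly appears to be "in the future" - and its data becomes invisible.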

To refresh: PostgreSQL currently uses 32 bits for the TXID, which is good for around 4 billion transactions:
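That upper bound is quickly verified (in Python here, rather than psql):

```python
# 32 bit transaction ids: 2^32 possible values, roughly 4 billion
print(2**32)  # 4294967296
```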

Now back to the original problem: 64 bit transaction ids. As we have seen above, it might still make sense to expand the id range beyond the currently used 32 bits.

The problem is not new, and has been discussed a number of times. From the latest discussions it looks like the path forward will be a mix of 32 bit transaction ids and an additional epoch. The underlying problem is that expanding to true 64 bit ids would instantly double the space requirement for the TXID in every single row (tuple header), from 32 bits today to 64 bits. And since it is stored twice, as xmin and xmax, that makes 16 bytes just for the transaction ids.
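The epoch idea can be sketched as simple bit packing: a 64 bit "full" transaction id is the epoch in the upper 32 bits plus the per-epoch 32 bit id in the lower half. The function names below are hypothetical, purely for illustration of the scheme, not PostgreSQL internals:

```python
# Sketch: combining a 32 bit epoch and a 32 bit xid into a 64 bit id.
def full_xid(epoch: int, xid: int) -> int:
    """Pack epoch (upper 32 bits) and xid (lower 32 bits) into one value."""
    return (epoch << 32) | xid

def split_xid(fxid: int) -> tuple:
    """Recover (epoch, xid) from a packed 64 bit id."""
    return fxid >> 32, fxid & 0xFFFFFFFF

fxid = full_xid(epoch=3, xid=123456)
print(fxid)              # 12885025344
print(split_xid(fxid))   # (3, 123456)
```

The appeal of this scheme is that the tuple header can keep storing only the lower 32 bits, while the epoch lives elsewhere - avoiding the 16 byte per-row cost.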

But how long will 64 bit really last? And most importantly, is it enough for the foreseeable future?

To answer this question we first need to find out how many transaction ids a database will use. There are a number of blog posts which address the general problem, like "how many transactions can a database do these days" or "how did transaction performance in PostgreSQL improve over time"?

From these blog posts it seems like the aim for the near future is "1 million transactions per second". Note that this covers both read and write transactions, and the second blog post is more in the "several thousand write transactions per second" range. But nevertheless let's just use this number and see if it holds up.

The number of 64 bit transaction ids is insanely large:

fosdem=# SELECT 2::NUMERIC^64::NUMERIC;
               ?column?
---------------------------------------
 18446744073709551616.0000000000000000
(1 row)

Eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred sixteen.

Assuming that 1M transaction ids are used per second, this leaves:

fosdem=# SELECT (2::NUMERIC^64::NUMERIC) / (10::NUMERIC^6::NUMERIC) / 2::NUMERIC;
            ?column?
--------------------------------
 9223372036854.7758080000000000
(1 row)

Since I want to know how long the 64 bit transaction ids will last, I not only need to divide by 1M (10^6), but also by 2 - because PostgreSQL splits the transaction id space into a visible and an invisible half. The result is still large; let's see how much time that is:

fosdem=# SELECT 3600*24*365;
 ?column?
----------
 31536000
(1 row)

A year has 31536000 seconds (ignoring leap years).

fosdem=# SELECT 9223372036854.775808::NUMERIC / 31536000::NUMERIC;
      ?column?
---------------------
 292471.208677536016
(1 row)

More than 292 thousand years, doing 1M write transactions every second.
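The whole calculation can also be run end to end as a single sanity check, here in Python:

```python
# End-to-end version of the arithmetic above: 2^64 ids, only half of
# them usable, burned at 1 million transactions per second.
ids = 2**64
usable = ids // 2                    # visible half of the id space
seconds = usable / 10**6             # at 1M transactions per second
years = seconds / (3600 * 24 * 365)  # ignoring leap years
print(round(years))  # 292471
```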

Now that assumes linear usage of transaction ids. Using an epoch might change that, depending on the implementation. But it does not seem as if the database will run out of 64 bit transaction ids anytime soon.