Hi,
we are using TimescaleDB 2.14.2 on PostgreSQL 15.6 on Red Hat 9.3, self-hosted on premises.
I have been reading the Managing Transaction Wraparound article, and by executing the SELECT statements from the article I found that a lot of my tables are approaching forced autovacuum. All of these tables are old TimescaleDB chunk tables.
Because of the nature of our database we have to do some updates on chunk tables, but we do this only on current chunks. Autovacuum is turned on, but it is triggered only by threshold, so some chunks may never hit the autovacuum threshold and old tuples in those chunks may never be cleaned up. Those old chunks still have old transaction IDs stored in the table metadata, and now, after a few years, they are approaching an age of 200 million transactions. Because the default PostgreSQL parameter autovacuum_freeze_max_age is set to 200 million, we are hitting forced (anti-wraparound) autovacuum at random times during the day. What I don’t want is for this forced autovacuum to fire at the busiest time of the day and slow the server down even further.
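For context, the kind of query I have been running to find the affected chunks looks roughly like this (a sketch based on the standard pg_class / age() approach; the 150-million cutoff is just my own margin below the default 200-million limit):

```sql
-- List TimescaleDB chunk tables whose relfrozenxid age is approaching
-- autovacuum_freeze_max_age (default: 200 million transactions)
SELECT c.oid::regclass AS chunk,
       age(c.relfrozenxid) AS xid_age
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = '_timescaledb_internal'   -- schema holding chunk tables
  AND age(c.relfrozenxid) > 150000000       -- within 50M of the forced limit
ORDER BY age(c.relfrozenxid) DESC;
```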
What is the recommended approach for dealing with this forced autovacuum of old chunks?
I was thinking of:
- Increasing the autovacuum_freeze_max_age PostgreSQL parameter.
- Writing a script that vacuums the old chunks at the most appropriate time of day.
- Compressing those old chunks, but I would like to avoid this, because from time to time they are still heavily used by SELECTs and I don’t want decompression to have a negative effect on database performance.
Is there something else that is recommended? I would like to touch these old chunks as little as possible.
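For the scripted approach, what I have in mind is something like the following, run from psql during an off-peak window (a sketch; the 150-million cutoff is my assumption, and \gexec executes each generated VACUUM statement individually):

```sql
-- Freeze the oldest chunks manually during a quiet window, so the
-- anti-wraparound autovacuum never has to fire on them at peak time.
SELECT format('VACUUM (FREEZE) %s', c.oid::regclass)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = '_timescaledb_internal'
  AND age(c.relfrozenxid) > 150000000
ORDER BY age(c.relfrozenxid) DESC
\gexec
```

Once a chunk is frozen this way and never updated again, it should stay below the autovacuum_freeze_max_age limit, as I understand it.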
Thanks