Storage tiering for an expanding on-premises implementation of Timescale

We are developing an on-premises implementation of Timescale that captures raw data from a fleet of field-installed instruments. Our retention requirements mean we must keep the data indefinitely, even though older data is queried only infrequently. We therefore need strategies that allow us to grow the storage cost-effectively.

The obvious first step is to implement chunk compression; beyond that, we would also like to understand other strategies that support cost-effective storage growth. The Timescale documentation offers an enticing prospect in Data Tiering, but this is flagged as an early-access feature, and it is not at all clear whether it is available to deployments where the primary installation is on-premises and self-managed. (Is it?)
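For the compression step, this is roughly what we have in mind; the hypertable and column names below are placeholders for our actual schema, and the intervals are examples only:

```sql
-- Enable native compression on the hypertable. The segmentby/orderby
-- settings are illustrative and depend on the real schema and query patterns.
ALTER TABLE sensor_data SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'time DESC'
);

-- Compress chunks once they are older than 30 days (example interval).
SELECT add_compression_policy('sensor_data', INTERVAL '30 days');
```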

Another option for on-premises data tiering seems plausible through careful use of tablespaces (Timescale Documentation | About tablespaces), with different tablespaces set up on different tiers of on-prem storage. However, the documentation on tablespaces is very light, and it is unclear how they might be set up so that, as chunks age, they are automatically moved from faster, more expensive tablespaces to lower-tier ones.
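For context, the kind of setup we are imagining looks something like this; the tablespace names, locations, and hypertable name are placeholders:

```sql
-- Standard PostgreSQL tablespaces on two tiers of local storage
-- (paths are placeholders for the actual mount points).
CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';
CREATE TABLESPACE slow_hdd LOCATION '/mnt/hdd/pgdata';

-- Attach the fast tablespace to the hypertable so newly created chunks
-- are placed on the fast tier.
SELECT attach_tablespace('fast_ssd', 'sensor_data');
```

What is missing is the piece that later moves aged chunks from `fast_ssd` to `slow_hdd` automatically.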

My question therefore is: what are the most proven strategies for effective tiering of hypertable chunks in an ever-expanding, self-managed, on-prem deployment? Is there any documentation that explains these configurations?

Hi Brett, here is an example of a background job that can automatically move chunks to other tablespaces. I think this is the most advanced background-job scenario you'll find around :smiley:
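In outline, it is a user-defined action that walks the chunks of a hypertable and relocates the old ones to a slower tablespace; the hypertable name, tablespace name, and intervals below are placeholders you would adapt to your own setup:

```sql
-- Sketch of a user-defined action that moves chunks older than a configured
-- lag to a slower tablespace. All names in the job config are placeholders.
CREATE OR REPLACE PROCEDURE move_old_chunks(job_id INT, config JSONB)
LANGUAGE plpgsql
AS $$
DECLARE
  ht          REGCLASS := (config ->> 'hypertable')::REGCLASS;
  lag         INTERVAL := (config ->> 'lag')::INTERVAL;
  destination NAME     := config ->> 'tablespace';
  chunk       REGCLASS;
BEGIN
  IF ht IS NULL OR lag IS NULL OR destination IS NULL THEN
    RAISE EXCEPTION 'config must contain hypertable, lag and tablespace';
  END IF;

  -- Only touch chunks that are old enough and not already on the target tier.
  FOR chunk IN
    SELECT c
    FROM show_chunks(ht, older_than => lag) AS c
    JOIN pg_class pgc ON pgc.oid = c
    LEFT JOIN pg_tablespace pgts ON pgts.oid = pgc.reltablespace
    WHERE COALESCE(pgts.spcname, 'pg_default') IS DISTINCT FROM destination
  LOOP
    RAISE NOTICE 'Moving chunk % to tablespace %', chunk, destination;
    -- Moves the chunk's heap; chunk indexes can be relocated the same way
    -- with ALTER INDEX ... SET TABLESPACE if you want them on the slow tier too.
    EXECUTE format('ALTER TABLE %s SET TABLESPACE %I', chunk, destination);
  END LOOP;
END
$$;

-- Schedule the action to run daily; the config values are examples only.
SELECT add_job(
  'move_old_chunks',
  '1 day',
  config => '{"hypertable": "sensor_data", "lag": "90 days", "tablespace": "slow_hdd"}'
);
```

You can test it outside the schedule with `CALL run_job(<job_id>);` using the job id returned by `add_job`.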


Very helpful. Thank you.
