We want to migrate old data (currently in plain PostgreSQL tables, no hypertables) into our Timescale instance, where compression is enabled. As a result, some of the chunks in the target hypertable are already compressed.
While the approach described here seems to work fine, we face two challenges: there is a lot of data to migrate, and a lot of the existing data is already compressed.
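For context, this is roughly how we check which chunks overlap the backfill window and whether they are already compressed (the hypertable name `metrics` and the cutoff date are placeholders for our actual setup):

```sql
-- List chunks that overlap the backfill window and their compression state.
-- 'metrics' and the cutoff timestamp are placeholders.
SELECT chunk_schema, chunk_name, range_start, range_end, is_compressed
FROM timescaledb_information.chunks
WHERE hypertable_name = 'metrics'
  AND range_start < '2021-01-01'::timestamptz
ORDER BY range_start;
```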
I did some testing: even with around 2 billion rows already in place, reading with the time_bucket
function while we insert the old data seems to work, though with noticeably increased query duration. So the approach described in the documentation would be our way forward for now.
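Concretely, the workflow we plan to follow looks roughly like this, a sketch based on our reading of the docs rather than a verbatim copy, again assuming a hypertable `metrics` and a staging table `legacy_metrics` as placeholder names:

```sql
-- 1. Decompress the chunks that overlap the backfill window.
SELECT decompress_chunk(c, if_compressed => true)
FROM show_chunks('metrics', older_than => '2021-01-01'::timestamptz) AS c;

-- 2. Bulk-insert the historical rows into the hypertable.
INSERT INTO metrics (ts, device_id, value)
SELECT ts, device_id, value
FROM legacy_metrics
WHERE ts < '2021-01-01';

-- 3. Recompress the affected chunks once the backfill is complete.
SELECT compress_chunk(c, if_not_compressed => true)
FROM show_chunks('metrics', older_than => '2021-01-01'::timestamptz) AS c;
```

If I understand correctly, it may also make sense to pause the compression policy during the backfill (e.g. via `alter_job(job_id, scheduled => false)`) so that chunks are not recompressed mid-load.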
Has anyone had to deal with such a scenario and can share some insights?
Are there better approaches for migrating old data into a hypertable with compression enabled?
Our team is quite new to Timescale, so any input is appreciated.