Is it possible to add a unique constraint on compressed chunks?

I have a scenario where I need to add a unique constraint to an existing hypertable in TimescaleDB, which currently has compressed data. The process I am following now involves decompressing all the data before adding the unique constraint and then compressing it again.

Given the large volume of data, this approach is causing us to run out of memory during decompression.
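
Roughly, the current process looks like this (a minimal sketch; `conditions`, `device_id`, and `ts` are placeholder names, not our real schema):

```sql
-- 1. Decompress every chunk of the hypertable
--    (this is the step that exhausts memory/disk for a large table)
SELECT decompress_chunk(c, true)   -- true = if_compressed, skips chunks that are not compressed
FROM show_chunks('conditions') AS c;

-- 2. Add the unique constraint
--    (on a hypertable it must include the partitioning column, here "ts")
ALTER TABLE conditions
  ADD CONSTRAINT conditions_device_ts_uniq UNIQUE (device_id, ts);

-- 3. Recompress all chunks
SELECT compress_chunk(c, true)     -- true = if_not_compressed, skips chunks already compressed
FROM show_chunks('conditions') AS c;
```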

TimescaleDB 2.14 | db<>fiddle (dbfiddle.uk)

Is there an alternative method to add the unique constraint without decompressing the data, considering the memory limitations we’re facing?

Hi Mohammed, I don’t think we have any alternative method. But we are tracking this issue to some extent: [Enhancement]: Remove block for unique index creation and constraint creation on compressed hypertables · Issue #5595 · timescale/timescaledb · GitHub


Thank you @jonatasdp for the quick update.

@jonatasdp we are in big trouble: decompression is causing the entire disk to run out of space. Is there any plan to fix this?

@Mohammed_Iliyas_pate can you share more details? If your chunk needs disk space, it has to come from the disk. Is it requiring more than the compressed + uncompressed size of the chunks themselves?
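
To gauge that, you could compare the compressed vs. uncompressed size per chunk, something like this (just a sketch; `conditions` is a placeholder hypertable name):

```sql
-- Estimate how much extra space a full decompression would need
SELECT chunk_name,
       pg_size_pretty(after_compression_total_bytes)  AS compressed_size,
       pg_size_pretty(before_compression_total_bytes) AS uncompressed_size
FROM chunk_compression_stats('conditions')
WHERE compression_status = 'Compressed'
ORDER BY before_compression_total_bytes DESC;
```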

I have a table containing compressed data (approximately 200+ GB in size). Due to a design change, I need to drop the existing constraint and introduce a new unique constraint on this table. However, when I attempt to create the new constraint, I encounter an error stating that the operation cannot be performed on compressed data. To resolve this, I need to decompress the data, drop the existing primary key constraint, and then add the new constraint. Unfortunately, when I decompress the data, its size increases to over 500 GB, causing the pods to crash. Additionally, increasing disk size in the production environment could be challenging for the customer.

I’d then go in the direction of creating a new hypertable and migrating the data into it with the new constraint + compression enabled. As soon as the migration is done, I’d drop the old data, and keep an extra temporary view that combines data from both sources while the migration isn’t finished.
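
Something along these lines (only a sketch with placeholder names; the batching column and time windows depend on your schema):

```sql
-- 1. New hypertable that already carries the new unique constraint
CREATE TABLE conditions_new (LIKE conditions INCLUDING DEFAULTS);
SELECT create_hypertable('conditions_new', 'ts');
ALTER TABLE conditions_new
  ADD CONSTRAINT conditions_new_device_ts_uniq UNIQUE (device_id, ts);

-- 2. Enable compression on the new table and add a policy
ALTER TABLE conditions_new SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);
SELECT add_compression_policy('conditions_new', INTERVAL '7 days');

-- 3. Copy the data in time-bounded batches, so only a small window of
--    uncompressed rows exists at any time (reading compressed chunks
--    does not rewrite them uncompressed on disk)
INSERT INTO conditions_new
SELECT * FROM conditions
WHERE ts >= '2024-01-01' AND ts < '2024-02-01';
-- ...repeat for the next time window until everything is migrated...
```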

Share the learnings with us later @Mohammed_Iliyas_pate :bowing_man:

Currently, we have rolled back the planned feature because we lacked confidence in the time and space it would require during an upgrade.

Solution: A new table was created with updated constraints and policies. Depending on the timeframe, a union query will be invoked to combine data from both the old and new tables. Once the retention period for the old table expires, we will proceed to delete the old table along with its associated policies.
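
For reference, the moving parts look roughly like this (a sketch with placeholder table names and a hypothetical cutover timestamp):

```sql
-- Temporary union view used while both tables coexist
CREATE VIEW conditions_all AS
SELECT * FROM conditions_new
UNION ALL
SELECT * FROM conditions_old
WHERE ts < '2024-06-01';               -- cutover timestamp (placeholder)

-- The new table carries its own compression/retention policies
SELECT add_retention_policy('conditions_new', INTERVAL '90 days');

-- Once the old table's retention window has passed:
CREATE OR REPLACE VIEW conditions_all AS
SELECT * FROM conditions_new;          -- point readers at the new table only

SELECT remove_compression_policy('conditions_old');
SELECT remove_retention_policy('conditions_old');
DROP TABLE conditions_old;
```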
