I am running an upgrade scenario where I am adding a column to an existing table. It works fine for new installations where there is no data and no chunks, but in the upgrade scenario I am getting the errors mentioned below.
ALTER TABLE timeseries SET (timescaledb.compress=false);
ALTER TABLE eventvalues ADD COLUMN IF NOT EXISTS insertiontime timestamp DEFAULT CURRENT_TIMESTAMP;
UPDATE timeseries SET insertiontime = NULL WHERE insertiontime IS DISTINCT FROM NULL;
Existing compression policy:
SELECT set_chunk_time_interval('timeseries', INTERVAL '8 hours');
ALTER TABLE timeseries SET (timescaledb.compress, timescaledb.compress_segmentby = 'objectid', timescaledb.compress_orderby = 'timestamp DESC');
SELECT add_compression_policy('timeseries', INTERVAL '8 hours');
Error:
ERROR: cannot change configuration on already compressed chunks
DETAIL: There are compressed chunks that prevent changing the existing compression configuration.
CONTEXT: SQL statement "ALTER TABLE timeseries SET (timescaledb.compress=false)"
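For reference, with the compression policy above in place, this error can usually be cleared by removing the policy and decompressing the chunks before changing the settings. A minimal sketch, using the table name from the thread:

-- Remove the policy first so it cannot recompress chunks mid-operation.
SELECT remove_compression_policy('timeseries');

-- Decompress every compressed chunk; if_compressed => true skips
-- chunks that are already uncompressed instead of raising an error.
SELECT decompress_chunk(c, if_compressed => true) FROM show_chunks('timeseries') c;

-- The compression configuration can now be changed.
ALTER TABLE timeseries SET (timescaledb.compress=false);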
If chunks have already been created and we want to add a column to the hypertable, is it required to drop the hypertable in order to add the new column?
Yes, you can add the column without dropping the hypertable. Input:
DROP TABLE data CASCADE;  -- reset the example table (demo setup only)
CREATE TABLE data (
time TIMESTAMP with time zone NOT NULL,
value FLOAT NOT NULL
);
SELECT create_hypertable('data', 'time');
INSERT INTO data (time, value) VALUES
('2023-03-28 12:00:00', 1.0),
('2023-03-28 12:01:00', 2.0),
('2023-03-28 12:02:00', 3.0),
('2023-03-28 12:03:00', 4.0),
('2023-03-28 12:04:00', 5.0),
('2023-03-28 12:05:00', 6.0);
ALTER TABLE data SET (
timescaledb.compress = true
);
SELECT compress_chunk(i) FROM show_chunks('data') i;
ALTER TABLE data ADD COLUMN new_column INTEGER;
SELECT * FROM data;
Output:
DROP TABLE
CREATE TABLE
┌─────────────────────┐
│  create_hypertable  │
├─────────────────────┤
│ (802,public,data,t) │
└─────────────────────┘
(1 row)
INSERT 0 6
ALTER TABLE
┌────────────────────────────────────────────┐
│               compress_chunk               │
├────────────────────────────────────────────┤
│ _timescaledb_internal._hyper_802_760_chunk │
└────────────────────────────────────────────┘
(1 row)
ALTER TABLE
┌────────────────────────┬───────┬────────────┐
│          time          │ value │ new_column │
├────────────────────────┼───────┼────────────┤
│ 2023-03-28 12:05:00-03 │     6 │            │
│ 2023-03-28 12:04:00-03 │     5 │            │
│ 2023-03-28 12:03:00-03 │     4 │            │
│ 2023-03-28 12:02:00-03 │     3 │            │
│ 2023-03-28 12:01:00-03 │     2 │            │
│ 2023-03-28 12:00:00-03 │     1 │            │
└────────────────────────┴───────┴────────────┘
(6 rows)
@jonatasdp But with the given approach there will be data loss, since we have to create a new hypertable. Is there any way to add a new column to compressed chunks without disturbing the existing data?
I don't think TimescaleDB can handle chunks from the same hypertable using different schemas.
Is it too slow to add the new column?
How would it lose data? I don't understand how you are going to lose data.
Please ignore the previous comment.
While running the command "ALTER TABLE data ADD COLUMN new_column INTEGER;" I am getting the following error:
ERROR: cannot add column with non-constant default expression to a hypertable that has compression enabled
SQL state: 0A000
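Before upgrading as suggested in the reply below, it can help to confirm which extension version is actually installed; a minimal check using the standard PostgreSQL catalogs:

-- Version of timescaledb currently installed in this database.
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';

-- Installed version vs. the newest version available on the server.
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'timescaledb';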
Please update your timescaledb extension to one of the latest versions, where this is allowed. I have 2.9 here and it is already allowed.
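A minimal sketch of the upgrade itself, assuming the newer TimescaleDB packages are already installed on the host (the pinned version string is only illustrative):

-- ALTER EXTENSION ... UPDATE should be the first command in a fresh session.
ALTER EXTENSION timescaledb UPDATE;

-- Or pin an explicit target version (illustrative version string):
-- ALTER EXTENSION timescaledb UPDATE TO '2.9.3';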
Let me try and come back. Thank you @jonatasdp