Help moving DB to new server

I’m trying to migrate an old host running PostgreSQL 14 with TimescaleDB 2.6.1 to a new server with PostgreSQL 16 and TimescaleDB 2.15.1.
I’ve already tried cloning the old server to a temporary instance and upgrading that instance, but it always fails.
I also tried the online PostgreSQL upgrade procedure without success.
Then I tried bringing up the new instance with the new versions and doing a backup/restore of the DBs, but the restore also fails, throwing errors about missing TimescaleDB keys and relations and foreign key violations.

Can someone tell me what steps I should follow to migrate these DBs? Thanks.

EDIT:
During the dump, these warnings are shown:

....
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   hypertable
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump:   chunk
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: saving encoding = SQL_ASCII
pg_dump: saving standard_conforming_strings = on
pg_dump: saving search_path = 
pg_dump: saving database definition
pg_dump: dumping contents of table "_timescaledb_cache.cache_inval_bgw_job"
pg_dump: dumping contents of table "_timescaledb_cache.cache_inval_extension"
pg_dump: dumping contents of table "_timescaledb_cache.cache_inval_hypertable"
pg_dump: dumping contents of table "_timescaledb_catalog.hypertable"
pg_dump: dumping contents of table "_timescaledb_catalog.chunk"
pg_dump: dumping contents of table "_timescaledb_catalog.dimension"
pg_dump: dumping contents of table "_timescaledb_catalog.dimension_slice"
pg_dump: dumping contents of table "_timescaledb_catalog.chunk_constraint"
pg_dump: dumping contents of table "_timescaledb_catalog.chunk_data_node"
pg_dump: dumping contents of table "_timescaledb_catalog.chunk_index"
pg_dump: dumping contents of table "_timescaledb_catalog.compression_chunk_size"
pg_dump: dumping contents of table "_timescaledb_catalog.continuous_agg"
pg_dump: dumping contents of table "_timescaledb_catalog.continuous_aggs_bucket_function"
pg_dump: dumping contents of table "_timescaledb_catalog.continuous_aggs_hypertable_invalidation_log"
pg_dump: dumping contents of table "_timescaledb_catalog.continuous_aggs_invalidation_threshold"
pg_dump: dumping contents of table "_timescaledb_catalog.continuous_aggs_materialization_invalidation_log"
pg_dump: dumping contents of table "_timescaledb_catalog.hypertable_compression"
pg_dump: dumping contents of table "_timescaledb_catalog.hypertable_data_node"
pg_dump: dumping contents of table "_timescaledb_catalog.metadata"
pg_dump: dumping contents of table "_timescaledb_catalog.remote_txn"
pg_dump: dumping contents of table "_timescaledb_catalog.tablespace"
pg_dump: dumping contents of table "_timescaledb_config.bgw_job"
pg_dump: dumping contents of table "_timescaledb_internal._hyper_1_1_chunk"
pg_dump: dumping contents of table "_timescaledb_internal._hyper_1_2_chunk"
pg_dump: dumping contents of table "public.ltss"
pg_dump: NOTICE:  hypertable data are in the chunks, no data will be copied
DETAIL:  Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT:  Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.

Then the restore (this is a secondary testing DB):

$ pg_restore -Fc -v -d homeassistant /var/lib/postgresql/16/main/dump/homeassistant_20240701.sql
pg_restore: connecting to database for restore
pg_restore: creating EXTENSION "timescaledb"
pg_restore: creating COMMENT "EXTENSION timescaledb"
pg_restore: creating TABLE "public.ltss"
pg_restore: creating TABLE "_timescaledb_internal._hyper_1_1_chunk"
pg_restore: creating TABLE "_timescaledb_internal._hyper_1_2_chunk"
pg_restore: creating SEQUENCE "public.ltss_id_seq"
pg_restore: creating SEQUENCE OWNED BY "public.ltss_id_seq"
pg_restore: creating DEFAULT "_timescaledb_internal._hyper_1_1_chunk id"
pg_restore: creating DEFAULT "_timescaledb_internal._hyper_1_2_chunk id"
pg_restore: creating DEFAULT "public.ltss id"
pg_restore: processing data for table "_timescaledb_cache.cache_inval_bgw_job"
pg_restore: processing data for table "_timescaledb_cache.cache_inval_extension"
pg_restore: processing data for table "_timescaledb_cache.cache_inval_hypertable"
pg_restore: processing data for table "_timescaledb_catalog.hypertable"
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 3543; 0 79944 TABLE DATA hypertable postgres
pg_restore: error: could not execute query: ERROR:  column "replication_factor" of relation "hypertable" does not exist
Command was: COPY _timescaledb_catalog.hypertable (id, schema_name, table_name, associated_schema_name, associated_table_prefix, num_dimensions, chunk_sizing_func_schema, chunk_sizing_func_name, chunk_target_size, compression_state, compressed_hypertable_id, replication_factor) FROM stdin;
pg_restore: processing data for table "_timescaledb_catalog.chunk"
pg_restore: from TOC entry 3550; 0 80025 TABLE DATA chunk postgres
pg_restore: error: COPY failed for table "chunk": ERROR:  null value in column "creation_time" of relation "chunk" violates not-null constraint
DETAIL:  Failing row contains (1, 1, _timescaledb_internal, _hyper_1_1_chunk, null, f, 0, f, null).
CONTEXT:  COPY chunk, line 1: "1        1       _timescaledb_internal   _hyper_1_1_chunk        \N      f       0"
pg_restore: processing data for table "_timescaledb_catalog.dimension"
pg_restore: from TOC entry 3546; 0 79992 TABLE DATA dimension postgres
pg_restore: error: COPY failed for table "dimension": ERROR:  insert or update on table "dimension" violates foreign key constraint "dimension_hypertable_id_fkey"
DETAIL:  Key (hypertable_id)=(1) is not present in table "hypertable".
pg_restore: processing data for table "_timescaledb_catalog.dimension_slice"
pg_restore: from TOC entry 3548; 0 80010 TABLE DATA dimension_slice postgres
pg_restore: error: COPY failed for table "dimension_slice": ERROR:  insert or update on table "dimension_slice" violates foreign key constraint "dimension_slice_dimension_id_fkey"
DETAIL:  Key (dimension_id)=(1) is not present in table "dimension".
pg_restore: processing data for table "_timescaledb_catalog.chunk_constraint"
pg_restore: from TOC entry 3552; 0 80047 TABLE DATA chunk_constraint postgres
pg_restore: error: COPY failed for table "chunk_constraint": ERROR:  insert or update on table "chunk_constraint" violates foreign key constraint "chunk_constraint_chunk_id_fkey"
DETAIL:  Key (chunk_id)=(1) is not present in table "chunk".
pg_restore: processing data for table "_timescaledb_catalog.chunk_data_node"
pg_restore: from TOC entry 3555; 0 80080 TABLE DATA chunk_data_node postgres
pg_restore: error: could not execute query: ERROR:  relation "_timescaledb_catalog.chunk_data_node" does not exist
Command was: COPY _timescaledb_catalog.chunk_data_node (chunk_id, node_chunk_id, node_name) FROM stdin;
pg_restore: processing data for table "_timescaledb_catalog.chunk_index"
pg_restore: from TOC entry 3554; 0 80064 TABLE DATA chunk_index postgres
pg_restore: error: COPY failed for table "chunk_index": ERROR:  insert or update on table "chunk_index" violates foreign key constraint "chunk_index_chunk_id_fkey"
DETAIL:  Key (chunk_id)=(1) is not present in table "chunk".
pg_restore: processing data for table "_timescaledb_catalog.compression_chunk_size"
pg_restore: processing data for table "_timescaledb_catalog.continuous_agg"
pg_restore: from TOC entry 3559; 0 80142 TABLE DATA continuous_agg postgres
pg_restore: error: could not execute query: ERROR:  column "bucket_width" of relation "continuous_agg" does not exist
Command was: COPY _timescaledb_catalog.continuous_agg (mat_hypertable_id, raw_hypertable_id, user_view_schema, user_view_name, partial_view_schema, partial_view_name, bucket_width, direct_view_schema, direct_view_name, materialized_only) FROM stdin;
pg_restore: processing data for table "_timescaledb_catalog.continuous_aggs_bucket_function"
pg_restore: from TOC entry 3560; 0 80163 TABLE DATA continuous_aggs_bucket_function postgres
pg_restore: error: could not execute query: ERROR:  column "experimental" of relation "continuous_aggs_bucket_function" does not exist
Command was: COPY _timescaledb_catalog.continuous_aggs_bucket_function (mat_hypertable_id, experimental, name, bucket_width, origin, timezone) FROM stdin;
pg_restore: processing data for table "_timescaledb_catalog.continuous_aggs_hypertable_invalidation_log"
pg_restore: processing data for table "_timescaledb_catalog.continuous_aggs_invalidation_threshold"
pg_restore: processing data for table "_timescaledb_catalog.continuous_aggs_materialization_invalidation_log"
pg_restore: processing data for table "_timescaledb_catalog.hypertable_compression"
pg_restore: from TOC entry 3564; 0 80205 TABLE DATA hypertable_compression postgres
pg_restore: error: could not execute query: ERROR:  relation "_timescaledb_catalog.hypertable_compression" does not exist
Command was: COPY _timescaledb_catalog.hypertable_compression (hypertable_id, attname, compression_algorithm_id, segmentby_column_index, orderby_column_index, orderby_asc, orderby_nullsfirst) FROM stdin;
pg_restore: processing data for table "_timescaledb_catalog.hypertable_data_node"
pg_restore: from TOC entry 3544; 0 79965 TABLE DATA hypertable_data_node postgres
pg_restore: error: could not execute query: ERROR:  relation "_timescaledb_catalog.hypertable_data_node" does not exist
Command was: COPY _timescaledb_catalog.hypertable_data_node (hypertable_id, node_hypertable_id, node_name, block_chunks) FROM stdin;
pg_restore: processing data for table "_timescaledb_catalog.metadata"
pg_restore: processing data for table "_timescaledb_catalog.remote_txn"
pg_restore: from TOC entry 3566; 0 80239 TABLE DATA remote_txn postgres
pg_restore: error: could not execute query: ERROR:  relation "_timescaledb_catalog.remote_txn" does not exist
Command was: COPY _timescaledb_catalog.remote_txn (data_node_name, remote_transaction_id) FROM stdin;
pg_restore: processing data for table "_timescaledb_catalog.tablespace"
pg_restore: processing data for table "_timescaledb_config.bgw_job"
pg_restore: processing data for table "_timescaledb_internal._hyper_1_1_chunk"
pg_restore: processing data for table "_timescaledb_internal._hyper_1_2_chunk"


pg_restore: processing data for table "public.ltss"
pg_restore: executing SEQUENCE SET chunk_constraint_name
pg_restore: executing SEQUENCE SET chunk_id_seq
pg_restore: executing SEQUENCE SET dimension_id_seq
pg_restore: executing SEQUENCE SET dimension_slice_id_seq
pg_restore: executing SEQUENCE SET hypertable_id_seq
pg_restore: executing SEQUENCE SET bgw_job_id_seq
pg_restore: executing SEQUENCE SET ltss_id_seq
pg_restore: creating CONSTRAINT "_timescaledb_internal._hyper_1_1_chunk 1_1_ltss_pkey"

Those errors are similar to the ones shown during the in-place upgrade.

The restore ran for 4 hours and then failed because the connection to the server was closed:

pg_restore: from TOC entry 3678; 2606 82713 CONSTRAINT _hyper_1_2_chunk 2_2_ltss_pkey postgres
pg_restore: error: could not execute query: no connection to the server
Command was: ALTER TABLE ONLY _timescaledb_internal._hyper_1_2_chunk
    ADD CONSTRAINT "2_2_ltss_pkey" PRIMARY KEY (id, "time");


pg_restore: creating CONSTRAINT "public.ltss ltss_pkey"
pg_restore: from TOC entry 3669; 2606 81666 CONSTRAINT ltss ltss_pkey postgres
pg_restore: error: could not execute query: no connection to the server
Command was: ALTER TABLE ONLY public.ltss
    ADD CONSTRAINT ltss_pkey PRIMARY KEY (id, "time");


pg_restore: creating INDEX "_timescaledb_internal._hyper_1_1_chunk_ix_ltss_state"
pg_restore: from TOC entry 3673; 1259 81683 INDEX _hyper_1_1_chunk_ix_ltss_state postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX _hyper_1_1_chunk_ix_ltss_state ON _timescaledb_internal._hyper_1_1_chunk USING btree (state);


pg_restore: creating INDEX "_timescaledb_internal._hyper_1_1_chunk_ltss_attributes_idx"
pg_restore: from TOC entry 3674; 1259 81682 INDEX _hyper_1_1_chunk_ltss_attributes_idx postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX _hyper_1_1_chunk_ltss_attributes_idx ON _timescaledb_internal._hyper_1_1_chunk USING gin (attributes);


pg_restore: creating INDEX "_timescaledb_internal._hyper_1_1_chunk_ltss_entityid_time_composite_idx"
pg_restore: from TOC entry 3675; 1259 81681 INDEX _hyper_1_1_chunk_ltss_entityid_time_composite_idx postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX _hyper_1_1_chunk_ltss_entityid_time_composite_idx ON _timescaledb_internal._hyper_1_1_chunk USING btree (entity_id, "time" DESC);


pg_restore: creating INDEX "_timescaledb_internal._hyper_1_1_chunk_ltss_time_idx"
pg_restore: from TOC entry 3676; 1259 81684 INDEX _hyper_1_1_chunk_ltss_time_idx postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX _hyper_1_1_chunk_ltss_time_idx ON _timescaledb_internal._hyper_1_1_chunk USING btree ("time" DESC);


pg_restore: creating INDEX "_timescaledb_internal._hyper_1_2_chunk_ix_ltss_state"
pg_restore: from TOC entry 3679; 1259 82716 INDEX _hyper_1_2_chunk_ix_ltss_state postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX _hyper_1_2_chunk_ix_ltss_state ON _timescaledb_internal._hyper_1_2_chunk USING btree (state);


pg_restore: creating INDEX "_timescaledb_internal._hyper_1_2_chunk_ltss_attributes_idx"
pg_restore: from TOC entry 3680; 1259 82715 INDEX _hyper_1_2_chunk_ltss_attributes_idx postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX _hyper_1_2_chunk_ltss_attributes_idx ON _timescaledb_internal._hyper_1_2_chunk USING gin (attributes);


pg_restore: creating INDEX "_timescaledb_internal._hyper_1_2_chunk_ltss_entityid_time_composite_idx"
pg_restore: from TOC entry 3681; 1259 82714 INDEX _hyper_1_2_chunk_ltss_entityid_time_composite_idx postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX _hyper_1_2_chunk_ltss_entityid_time_composite_idx ON _timescaledb_internal._hyper_1_2_chunk USING btree (entity_id, "time" DESC);


pg_restore: creating INDEX "_timescaledb_internal._hyper_1_2_chunk_ltss_time_idx"
pg_restore: from TOC entry 3682; 1259 82717 INDEX _hyper_1_2_chunk_ltss_time_idx postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX _hyper_1_2_chunk_ltss_time_idx ON _timescaledb_internal._hyper_1_2_chunk USING btree ("time" DESC);


pg_restore: creating INDEX "public.ix_ltss_state"
pg_restore: from TOC entry 3665; 1259 81669 INDEX ix_ltss_state postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX ix_ltss_state ON public.ltss USING btree (state);


pg_restore: creating INDEX "public.ltss_attributes_idx"
pg_restore: from TOC entry 3666; 1259 81668 INDEX ltss_attributes_idx postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX ltss_attributes_idx ON public.ltss USING gin (attributes);


pg_restore: creating INDEX "public.ltss_entityid_time_composite_idx"
pg_restore: from TOC entry 3667; 1259 81667 INDEX ltss_entityid_time_composite_idx postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX ltss_entityid_time_composite_idx ON public.ltss USING btree (entity_id, "time" DESC);


pg_restore: creating INDEX "public.ltss_time_idx"
pg_restore: from TOC entry 3670; 1259 81671 INDEX ltss_time_idx postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE INDEX ltss_time_idx ON public.ltss USING btree ("time" DESC);


pg_restore: creating TRIGGER "public.ltss ts_insert_blocker"
pg_restore: from TOC entry 3683; 2620 81670 TRIGGER ltss ts_insert_blocker postgres
pg_restore: error: could not execute query: no connection to the server
Command was: CREATE TRIGGER ts_insert_blocker BEFORE INSERT ON public.ltss FOR EACH ROW EXECUTE FUNCTION _timescaledb_internal.insert_blocker();


pg_restore: error: could not execute query: no connection to the server
Command was: -- Completed on 2024-07-01 15:21:44 CEST

pg_restore: error: could not execute query: no connection to the server
Command was: --
-- PostgreSQL database dump complete
--

pg_restore: warning: errors ignored on restore: 30

replication_factor is a legacy column that is no longer available in the new versions.
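As a generic sanity check (not something run in this thread), you can compare the TimescaleDB versions on the two servers. The documented pg_dump/pg_restore path, wrapped in timescaledb_pre_restore() / timescaledb_post_restore(), expects the same extension version on both sides, which is why restoring a 2.6.1 dump into 2.15.1 catalogs fails like this:

SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'timescaledb';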

If it’s a single hypertable, can you export it to CSV and load it on the other side with timescaledb-parallel-copy?
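For reference, something like this might work (host, database, file name and worker count are just an example; double-check the flags against the tool’s help):

$ timescaledb-parallel-copy \
    --connection "host=newserver user=postgres" \
    --db-name homeassistant \
    --table ltss \
    --file ltss.csv \
    --workers 4 \
    --copy-options "CSV"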

In this DB there’s only one hypertable; the main DB has a good number of them.
What would the procedure be? How would the schema be created?
Please excuse my lack of knowledge; I’ve worked with MSSQL all my life.
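For context, this is roughly what I imagine recreating the schema on the new server would look like for ltss (column types guessed from the indexes in the restore log, please correct me):

CREATE TABLE public.ltss (
    id BIGSERIAL,
    "time" TIMESTAMPTZ NOT NULL,
    entity_id TEXT,
    state TEXT,
    attributes JSONB,
    PRIMARY KEY (id, "time")
);
SELECT create_hypertable('public.ltss', 'time');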

I’ve tried exporting hypertables to CSV, but the result is always empty.
The jobs show: "hypertable data are in the chunks, no data will be copied".

Could you please tell me how to export hypertables to CSV?
Thanks

What tool do you have?

Can you use psql?

Exporting your data with psql can be as simple as \copy:

\copy (SELECT * FROM <TABLE_NAME>) TO '<TABLE_NAME>.csv' CSV
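And on the new server the matching import could be as simple as the following, assuming the hypertable already exists there:

\copy <TABLE_NAME> FROM '<TABLE_NAME>.csv' CSV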