I opened this issue on pgbackrest (pgbackrest/pgbackrest#2134), it’s not very clear to me.
I have already configured pgbackrest on a Postgres instance and I take backups there. Can I take those backups and restore them on another Postgres instance that is not connected to the first one in any way?
Hi @jonatasdp, actually no, it’s still not clear to me how to perform a Timescale restore when running in Docker: since Postgres is the container’s main process, stopping it kills the container too, and you have to restart it.
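One common workaround for the “restore kills the container” problem is to start the container with an overridden entrypoint, so Postgres is not PID 1 while the restore runs. This is only a sketch under assumptions not stated in the thread: the image name, the volume name `pgdata`, and the stanza name `main` are all placeholders.

```
# Hypothetical names -- adjust image, volume, and stanza for your setup.
# Start the container with a no-op entrypoint so stopping Postgres
# does not terminate the container:
docker run -d --name ts-restore \
  -v pgdata:/var/lib/postgresql/data \
  --entrypoint sleep \
  timescale/timescaledb:latest-pg15 infinity

# Run the restore inside the container while Postgres is down:
docker exec -u postgres ts-restore \
  pgbackrest --stanza=main restore --delta
```

Once the restore finishes, you can remove this temporary container and start the normal one against the restored volume.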
For now I was just trying to connect two Timescale servers: one hosting the Timescale instance along with pgbackrest, which takes the backups, and the other where I would like to restore the first database.
At the moment I haven’t considered using cloud repositories to store the backups; I’m saving everything locally, and I was figuring out how to put the two servers in communication via SSH so I can then perform a restore.
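For the SSH setup between the two servers, a minimal sketch of the pgBackRest configuration on the *restore* server could look like the following. The hostname, SSH user, stanza name, and data path are all assumptions for illustration, not values from this thread:

```ini
; /etc/pgbackrest/pgbackrest.conf on the server where you want to restore
; (hypothetical hostnames and paths -- adjust to your environment)
[global]
repo1-host=backup.example.com      ; server holding the pgBackRest repository
repo1-host-user=pgbackrest         ; SSH user on the repository host

[main]
pg1-path=/var/lib/postgresql/15/main   ; data directory to restore into
```

With passwordless SSH configured between the two hosts and Postgres stopped on the restore server, `pgbackrest --stanza=main restore` should then pull the backup over SSH from the repository host.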
Do you have any advice or anything else on this type of architecture?
The restore is only for development purposes, to check that everything is ok with the data, obviously having extra backups in case of disasters is always good.
I was looking for an effective method to do this, but at this point I don’t understand why pgbackrest is so recommended. I already have a “backup” system, as illustrated in the Timescale guide, that exports hypertables to CSV, but I wanted something lighter.
I tried copying the entire Timescale data folder, but this method only works when the Timescale server is off, which I would like to avoid.
Hi Giuseppe! I think for this case you’ll need to have at least a network shared between the instances.
Or bind your local network for both processes.
If you have a Docker Compose setup with a shared network, both instances can be alive and exchange information. Or you can simply bind ports like here.
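The shared-network idea can be sketched as a Compose file like the one below. This is a hypothetical minimal example, not the poster’s actual setup: service names, image tag, and the network name are made up, and backup/pgbackrest configuration is left out.

```yaml
# docker-compose.yml -- sketch: two Timescale instances on one shared
# network so they can reach each other by service name (primary, restore-target)
services:
  primary:
    image: timescale/timescaledb:latest-pg15
    environment:
      POSTGRES_PASSWORD: example
    networks: [tsnet]
  restore-target:
    image: timescale/timescaledb:latest-pg15
    environment:
      POSTGRES_PASSWORD: example
    networks: [tsnet]
networks:
  tsnet: {}
```

With both containers on `tsnet`, `primary` is reachable from `restore-target` as hostname `primary` (and vice versa), which is enough for SSH or Postgres connections between them.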