Hi All!
I had a quick question about automatic failover with distributed TimescaleDB using distributed hypertables.
I was wondering if this can be done with Patroni or a similar agent?
Basically the steps I found are to:
- remove the downed data node
- for each of the now under-replicated chunks, replicate those chunks on another data node (rough SQL sketch after this list)
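In SQL terms I imagine it as something like this (the node and chunk names are made up, just to illustrate the two steps):

```sql
-- 1. Drop the failed node from the cluster (force, since it is unreachable).
SELECT delete_data_node('dn_failed', force => true);

-- 2. For each chunk that is now under-replicated, copy it from a surviving
--    replica onto another data node.
CALL timescaledb_experimental.copy_chunk(
    '_timescaledb_internal._dist_hyper_1_1_chunk', 'dn_surviving', 'dn_new');
```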
From what I saw, Patroni works with standbys rather than removing the downed node and re-replicating the under-replicated chunks.
Thanks!
James
Welcome to the Timescale forum @jamessalzman
Although this is an oldish post, as you can see, for Timescale Cloud we use Patroni.
You might also enjoy this post about backup and recovery on Timescale Cloud.
Take a look at both of those, but if they don't answer your questions head back in and let us know… I'll try to get you the right guidance.
A couple of other references for you:
Hi @LorraineP !
Sorry, maybe I have some misunderstanding.
I am trying to achieve HA with the access node and data node architecture, splitting the data into partitions using a distributed hypertable.
From what you have sent, that is only for master vs. replica copies, right? (Standby nodes)
Thanks
My bad! I missed the distributed word! Bear with me, I'll try to get some better guidance on this.
Right.
I was just unsure if there is a suggested agent or way of implementing the automatic failover for this.
Thanks!
Bump. Does anyone have any tips?
Thanks!
Hi @jamessalzman
I think we're not quite getting what's missing in the docs. Are you looking for a "cookbook"-type example? Only I'm not sure that we have one. From the page Erik shared:
For production environments, we recommend setting up standbys for each node in a multi-node cluster.
and then this one
Using standby nodes relies on streaming replication and you set it up in a similar way to configuring single-node HA, although the configuration needs to be applied to each node independently.
This in turn will land you at this page: Timescale Documentation | High availability, and what we're suggesting is that you set up streaming replication for each node of the multi-node network.
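In practice that means something like the following on each primary/standby pair, one pair per node (a sketch only; the host, user, and slot names are placeholders, and the standby still needs a base backup and a standby.signal file):

```sql
-- On the primary of each node (the access node and every data node):
SELECT pg_create_physical_replication_slot('node1_standby');

-- On that node's standby (PostgreSQL 12+), point it at its primary:
ALTER SYSTEM SET primary_conninfo = 'host=node1-primary user=repl_user';
ALTER SYSTEM SET primary_slot_name = 'node1_standby';
```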
Regarding failover, at the bottom of that page:
PostgreSQL provides some failover functionality, where the replica is promoted to primary in the event of a failure. This is provided using the pg_ctl command or the trigger_file. However, PostgreSQL does not provide support for automatic failover. For more information, see the PostgreSQL failover documentation. If you require a configurable high availability solution with automatic failover functionality, check out Patroni.
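(As an aside, on PostgreSQL 12+ the promotion step itself can also be triggered from SQL; a tool like Patroni essentially automates deciding when to run it:)

```sql
-- Run on the standby you want to promote; waits up to 60 seconds by default.
SELECT pg_promote(wait => true, wait_seconds => 60);
```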
But I don't think we have worked examples beyond that, I'm afraid.
Hi @LorraineP,
What I was looking for is a suggested way to implement HA with chunk-level replication.
Similar to how high-availability configurations for single-node PostgreSQL use a system like Patroni for automatically handling fail-over, native replication requires an external entity to orchestrate fail-over, chunk re-replication, and data node management. This orchestration is not provided by default in TimescaleDB and therefore needs to be implemented separately. The sections below describe how to enable native replication and the steps involved to implement high availability in case of node failures.
What I am looking for is a Patroni equivalent for native replication.
Hi @jamessalzman. Native replication is still in development and there is no off-the-shelf solution for handling failure events currently. But it could be as simple as having a hook in, e.g., AWS or Kubernetes that calls delete_data_node() on the access node when the node fails, and then replaces that node with another one.
As mentioned above, however, the failure handling is still under development. For instance, the ability to re-replicate data after a node failure is still experimental and lacking some functionality.
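As a rough sketch of the pieces such a hook might use beyond delete_data_node() (the replacement node name and host are placeholders, and the replication status view lives in the experimental schema):

```sql
-- Register a replacement data node on the access node.
SELECT add_data_node('dn_replacement', host => 'dn-replacement.example.com');

-- Find the chunks that are now under-replicated and need copying.
SELECT chunk_schema, chunk_name, replica_nodes, non_replica_nodes
FROM timescaledb_experimental.chunk_replication_status
WHERE num_replicas < desired_num_replicas;
```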
If you want to experiment yourself with this functionality, we are more than happy to receive feedback and ideas.
Hi @Erik_Nordstrom, that was very helpful. I am in the process of making an agent "suite", if you will, that can do this.
It is working well, except that I have hit a small impediment with compressed distributed hypertables.
I am testing failover with multi-node and compressed hypertables. When I use copy_chunk against a new node being added to the cluster in order to replicate a chunk, I sometimes get this error:
CALL timescaledb_experimental.copy_chunk('_timescaledb_internal._dist_hyper_3_303_chunk','dn3','dn4') failed - **[dn4]: relation "compress_hyper_2_566_chunk" already exists**
What I am wondering is this: is it possible to get the mapping between regular chunks and compressed chunks?
Is there a table or function I can use to look up a chunk and determine its associated compressed chunk?
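From poking around the internal catalog (which I realize is presumably not a stable interface), something like this looks like it might expose the mapping via _timescaledb_catalog.chunk.compressed_chunk_id, but I am not sure it is the supported way:

```sql
-- Map a chunk to its compressed counterpart via the internal catalog.
SELECT c.schema_name  AS chunk_schema,
       c.table_name   AS chunk_name,
       cc.schema_name AS compressed_chunk_schema,
       cc.table_name  AS compressed_chunk_name
FROM _timescaledb_catalog.chunk c
JOIN _timescaledb_catalog.chunk cc ON cc.id = c.compressed_chunk_id
WHERE c.table_name = '_dist_hyper_3_303_chunk';
```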
Thanks again!
James