We are currently using TimescaleDB 2.7.2. Every 10 seconds, a Go program writes a few thousand records (~3,000 to 10,000) to a Timescale table.
Earlier we did some profiling and found that pq.Copy was the fastest option, so we went with it instead of plain INSERTs via pq.Exec("INSERT INTO ...").
But now we find that while a pq.Copy call is in progress, our reads take noticeably longer to complete. Does pq.Copy take any locks that could starve concurrent reads? Is pq.Copy still the recommended approach, or should we switch to plain INSERT statements? Are there any tricks we can use to improve our ingest performance? Issuing each INSERT as its own network round trip would hurt our throughput, which is why we chose the pq.Copy approach in the first place.
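For reference, one middle ground we have considered between per-row INSERTs and COPY is a single multi-row INSERT, which carries a whole batch in one round trip. A minimal sketch of building such a statement (the `metrics` table and column names are hypothetical, not our real schema):

```go
package main

import (
	"fmt"
	"strings"
)

// buildMultiRowInsert builds one INSERT statement covering n rows, so a
// single network round trip carries the whole batch instead of one row each.
// Placeholders are numbered $1..$(n*len(cols)) for use with database/sql.
func buildMultiRowInsert(table string, cols []string, n int) string {
	var b strings.Builder
	fmt.Fprintf(&b, "INSERT INTO %s (%s) VALUES ", table, strings.Join(cols, ", "))
	p := 1
	for row := 0; row < n; row++ {
		if row > 0 {
			b.WriteString(", ")
		}
		b.WriteByte('(')
		for c := range cols {
			if c > 0 {
				b.WriteString(", ")
			}
			fmt.Fprintf(&b, "$%d", p)
			p++
		}
		b.WriteByte(')')
	}
	return b.String()
}

func main() {
	// In the real ingest path this would be passed to db.Exec together with
	// the flattened batch values, e.g.:
	//   db.Exec(buildMultiRowInsert("metrics", cols, len(batch)), args...)
	fmt.Println(buildMultiRowInsert("metrics", []string{"ts", "value"}, 2))
}
```

One caveat: the PostgreSQL wire protocol caps bind parameters at 65,535 per statement, so at our batch sizes a wide row set might need to be split into chunks.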
Or is pq.Copy a valid approach, and should we instead profile more rigorously to find out why our reads are slowing down?