Hello everybody!
We are currently facing very slow planning times for queries on our continuous aggregates on Timescale Cloud. On the caggs we see planning times of 5 seconds and above, while on the underlying hypertables they stay below ~500 ms even under load. We run a few hundred queries in parallel against the caggs.
I will try to explain the steps we took so far and what the schemas currently look like:
Hardware specs: 1 x 4 CPU/16 GB RAM primary + 1 x 4 CPU/16 GB RAM replica, both running on Timescale Cloud
Timescale version: 2.8.0
Current chunk count:
- candles_binance_60 → 1.9k
- candles_binance_900 → 189
Over the last week we prepared our database with candle (OHLC) data from different crypto exchanges (Binance, Coinbase, etc.). The uncompressed dataset is around 300-400 GB.
We did a lot of initial investigation to choose the correct chunk size for each exchange (since they have different numbers of instruments). We currently plan for a server with 4 CPU cores/16 GB RAM (+1 replica). Looking up best practices for chunk sizing, we found the recommendation from the Timescale team that the recent chunks of all your hypertables should fit into roughly 25% of available memory (shared_buffers). The chunk sizes in each table reflect that (below 400 MB per chunk).
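We verified the chunk sizes with the size functions that ship with TimescaleDB (assuming chunks_detailed_size behaves the same on Timescale Cloud as it does self-hosted):
-- Largest chunks of the raw hypertable; total size = table + indexes + toast
SELECT chunk_name, pg_size_pretty(total_bytes) AS total_size
FROM chunks_detailed_size('candles_binance_60')
ORDER BY total_bytes DESC
LIMIT 10;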
We have the following table definitions:
- A timestamp function, since our time columns are Unix timestamps in milliseconds:
CREATE OR REPLACE FUNCTION unix_milliseconds() RETURNS BIGINT
LANGUAGE SQL
STABLE AS
$$
SELECT extract(epoch FROM now())::BIGINT * 1000
$$;
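A quick sanity check that the function returns what we expect (the to_timestamp round-trip is only for illustration):
SELECT unix_milliseconds() AS now_ms,
       to_timestamp(unix_milliseconds() / 1000.0) AS now_ts;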
- Example table definition of the hypertable for each exchange (all follow the same schema, just with a different table name):
CREATE TABLE candles_binance_60
(
    start      BIGINT          NOT NULL,
    instrument TEXT            NOT NULL,
    open       NUMERIC(30, 15) NOT NULL,
    high       NUMERIC(30, 15) NOT NULL,
    low        NUMERIC(30, 15) NOT NULL,
    close      NUMERIC(30, 15) NOT NULL,
    volume     NUMERIC(30, 15) NOT NULL
);
SELECT create_hypertable('candles_binance_60', 'start', chunk_time_interval => 43200000);
SELECT set_integer_now_func('candles_binance_60', 'unix_milliseconds');
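Note that chunk_time_interval is expressed in the unit of the time column, milliseconds in our case, so 43200000 means 12-hour chunks:
SELECT 43200000 / (1000 * 60 * 60) AS chunk_hours;  -- = 12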
- We have an index on the instrument column, since we query on it in addition to the time:
create unique index on candles_binance_60(instrument, start DESC);
- Then we have views for different intervals, built on the 60-second candles. An example view for the 900-second interval:
CREATE MATERIALIZED VIEW candles_binance_900
WITH (timescaledb.continuous) AS
SELECT
    time_bucket(900000, start) AS bucket_start,
    instrument,
    FIRST(open, start)         AS "open",
    MAX(high)                  AS high,
    MIN(low)                   AS low,
    LAST(close, start)         AS "close",
    SUM(volume)                AS volume
FROM candles_binance_60
GROUP BY instrument, bucket_start
WITH NO DATA;
SELECT set_integer_now_func(
    (
        SELECT concat(materialization_hypertable_schema, '.', materialization_hypertable_name)
        FROM timescaledb_information.continuous_aggregates
        WHERE hypertable_name = 'candles_binance_60' AND view_name = 'candles_binance_900'
    ),
    'unix_milliseconds');
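To double-check that the integer-now function is registered on both the raw hypertable and the materialization hypertable, we looked at the dimension catalog (a sketch, assuming the _timescaledb_catalog layout of 2.x):
SELECT h.table_name, d.column_name, d.integer_now_func
FROM _timescaledb_catalog.dimension d
JOIN _timescaledb_catalog.hypertable h ON h.id = d.hypertable_id
WHERE d.integer_now_func IS NOT NULL;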
During the initial import we ran the following steps:
- Create the tables
- Create the views with NO DATA
- Import the CSV data via timescaledb-parallel-copy
- Create the index on the hypertable
- Compress the hypertable (chunks older than 2 months):
ALTER TABLE candles_binance_60
    SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'instrument'
    );
SELECT add_compression_policy('candles_binance_60', 5184000000);
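5184000000 ms corresponds to 60 days. Once the policy starts compressing, the effect can be checked with the built-in stats function (assuming chunk_compression_stats is available in 2.8):
SELECT 5184000000 / (1000::bigint * 60 * 60 * 24) AS compress_after_days;  -- = 60

SELECT chunk_name, compression_status,
       pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM chunk_compression_stats('candles_binance_60')
LIMIT 5;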
- Refresh all the views for each exchange from the first to the last entry (example with the 900-second interval below):
CALL refresh_continuous_aggregate('candles_binance_900', NULL, NULL);
- Apply continuous aggregate policies to the views:
SELECT add_continuous_aggregate_policy('candles_binance_900',
start_offset => 2700000,
end_offset => 900000,
schedule_interval => interval '15 minutes'
);
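The offsets are milliseconds as well, so every 15 minutes this refreshes the window between 45 and 15 minutes ago. The resulting background job can be inspected via:
SELECT job_id, application_name, schedule_interval, config
FROM timescaledb_information.jobs
WHERE application_name LIKE '%Refresh Continuous Aggregate%';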
- Compress the views (also after 2 months):
ALTER MATERIALIZED VIEW candles_binance_900 SET (
    timescaledb.compress = true
);
SELECT add_compression_policy('candles_binance_900', 5184000000);
This is the current setup we base our queries on.
We have spiky hourly workloads: at exactly the same time each hour, different instances (K8s nodes) fire a few hundred parallel queries against the various views and the original hypertable (60-second interval).
Below are some of the queries on the views that take far longer than they should.
Query on the cagg candles_binance_900, very slow planning time (13 seconds) but fast execution time:
query:
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT *
FROM candles_binance_900
WHERE instrument = 'FILUSDT' and bucket_start between 1659783328000 and 1661580254000
ORDER BY bucket_start
LIMIT 30000
explain:
Limit (cost=1163.26..1242.98 rows=2007 width=92) (actual time=10.353..10.789 rows=1997 loops=1)
Output: _materialized_hypertable_29.bucket_start, _materialized_hypertable_29.instrument, _materialized_hypertable_29.open, _materialized_hypertable_29.high, _materialized_hypertable_29.low, _materialized_hypertable_29.close, _materialized_hypertable_29.volume, _materialized_hypertable_29.trades, _materialized_hypertable_29.buy_volume, _materialized_hypertable_29.sell_volume, _materialized_hypertable_29.buys, _materialized_hypertable_29.sells, _materialized_hypertable_29.vwap, _materialized_hypertable_29.missed
Buffers: shared hit=52 read=14
I/O Timings: read=8.976
-> Merge Append (cost=1163.26..1242.98 rows=2007 width=92) (actual time=10.352..10.662 rows=1997 loops=1)
Sort Key: _materialized_hypertable_29.bucket_start
Buffers: shared hit=52 read=14
I/O Timings: read=8.976
-> Sort (cost=1153.25..1158.21 rows=1985 width=92) (actual time=10.350..10.509 rows=1997 loops=1)
Output: _materialized_hypertable_29.bucket_start, _materialized_hypertable_29.instrument, _materialized_hypertable_29.open, _materialized_hypertable_29.high, _materialized_hypertable_29.low, _materialized_hypertable_29.close, _materialized_hypertable_29.volume, _materialized_hypertable_29.trades, _materialized_hypertable_29.buy_volume, _materialized_hypertable_29.sell_volume, _materialized_hypertable_29.buys, _materialized_hypertable_29.sells, _materialized_hypertable_29.vwap, _materialized_hypertable_29.missed
Sort Key: _materialized_hypertable_29.bucket_start
Sort Method: quicksort Memory: 500kB
Buffers: shared hit=52 read=14
I/O Timings: read=8.976
-> Custom Scan (ChunkAppend) on _timescaledb_internal._materialized_hypertable_29 (cost=0.43..1024.67 rows=1985 width=92) (actual time=0.875..9.905 rows=1997 loops=1)
Output: _materialized_hypertable_29.bucket_start, _materialized_hypertable_29.instrument, _materialized_hypertable_29.open, _materialized_hypertable_29.high, _materialized_hypertable_29.low, _materialized_hypertable_29.close, _materialized_hypertable_29.volume, _materialized_hypertable_29.trades, _materialized_hypertable_29.buy_volume, _materialized_hypertable_29.sell_volume, _materialized_hypertable_29.buys, _materialized_hypertable_29.sells, _materialized_hypertable_29.vwap, _materialized_hypertable_29.missed
Startup Exclusion: true
Runtime Exclusion: false
Chunks excluded during startup: 0
Buffers: shared hit=52 read=14
I/O Timings: read=8.976
-> Index Scan using _hyper_29_19961_chunk__materialized_hypertable_29_instrument_bu on _timescaledb_internal._hyper_29_19961_chunk (cost=0.43..471.21 rows=913 width=92) (actual time=0.874..2.664 rows=916 loops=1)
Output: _hyper_29_19961_chunk.bucket_start, _hyper_29_19961_chunk.instrument, _hyper_29_19961_chunk.open, _hyper_29_19961_chunk.high, _hyper_29_19961_chunk.low, _hyper_29_19961_chunk.close, _hyper_29_19961_chunk.volume, _hyper_29_19961_chunk.trades, _hyper_29_19961_chunk.buy_volume, _hyper_29_19961_chunk.sell_volume, _hyper_29_19961_chunk.buys, _hyper_29_19961_chunk.sells, _hyper_29_19961_chunk.vwap, _hyper_29_19961_chunk.missed
Index Cond: ((_hyper_29_19961_chunk.instrument = 'FILUSDT'::text) AND (_hyper_29_19961_chunk.bucket_start < COALESCE(_timescaledb_internal.cagg_watermark(29), '-9223372036854775808'::bigint)) AND (_hyper_29_19961_chunk.bucket_start >= '1659783328000'::bigint) AND (_hyper_29_19961_chunk.bucket_start <= '1661580254000'::bigint))
Buffers: shared hit=23 read=7
I/O Timings: read=2.398
-> Index Scan using _hyper_29_19962_chunk__materialized_hypertable_29_instrument_bu on _timescaledb_internal._hyper_29_19962_chunk (cost=0.43..489.49 rows=951 width=92) (actual time=0.052..0.762 rows=960 loops=1)
Output: _hyper_29_19962_chunk.bucket_start, _hyper_29_19962_chunk.instrument, _hyper_29_19962_chunk.open, _hyper_29_19962_chunk.high, _hyper_29_19962_chunk.low, _hyper_29_19962_chunk.close, _hyper_29_19962_chunk.volume, _hyper_29_19962_chunk.trades, _hyper_29_19962_chunk.buy_volume, _hyper_29_19962_chunk.sell_volume, _hyper_29_19962_chunk.buys, _hyper_29_19962_chunk.sells, _hyper_29_19962_chunk.vwap, _hyper_29_19962_chunk.missed
Index Cond: ((_hyper_29_19962_chunk.instrument = 'FILUSDT'::text) AND (_hyper_29_19962_chunk.bucket_start < COALESCE(_timescaledb_internal.cagg_watermark(29), '-9223372036854775808'::bigint)) AND (_hyper_29_19962_chunk.bucket_start >= '1659783328000'::bigint) AND (_hyper_29_19962_chunk.bucket_start <= '1661580254000'::bigint))
Buffers: shared hit=28 read=1
I/O Timings: read=0.304
-> Index Scan using _hyper_29_19963_chunk__materialized_hypertable_29_instrument_bu on _timescaledb_internal._hyper_29_19963_chunk (cost=0.43..63.97 rows=121 width=90) (actual time=2.347..6.344 rows=121 loops=1)
Output: _hyper_29_19963_chunk.bucket_start, _hyper_29_19963_chunk.instrument, _hyper_29_19963_chunk.open, _hyper_29_19963_chunk.high, _hyper_29_19963_chunk.low, _hyper_29_19963_chunk.close, _hyper_29_19963_chunk.volume, _hyper_29_19963_chunk.trades, _hyper_29_19963_chunk.buy_volume, _hyper_29_19963_chunk.sell_volume, _hyper_29_19963_chunk.buys, _hyper_29_19963_chunk.sells, _hyper_29_19963_chunk.vwap, _hyper_29_19963_chunk.missed
Index Cond: ((_hyper_29_19963_chunk.instrument = 'FILUSDT'::text) AND (_hyper_29_19963_chunk.bucket_start < COALESCE(_timescaledb_internal.cagg_watermark(29), '-9223372036854775808'::bigint)) AND (_hyper_29_19963_chunk.bucket_start >= '1659783328000'::bigint) AND (_hyper_29_19963_chunk.bucket_start <= '1661580254000'::bigint))
Buffers: shared hit=1 read=6
I/O Timings: read=6.275
-> GroupAggregate (cost=10.01..64.47 rows=22 width=298) (actual time=0.001..0.001 rows=0 loops=1)
Output: (time_bucket('900000'::bigint, candles_binance_60.start)), candles_binance_60.instrument, first(candles_binance_60.open, candles_binance_60.start), max(candles_binance_60.high), min(candles_binance_60.low), last(candles_binance_60.close, candles_binance_60.start), sum(candles_binance_60.volume), sum(candles_binance_60.trades), sum(candles_binance_60.buy_volume), sum(candles_binance_60.sell_volume), sum(candles_binance_60.buys), sum(candles_binance_60.sells), CASE WHEN (sum(candles_binance_60.volume) > '0'::numeric) THEN (sum((candles_binance_60.vwap * candles_binance_60.volume)) / sum(candles_binance_60.volume)) ELSE '0'::numeric END, max(candles_binance_60.missed)
Group Key: candles_binance_60.instrument, (time_bucket('900000'::bigint, candles_binance_60.start))
-> Custom Scan (ConstraintAwareAppend) (cost=10.01..62.93 rows=22 width=76) (actual time=0.000..0.001 rows=0 loops=1)
Output: (time_bucket('900000'::bigint, candles_binance_60.start)), candles_binance_60.instrument, candles_binance_60.open, candles_binance_60.start, candles_binance_60.high, candles_binance_60.low, candles_binance_60.close, candles_binance_60.volume, candles_binance_60.trades, candles_binance_60.buy_volume, candles_binance_60.sell_volume, candles_binance_60.buys, candles_binance_60.sells, candles_binance_60.vwap, candles_binance_60.missed
Hypertable: candles_binance_60
Chunks excluded during startup: 22
Query Identifier: 2417736485277390874
Planning:
Buffers: shared hit=84615
Planning Time: 13711.166 ms <-- Extreme planning time
Execution Time: 11.281 ms
Query on the cagg candles_binance_300, same issue here:
query:
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT *
FROM candles_binance_300
WHERE instrument = 'FTTBNB' and bucket_start between 1659634675000 and 1661485921000
ORDER BY bucket_start
LIMIT 30000
explain:
Limit (cost=3805.25..3939.52 rows=6142 width=88) (actual time=146.554..147.687 rows=6171 loops=1)
Output: _materialized_hypertable_28.bucket_start, _materialized_hypertable_28.instrument, _materialized_hypertable_28.open, _materialized_hypertable_28.high, _materialized_hypertable_28.low, _materialized_hypertable_28.close, _materialized_hypertable_28.volume, _materialized_hypertable_28.trades, _materialized_hypertable_28.buy_volume, _materialized_hypertable_28.sell_volume, _materialized_hypertable_28.buys, _materialized_hypertable_28.sells, _materialized_hypertable_28.vwap, _materialized_hypertable_28.missed
Buffers: shared hit=16 read=136
I/O Timings: read=142.935
-> Merge Append (cost=3805.25..3939.52 rows=6142 width=88) (actual time=146.553..147.305 rows=6171 loops=1)
Sort Key: _materialized_hypertable_28.bucket_start
Buffers: shared hit=16 read=136
I/O Timings: read=142.935
-> Sort (cost=3794.77..3810.07 rows=6119 width=88) (actual time=146.551..146.912 rows=6171 loops=1)
Output: _materialized_hypertable_28.bucket_start, _materialized_hypertable_28.instrument, _materialized_hypertable_28.open, _materialized_hypertable_28.high, _materialized_hypertable_28.low, _materialized_hypertable_28.close, _materialized_hypertable_28.volume, _materialized_hypertable_28.trades, _materialized_hypertable_28.buy_volume, _materialized_hypertable_28.sell_volume, _materialized_hypertable_28.buys, _materialized_hypertable_28.sells, _materialized_hypertable_28.vwap, _materialized_hypertable_28.missed
Sort Key: _materialized_hypertable_28.bucket_start
Sort Method: quicksort Memory: 1101kB
Buffers: shared hit=16 read=136
I/O Timings: read=142.935
-> Custom Scan (ChunkAppend) on _timescaledb_internal._materialized_hypertable_28 (cost=0.43..3348.73 rows=6119 width=88) (actual time=1.170..145.389 rows=6171 loops=1)
Output: _materialized_hypertable_28.bucket_start, _materialized_hypertable_28.instrument, _materialized_hypertable_28.open, _materialized_hypertable_28.high, _materialized_hypertable_28.low, _materialized_hypertable_28.close, _materialized_hypertable_28.volume, _materialized_hypertable_28.trades, _materialized_hypertable_28.buy_volume, _materialized_hypertable_28.sell_volume, _materialized_hypertable_28.buys, _materialized_hypertable_28.sells, _materialized_hypertable_28.vwap, _materialized_hypertable_28.missed
Startup Exclusion: true
Runtime Exclusion: false
Chunks excluded during startup: 0
Buffers: shared hit=16 read=136
I/O Timings: read=142.935
-> Index Scan using _hyper_28_4352_chunk__materialized_hypertable_28_instrument_buc on _timescaledb_internal._hyper_28_4352_chunk (cost=0.43..395.60 rows=360 width=90) (actual time=1.169..6.455 rows=364 loops=1)
Output: _hyper_28_4352_chunk.bucket_start, _hyper_28_4352_chunk.instrument, _hyper_28_4352_chunk.open, _hyper_28_4352_chunk.high, _hyper_28_4352_chunk.low, _hyper_28_4352_chunk.close, _hyper_28_4352_chunk.volume, _hyper_28_4352_chunk.trades, _hyper_28_4352_chunk.buy_volume, _hyper_28_4352_chunk.sell_volume, _hyper_28_4352_chunk.buys, _hyper_28_4352_chunk.sells, _hyper_28_4352_chunk.vwap, _hyper_28_4352_chunk.missed
Index Cond: ((_hyper_28_4352_chunk.instrument = 'FTTBNB'::text) AND (_hyper_28_4352_chunk.bucket_start < COALESCE(_timescaledb_internal.cagg_watermark(28), '-9223372036854775808'::bigint)) AND (_hyper_28_4352_chunk.bucket_start >= '1659634675000'::bigint) AND (_hyper_28_4352_chunk.bucket_start <= '1661485921000'::bigint))
Buffers: shared hit=2 read=8
I/O Timings: read=6.269
-> Index Scan using _hyper_28_19958_chunk__materialized_hypertable_28_instrument_bu on _timescaledb_internal._hyper_28_19958_chunk (cost=0.43..1468.93 rows=2866 width=88) (actual time=16.071..97.686 rows=2880 loops=1)
Output: _hyper_28_19958_chunk.bucket_start, _hyper_28_19958_chunk.instrument, _hyper_28_19958_chunk.open, _hyper_28_19958_chunk.high, _hyper_28_19958_chunk.low, _hyper_28_19958_chunk.close, _hyper_28_19958_chunk.volume, _hyper_28_19958_chunk.trades, _hyper_28_19958_chunk.buy_volume, _hyper_28_19958_chunk.sell_volume, _hyper_28_19958_chunk.buys, _hyper_28_19958_chunk.sells, _hyper_28_19958_chunk.vwap, _hyper_28_19958_chunk.missed
Index Cond: ((_hyper_28_19958_chunk.instrument = 'FTTBNB'::text) AND (_hyper_28_19958_chunk.bucket_start < COALESCE(_timescaledb_internal.cagg_watermark(28), '-9223372036854775808'::bigint)) AND (_hyper_28_19958_chunk.bucket_start >= '1659634675000'::bigint) AND (_hyper_28_19958_chunk.bucket_start <= '1661485921000'::bigint))
Buffers: shared hit=7 read=61
I/O Timings: read=96.778
-> Index Scan using _hyper_28_19959_chunk__materialized_hypertable_28_instrument_bu on _timescaledb_internal._hyper_28_19959_chunk (cost=0.43..1458.25 rows=2846 width=87) (actual time=0.769..24.885 rows=2880 loops=1)
Output: _hyper_28_19959_chunk.bucket_start, _hyper_28_19959_chunk.instrument, _hyper_28_19959_chunk.open, _hyper_28_19959_chunk.high, _hyper_28_19959_chunk.low, _hyper_28_19959_chunk.close, _hyper_28_19959_chunk.volume, _hyper_28_19959_chunk.trades, _hyper_28_19959_chunk.buy_volume, _hyper_28_19959_chunk.sell_volume, _hyper_28_19959_chunk.buys, _hyper_28_19959_chunk.sells, _hyper_28_19959_chunk.vwap, _hyper_28_19959_chunk.missed
Index Cond: ((_hyper_28_19959_chunk.instrument = 'FTTBNB'::text) AND (_hyper_28_19959_chunk.bucket_start < COALESCE(_timescaledb_internal.cagg_watermark(28), '-9223372036854775808'::bigint)) AND (_hyper_28_19959_chunk.bucket_start >= '1659634675000'::bigint) AND (_hyper_28_19959_chunk.bucket_start <= '1661485921000'::bigint))
Buffers: shared hit=6 read=62
I/O Timings: read=24.087
-> Index Scan using _hyper_28_19960_chunk__materialized_hypertable_28_instrument_bu on _timescaledb_internal._hyper_28_19960_chunk (cost=0.43..25.95 rows=47 width=86) (actual time=14.770..15.874 rows=47 loops=1)
Output: _hyper_28_19960_chunk.bucket_start, _hyper_28_19960_chunk.instrument, _hyper_28_19960_chunk.open, _hyper_28_19960_chunk.high, _hyper_28_19960_chunk.low, _hyper_28_19960_chunk.close, _hyper_28_19960_chunk.volume, _hyper_28_19960_chunk.trades, _hyper_28_19960_chunk.buy_volume, _hyper_28_19960_chunk.sell_volume, _hyper_28_19960_chunk.buys, _hyper_28_19960_chunk.sells, _hyper_28_19960_chunk.vwap, _hyper_28_19960_chunk.missed
Index Cond: ((_hyper_28_19960_chunk.instrument = 'FTTBNB'::text) AND (_hyper_28_19960_chunk.bucket_start < COALESCE(_timescaledb_internal.cagg_watermark(28), '-9223372036854775808'::bigint)) AND (_hyper_28_19960_chunk.bucket_start >= '1659634675000'::bigint) AND (_hyper_28_19960_chunk.bucket_start <= '1661485921000'::bigint))
Buffers: shared hit=1 read=5
I/O Timings: read=15.802
-> GroupAggregate (cost=10.47..67.79 rows=23 width=298) (actual time=0.001..0.001 rows=0 loops=1)
Output: (time_bucket('300000'::bigint, candles_binance_60.start)), candles_binance_60.instrument, first(candles_binance_60.open, candles_binance_60.start), max(candles_binance_60.high), min(candles_binance_60.low), last(candles_binance_60.close, candles_binance_60.start), sum(candles_binance_60.volume), sum(candles_binance_60.trades), sum(candles_binance_60.buy_volume), sum(candles_binance_60.sell_volume), sum(candles_binance_60.buys), sum(candles_binance_60.sells), CASE WHEN (sum(candles_binance_60.volume) > '0'::numeric) THEN (sum((candles_binance_60.vwap * candles_binance_60.volume)) / sum(candles_binance_60.volume)) ELSE '0'::numeric END, max(candles_binance_60.missed)
Group Key: candles_binance_60.instrument, (time_bucket('300000'::bigint, candles_binance_60.start))
-> Custom Scan (ConstraintAwareAppend) (cost=10.47..66.18 rows=23 width=76) (actual time=0.000..0.000 rows=0 loops=1)
Output: (time_bucket('300000'::bigint, candles_binance_60.start)), candles_binance_60.instrument, candles_binance_60.open, candles_binance_60.start, candles_binance_60.high, candles_binance_60.low, candles_binance_60.close, candles_binance_60.volume, candles_binance_60.trades, candles_binance_60.buy_volume, candles_binance_60.sell_volume, candles_binance_60.buys, candles_binance_60.sells, candles_binance_60.vwap, candles_binance_60.missed
Hypertable: candles_binance_60
Chunks excluded during startup: 23
Query Identifier: 3475993531847608069
Planning:
Buffers: shared hit=85572
Planning Time: 13213.218 ms <-- Also here, extreme planning times
Execution Time: 148.355 ms
Query on the underlying hypertable candles_binance_60, which has good planning times, even under load:
query:
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT *
FROM candles_binance_60
WHERE instrument = 'BURGERBUSD' and start between 1660549322000 and 1661450125000
ORDER BY start
LIMIT 30000
explain:
Limit (cost=0.43..7793.07 rows=15383 width=88) (actual time=9.958..154.989 rows=15013 loops=1)
Output: candles_binance_60.start, candles_binance_60.instrument, candles_binance_60.open, candles_binance_60.high, candles_binance_60.low, candles_binance_60.close, candles_binance_60.volume, candles_binance_60.trades, candles_binance_60.origin, candles_binance_60.buy_volume, candles_binance_60.sell_volume, candles_binance_60.buys, candles_binance_60.sells, candles_binance_60.high_time, candles_binance_60.low_time, candles_binance_60.vwap, candles_binance_60.status, candles_binance_60.missed
Buffers: shared hit=175 read=368
I/O Timings: read=147.297
-> Custom Scan (ChunkAppend) on public.candles_binance_60 (cost=0.43..7793.07 rows=15383 width=88) (actual time=9.957..153.906 rows=15013 loops=1)
Output: candles_binance_60.start, candles_binance_60.instrument, candles_binance_60.open, candles_binance_60.high, candles_binance_60.low, candles_binance_60.close, candles_binance_60.volume, candles_binance_60.trades, candles_binance_60.origin, candles_binance_60.buy_volume, candles_binance_60.sell_volume, candles_binance_60.buys, candles_binance_60.sells, candles_binance_60.high_time, candles_binance_60.low_time, candles_binance_60.vwap, candles_binance_60.status, candles_binance_60.missed
Order: candles_binance_60.start
Startup Exclusion: false
Runtime Exclusion: false
Buffers: shared hit=175 read=368
I/O Timings: read=147.297
-> Index Scan Backward using _hyper_1_19930_chunk_candles_binance_60_instrument_start_idx on _timescaledb_internal._hyper_1_19930_chunk (cost=0.43..512.13 rows=1004 width=88) (actual time=9.956..19.364 rows=977 loops=1)
Output: _hyper_1_19930_chunk.start, _hyper_1_19930_chunk.instrument, _hyper_1_19930_chunk.open, _hyper_1_19930_chunk.high, _hyper_1_19930_chunk.low, _hyper_1_19930_chunk.close, _hyper_1_19930_chunk.volume, _hyper_1_19930_chunk.trades, _hyper_1_19930_chunk.origin, _hyper_1_19930_chunk.buy_volume, _hyper_1_19930_chunk.sell_volume, _hyper_1_19930_chunk.buys, _hyper_1_19930_chunk.sells, _hyper_1_19930_chunk.high_time, _hyper_1_19930_chunk.low_time, _hyper_1_19930_chunk.vwap, _hyper_1_19930_chunk.status, _hyper_1_19930_chunk.missed
Index Cond: ((_hyper_1_19930_chunk.instrument = 'BURGERBUSD'::text) AND (_hyper_1_19930_chunk.start >= '1660549322000'::bigint) AND (_hyper_1_19930_chunk.start <= '1661450125000'::bigint))
Buffers: shared hit=11 read=22
I/O Timings: read=18.756
-> Index Scan Backward using _hyper_1_19931_chunk_candles_binance_60_instrument_start_idx on _timescaledb_internal._hyper_1_19931_chunk (cost=0.43..782.53 rows=1548 width=88) (actual time=1.564..71.195 rows=1440 loops=1)
Output: _hyper_1_19931_chunk.start, _hyper_1_19931_chunk.instrument, _hyper_1_19931_chunk.open, _hyper_1_19931_chunk.high, _hyper_1_19931_chunk.low, _hyper_1_19931_chunk.close, _hyper_1_19931_chunk.volume, _hyper_1_19931_chunk.trades, _hyper_1_19931_chunk.origin, _hyper_1_19931_chunk.buy_volume, _hyper_1_19931_chunk.sell_volume, _hyper_1_19931_chunk.buys, _hyper_1_19931_chunk.sells, _hyper_1_19931_chunk.high_time, _hyper_1_19931_chunk.low_time, _hyper_1_19931_chunk.vwap, _hyper_1_19931_chunk.status, _hyper_1_19931_chunk.missed
Index Cond: ((_hyper_1_19931_chunk.instrument = 'BURGERBUSD'::text) AND (_hyper_1_19931_chunk.start >= '1660549322000'::bigint) AND (_hyper_1_19931_chunk.start <= '1661450125000'::bigint))
Buffers: shared hit=16 read=37
I/O Timings: read=70.394
... More index scans (same as above)
Query Identifier: -2532260734889229550
Planning:
Buffers: shared hit=3240
Planning Time: 202.067 ms <-- Expected planning time
Execution Time: 155.841 ms
What I noticed across the cagg queries is that Chunks excluded during startup: 0 always shows zero, but most of the chunks should be excludable. Is this correct? It looks like TimescaleDB spends most of the planning time going through all the chunks even though they could be excluded.
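One experiment we have not tried yet: forcing the view to serve only materialized data, to see whether the real-time aggregation branch (the GroupAggregate over candles_binance_60 in the plans above) is what drives the planning cost. A sketch, assuming the standard materialized_only option applies here:
ALTER MATERIALIZED VIEW candles_binance_900 SET (
    timescaledb.materialized_only = true
);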
I also found some older threads here that recommend changing your time_now_func to IMMUTABLE instead of STABLE, but so far I have not noticed any change in performance.
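For completeness, this is the IMMUTABLE variant we tested (declaring a now()-based function IMMUTABLE is strictly speaking a lie to the planner, which is exactly why those threads suggest it for plan-time chunk exclusion):
CREATE OR REPLACE FUNCTION unix_milliseconds() RETURNS BIGINT
LANGUAGE SQL
IMMUTABLE AS
$$
SELECT extract(epoch FROM now())::BIGINT * 1000
$$;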
I hope somebody can shed some light on the issues we are facing with our Timescale Cloud setup.
Please let me know if you need additional info!
Thanks