I am experiencing very slow query response times in TimescaleDB. The query is shown below:
SELECT
tagpath AS metric,
floatvalue AS value,
to_timestamp(t_stamp / 1000) AS time
FROM
sqlth_1_data a
JOIN
sqlth_te b ON a.tagid = b.id
WHERE
(t_stamp BETWEEN
(EXTRACT(EPOCH FROM TO_TIMESTAMP('2024-04-23T09:10:47.689Z', 'YYYY-MM-DD"T"HH24:MI:SS"Z"')) * 1000)::bigint
AND
(EXTRACT(EPOCH FROM TO_TIMESTAMP('2024-04-24T09:10:47.689Z', 'YYYY-MM-DD"T"HH24:MI:SS"Z"')) * 1000)::bigint)
AND tagpath IN ('1 sec/tsdb_bcs_01_100_new/pv','1 sec/tsdb_bcs_01_101_new/pv')
ORDER BY
t_stamp ASC;
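For context, the simplest equivalent form I could write expresses the bounds as plain `timestamptz` casts instead of `TO_TIMESTAMP()` with a format string (the trailing `Z` is interpreted as UTC by the cast; this is just a restatement of the same query):

```sql
SELECT
    b.tagpath AS metric,
    a.floatvalue AS value,
    to_timestamp(a.t_stamp / 1000) AS time
FROM sqlth_1_data a
JOIN sqlth_te b ON a.tagid = b.id
WHERE a.t_stamp BETWEEN
        -- ::timestamptz parses the ISO-8601 string, including the .689 ms part
        (EXTRACT(EPOCH FROM '2024-04-23T09:10:47.689Z'::timestamptz) * 1000)::bigint
    AND (EXTRACT(EPOCH FROM '2024-04-24T09:10:47.689Z'::timestamptz) * 1000)::bigint
  AND b.tagpath IN ('1 sec/tsdb_bcs_01_100_new/pv', '1 sec/tsdb_bcs_01_101_new/pv')
ORDER BY a.t_stamp ASC;
```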
The chunk interval is 1 day. The first run in DBeaver returns in about 1 min 20 s, though subsequent attempts are faster.
Here is the output when I analyzed the query with EXPLAIN ANALYZE:
|QUERY PLAN |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|Sort (cost=76710.30..76832.01 rows=48683 width=54) (actual time=933.078..1001.464 rows=341450 loops=1) |
| Sort Key: a.t_stamp |
| Sort Method: external merge Disk: 22080kB |
| -> Nested Loop (cost=0.60..72920.05 rows=48683 width=54) (actual time=0.119..751.905 rows=341450 loops=1) |
| -> Index Scan using sqlth_tetagpathndx on sqlth_te b (cost=0.28..4.88 rows=2 width=34) (actual time=0.044..0.066 rows=2 loops=1) |
| Index Cond: ((tagpath)::text = ANY ('{"1 sec/tsdb_bcs_01_100_new/pv","1 sec/tsdb_bcs_01_101_new/pv"}'::text[])) |
| -> Custom Scan (ChunkAppend) on sqlth_1_data a (cost=0.32..35959.41 rows=31562 width=20) (actual time=0.037..343.731 rows=170725 loops=2) |
| Chunks excluded during startup: 791 |
| -> Index Scan using "792_792_sqlth_1_data_pkey" on _hyper_1_792_chunk a_1 (cost=0.59..11364.33 rows=10280 width=20) (actual time=0.036..116.669 rows=59144 loops=2) |
| Index Cond: ((tagid = b.id) AND (t_stamp >= ((EXTRACT(epoch FROM to_timestamp('2024-04-22T09:22:13.605Z'::text, 'YYYY-MM-DD"T"HH24:MI:SS"Z"'::text)) * '1000'::numeric))::bigint) AND (t_stamp <= ((EXTRACT(epoch FROM to_timestamp('2024-04-24T09:22:13.605Z'::text, 'YYYY-MM-DD"T"HH24:MI:SS"Z"'::text)) * '1000'::numeric))::bigint))|
| -> Index Scan using "793_793_sqlth_1_data_pkey" on _hyper_1_793_chunk a_2 (cost=0.59..16620.33 rows=15173 width=20) (actual time=0.046..163.522 rows=85371 loops=2) |
| Index Cond: ((tagid = b.id) AND (t_stamp >= ((EXTRACT(epoch FROM to_timestamp('2024-04-22T09:22:13.605Z'::text, 'YYYY-MM-DD"T"HH24:MI:SS"Z"'::text)) * '1000'::numeric))::bigint) AND (t_stamp <= ((EXTRACT(epoch FROM to_timestamp('2024-04-24T09:22:13.605Z'::text, 'YYYY-MM-DD"T"HH24:MI:SS"Z"'::text)) * '1000'::numeric))::bigint))|
| -> Index Scan using "794_794_sqlth_1_data_pkey" on _hyper_1_794_chunk a_3 (cost=0.58..5365.22 rows=4870 width=20) (actual time=0.058..46.746 rows=26210 loops=2) |
| Index Cond: ((tagid = b.id) AND (t_stamp >= ((EXTRACT(epoch FROM to_timestamp('2024-04-22T09:22:13.605Z'::text, 'YYYY-MM-DD"T"HH24:MI:SS"Z"'::text)) * '1000'::numeric))::bigint) AND (t_stamp <= ((EXTRACT(epoch FROM to_timestamp('2024-04-24T09:22:13.605Z'::text, 'YYYY-MM-DD"T"HH24:MI:SS"Z"'::text)) * '1000'::numeric))::bigint))|
|Planning Time: 462.464 ms |
|Execution Time: 1082.559 ms |
What am I doing wrong? During execution I still had plenty of free RAM, so I doubt there is a physical resource bottleneck.
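One detail I noticed in the plan is the sort spilling to disk (`Sort Method: external merge  Disk: 22080kB`). As an experiment, the sort could be kept in memory by raising `work_mem` for the session before re-running the query (the 64MB value below is an arbitrary guess, not a recommendation):

```sql
-- Session-level only; does not change the server configuration.
SET work_mem = '64MB';
-- ...then re-run the query / EXPLAIN ANALYZE to compare.
```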