Hey,
We’ve started a process that runs queries across the entire hypertable, stepping through it e.g. 10 days of data at a time (roughly as sketched below).
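For context, the stepping pattern looks roughly like this simplified sketch; the real loop lives in our application code, and the start/end dates here are just placeholders:
DO $$
DECLARE
  -- placeholder window bounds; the real process covers the whole table
  window_start timestamptz := '2025-01-01T00:00:00Z';
  window_end   timestamptz;
BEGIN
  WHILE window_start < timestamptz '2025-06-01T00:00:00Z' LOOP
    window_end := window_start + INTERVAL '10 days';
    -- run the query for the current 10-day window, then advance
    PERFORM "sensorId", "timestamp", "value"
      FROM "SensorDataPoint"
     WHERE "aggregationMethodId" = 0
       AND "sensorId" IN (13876, 13877, 19307, 10808, 10810, 10812)
       AND "timestamp" >= window_start
       AND "timestamp" <  window_end;
    window_start := window_end;
  END LOOP;
END $$;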
We have run this process before, and it was fast enough.
Now, for some reason, a query is horribly slow the first time it runs, but if we re-run the same query later it is lightning fast.
What process(es) can I run to “prepare the whole table” for this process?
For example, here is one such query:
EXPLAIN ANALYZE
SELECT
"sensorId",
"timestamp",
"value"
FROM
"SensorDataPoint"
WHERE
"aggregationMethodId" = 0
AND "sensorId" IN (13876, 13877, 19307, 10808, 10810, 10812)
AND "timestamp" >= '2025-02-01T11:24:00.000Z'
AND "timestamp" < '2025-02-10T11:24:00.000Z'
And the results are:
"QUERY PLAN"
"Append (cost=0.56..61693.51 rows=69829 width=20) (actual time=67.876..115385.668 rows=38880 loops=1)"
" -> Index Scan using ""_hyper_1_2146_chunk_SensorDataPoint_aggregationMethodId_sensorI"" on _hyper_1_2146_chunk (cost=0.56..31196.45 rows=35503 width=20) (actual time=67.875..57935.914 rows=19548 loops=1)"
" Index Cond: ((""aggregationMethodId"" = 0) AND (""sensorId"" = ANY ('{13876,13877,19307,10808,10810,10812}'::integer[])) AND (""timestamp"" >= '2025-02-01 11:24:00+00'::timestamp with time zone) AND (""timestamp"" < '2025-02-10 11:24:00+00'::timestamp with time zone))"
" -> Index Scan using ""_hyper_1_2149_chunk_SensorDataPoint_aggregationMethodId_sensorI"" on _hyper_1_2149_chunk (cost=0.56..30147.91 rows=34326 width=20) (actual time=207.632..57444.029 rows=19332 loops=1)"
" Index Cond: ((""aggregationMethodId"" = 0) AND (""sensorId"" = ANY ('{13876,13877,19307,10808,10810,10812}'::integer[])) AND (""timestamp"" >= '2025-02-01 11:24:00+00'::timestamp with time zone) AND (""timestamp"" < '2025-02-10 11:24:00+00'::timestamp with time zone))"
"Planning Time: 0.739 ms"
"Execution Time: 115389.752 ms"
Please help!