In brief:
My continuous aggregate refreshes successfully when I call the refresh_continuous_aggregate
procedure directly, but when the same refresh runs through the job system (either on schedule or via CALL run_job()
), the job fails with a “timestamp out of range” SQL error.
TimescaleDB version: 2.11.0
PostgreSQL version: 15.3
Other software:
OS: Ubuntu 20.04
Platform:
Install method: Apt
Environment: Production
Again, running CALL refresh_continuous_aggregate('readings_fifteen', NULL, INTERVAL '30 minutes');
directly works fine; it is only when the refresh runs through the job system that I hit the error.
Since the error seems to be related to the job system, I’ve focused on how I specify the intervals when adding the cagg policy. I’ve tried a number of variations:
SELECT add_continuous_aggregate_policy('readings_fifteen', start_offset => NULL, end_offset => INTERVAL '30 min', schedule_interval => INTERVAL '30m');
SELECT add_continuous_aggregate_policy('readings_fifteen', start_offset => NULL, end_offset => '30 min', schedule_interval => '30m');
SELECT add_continuous_aggregate_policy('readings_fifteen', start_offset => NULL, end_offset => '00:30:00'::interval, schedule_interval => '00:15:00'::interval);
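In each case the policy registers without error, and the stored configuration looks the same; I confirmed that from the jobs view:

```sql
-- Check how the policy's offsets were actually stored
SELECT job_id, schedule_interval, config
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_refresh_continuous_aggregate';
```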
Here’s the error when invoking the job directly with run_job:
db_name=> SET client_min_messages TO DEBUG1;
SET
db_name=> call run_job(1008);
DEBUG: rehashing catalog cache id 35 for pg_namespace; 9 tups, 4 buckets
LINE 1: call run_job(1008);
^
DEBUG: Executing policy_refresh_continuous_aggregate with parameters {"end_offset": "00:30:00", "start_offset": null, "mat_hypertable_id": 8}
ERROR: timestamp out of range
The entries in timescaledb.job_errors have similar info:
sqlerrorcode: 22008
err_message: timestamp out of range
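(The same details are also exposed through the timescaledb_information.job_errors view, e.g.:)

```sql
-- Most recent failures for this job
SELECT job_id, pid, start_time, finish_time, sqlerrcode, err_message
FROM timescaledb_information.job_errors
WHERE job_id = 1008
ORDER BY start_time DESC
LIMIT 5;
```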
The job’s start and finish times are within a couple of milliseconds of each other, so it doesn’t look like a problem with the data itself (or at least the job fails before getting very far into it).
Lastly, here is the config pulled from the jobs table: {"end_offset": "00:30:00", "start_offset": null, "mat_hypertable_id": 8}
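In case it’s relevant, as a stopgap I’m considering wrapping the manual call (which works) in a user-defined action instead of the built-in policy. A rough sketch (the procedure name is mine, and I haven’t yet verified that refresh_continuous_aggregate can be called from a plpgsql procedure in a background job on this version):

```sql
-- Custom job mirroring the manual refresh call that works
CREATE OR REPLACE PROCEDURE manual_refresh_readings_fifteen(job_id INT, config JSONB)
LANGUAGE plpgsql AS
$$
BEGIN
  CALL refresh_continuous_aggregate('readings_fifteen', NULL, INTERVAL '30 minutes');
END
$$;

-- Schedule it every 30 minutes
SELECT add_job('manual_refresh_readings_fifteen', INTERVAL '30 minutes');
```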