CALL run_job(3047) - window [ -9223372036836000000, 1723464000000 ]

Hello, when calling run_job(3047) on one of our continuous aggregate jobs, we currently see the output below:

CALL run_job(3047);
DEBUG: Executing policy_refresh_continuous_aggregate with parameters {"end_offset": 86400000, "start_offset": null, "mat_hypertable_id": 41}
LOG: refreshing continuous aggregate "cagg_d3667f50abba11edab06b388bb6129ef_5086_avg_36000000" in window [ -9223372036836000000, 1723464000000 ]
DEBUG: hypertable 1 existing watermark >= new invalidation threshold 161677555200000 1723464000000
DEBUG: invalidation refresh on "cagg_d3667f50abba11edab06b388bb6129ef_5086_avg_36000000" in window [ 1722456000000, 1723464000000 ]
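
For reference, the stored policy parameters shown in the DEBUG line can also be read from the config column of timescaledb_information.jobs (a sketch using our job ID):

SELECT job_id, proc_name, config
FROM timescaledb_information.jobs
WHERE job_id = 3047;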

We have three questions:

  1. Where is the timestamp -9223372036836000000 coming from in the output (LOG: refreshing continuous aggregate "cagg_d3667f50abba11edab06b388bb6129ef_5086_avg_36000000" in window [ -9223372036836000000, 1723464000000 ])? We were expecting this to be a timestamp calculated from the watermark.
  2. Has CALL run_job(3047) completed successfully, even with the reference to the timestamp -9223372036836000000?
  3. last_run_duration for these jobs (in timescaledb_information.job_stats; see the query after this list) is showing 00:00:35.069811, which seems quite slow. Could this be related to the timestamp -9223372036836000000, i.e. is the average being recalculated from the device's very first data point each time the job runs?
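
This is where we read that duration (a sketch; last_run_duration lives in the timescaledb_information.job_stats view):

SELECT job_id, last_run_duration, total_runs, last_run_status
FROM timescaledb_information.job_stats
WHERE job_id = 3047;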

We would greatly appreciate some advice regarding what seems to be an incorrect start_offset of -9223372036836000000.

Many thanks.

Welcome Ian!

For the negative timestamp: that is the earliest possible date, used as the starting point because this window has never been refreshed. Your DEBUG output shows "start_offset": null, and a null start_offset means the refresh window opens at the minimum possible timestamp.

  1. Have you checked the options here? Timescale Documentation | Troubleshooting continuous aggregates
  2. It looks like it did, yes.
  3. Yes, that can be the case. Have you checked whether your query has all the indexes it needs and is optimized for this workload? You can also reduce the start_offset to get a narrower time window and process less data, depending on the type of data you have (see the sketch after this list).
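
Here is a minimal sketch of narrowing the window, assuming your hypertable uses bigint millisecond timestamps (the offsets in your log are in ms) and that your user-facing continuous aggregate is called your_cagg (a placeholder; you pass the view name here, not the internal cagg_..._avg_36000000 name). The offsets below are example values only:

-- Replace the open-ended policy with one that only refreshes recent data.
SELECT remove_continuous_aggregate_policy('your_cagg');
SELECT add_continuous_aggregate_policy('your_cagg',
    start_offset      => 2592000000,          -- 30 days in ms (example value)
    end_offset        => 86400000,            -- 1 day in ms, matching your current policy
    schedule_interval => INTERVAL '1 hour');  -- example cadence

With a finite start_offset, the refresh window no longer opens at the minimum possible timestamp, so each run only materializes recent buckets instead of scanning back to the first data point.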

Can you double-check for any errors in timescaledb_information.job_errors? You can also check job_history in the same schema to see how the job is performing in general.
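
For example (a sketch; both are views in the timescaledb_information schema, and job_history needs a fairly recent TimescaleDB release):

-- Recent failures for this job, if any.
SELECT * FROM timescaledb_information.job_errors WHERE job_id = 3047;

-- Overall run history for the same job.
SELECT * FROM timescaledb_information.job_history WHERE job_id = 3047;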