Inserts to TimescaleDB stop after 1.5 million rows

Hi all,
I implemented a simple architecture where a sensor sends continuous values to TimescaleDB:

Windows Server 2022 Standard (virtual server)
Intel(R) Core™ i5-5250U CPU @ 1.60GHz (2 processors)
RAM 8 GB

Postgresql postgresql-16.4-1
Timescaledb timescaledb-2.16.1
ODBC driver psqlodbc-13-01

A single ODBC client (an external virtual server) connects to the PostgreSQL hypertable table_real_time, structured as below.

The sensor cyclically sends data to the hypertable via ODBC, almost 3000 datapoints every 10 seconds.
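
The statements the client submits look roughly like this (the values are purely illustrative, based on the table structure shown below):

INSERT INTO table_real_time (_name, _numericid, _value, _timestamp, _quality)
VALUES ('sensor_tag_001', 1, 23.5, now(), 192);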

After some time, when the table reaches (almost) 1.5 million rows, INSERT stops working.
Note that I have not made any configuration changes yet (except for shared_buffers = 2GB); it may be necessary to do so, but here I'm asking where to start, so any suggestions are welcome!

postgres=# \di
                                List of relations
 Schema |               Name               | Type  |  Owner   |       Table
--------+----------------------------------+-------+----------+-------------------
 public | table_real_time__timestamp_idx   | index | postgres | table_real_time
 public | ix_name_time                     | index | postgres | table_real_time
 
postgres=# \diS+
                                                                         List of relations
   Schema   |                      Name                      | Type  |  Owner   |          Table           | Persistence | Access method |    Size    | Description
------------+------------------------------------------------+-------+----------+--------------------------+-------------+---------------+------------+-------------
 public     | table_real_time__timestamp_idx                 | index | postgres | table_real_time          | permanent   | btree         | 8192 bytes |
 public     | ix_name_time                                   | index | postgres | table_real_time          | permanent   | btree         | 8192 bytes |
CREATE TABLE IF NOT EXISTS public.table_real_time
(
    _name text COLLATE pg_catalog."default" NOT NULL,
    _numericid integer,
    _value double precision,
    _timestamp timestamp with time zone NOT NULL,
    _quality integer
)

TABLESPACE pg_default;

ALTER TABLE IF EXISTS public.table_real_time
    OWNER to postgres;


CREATE INDEX IF NOT EXISTS table_real_time__timestamp_idx
    ON public.table_real_time USING btree
    (_timestamp DESC NULLS FIRST)
    TABLESPACE pg_default;


CREATE INDEX IF NOT EXISTS ix_name_time
    ON public.table_real_time USING btree
    (_name COLLATE pg_catalog."default" ASC NULLS LAST, _timestamp DESC NULLS FIRST)
    TABLESPACE pg_default;


CREATE OR REPLACE TRIGGER ts_cagg_invalidation_trigger
    AFTER INSERT OR DELETE OR UPDATE 
    ON public.table_real_time
    FOR EACH ROW
    EXECUTE FUNCTION _timescaledb_functions.continuous_agg_invalidation_trigger('2');


CREATE OR REPLACE TRIGGER ts_insert_blocker
    BEFORE INSERT
    ON public.table_real_time
    FOR EACH ROW
    EXECUTE FUNCTION _timescaledb_functions.insert_blocker();
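
For reference, the hypertable itself was created with something like the call below; I don't have the exact statement at hand, so the default chunk interval is assumed:

-- assumed creation call; the 7-day default chunk interval matches
-- what timescaledb_information.dimensions reports for this table
SELECT create_hypertable('public.table_real_time', '_timestamp');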

Best Regards
Alberto_p

Hi Alberto, welcome!

Have you added any retention policy? Can you share the hypertable parameters you used for this table?

Hello!
No policy was applied initially; afterwards I tried setting the one below, but that didn't resolve it yet. Do you have an idea of what the issue with my setup could be?


postgres=# 
  hypertable_name  | job_id |                     config                     | schedule_interval | job_status | last_run_status |      last_run_started_at      |          next_start           | total_runs | total_successes | total_failures
-------------------+--------+------------------------------------------------+-------------------+------------+-----------------+-------------------------------+-------------------------------+------------+-----------------+----------------
 table_real_time |   1005 | {"drop_after": "01:00:00", "hypertable_id": 2} | 01:00:00          | Scheduled  | Success         | 2024-09-03 00:44:03.274903-07 | 2024-09-03 01:44:03.354234-07 |          2 |               2 |              0
(1 row)
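
For reference, I believe I added the policy with a call along these lines (the 1-hour drop_after matches the config column above):

SELECT add_retention_policy('table_real_time', INTERVAL '1 hour');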

Best Regards and thanks in advance
Alberto_p

more details…

postgres=# SELECT *  FROM timescaledb_information.dimensions   WHERE hypertable_name = 'table_real_time';
 hypertable_schema |  hypertable_name  | dimension_number | column_name |       column_type        | dimension_type | time_interval | integer_interval | integer_now_func | num_partitions
-------------------+-------------------+------------------+-------------+--------------------------+----------------+---------------+------------------+------------------+----------------
 public            | table_real_time   |                1 | _timestamp  | timestamp with time zone | Time           | 7 days        |                  |                  |
(1 row)

postgres=# SELECT * FROM timescaledb_information.chunks WHERE hypertable_name = 'table_real_time';
 hypertable_schema |  hypertable_name  |     chunk_schema      |    chunk_name    | primary_dimension |  primary_dimension_type  |      range_start       |       range_end        | range_start_integer | range_end_integer | is_compressed | chunk_tablespace |      chunk_creation_time
-------------------+-------------------+-----------------------+------------------+-------------------+--------------------------+------------------------+------------------------+---------------------+-------------------+---------------+------------------+-------------------------------
 public            | table_real_time   | _timescaledb_internal | _hyper_2_9_chunk | _timestamp        | timestamp with time zone | 2024-08-28 17:00:00-07 | 2024-09-04 17:00:00-07 |                     |                   | f             |                  | 2024-08-28 23:13:44.485481-07

Hi Alberto,

Look at the drop_after config.

tsdb=> select '01:00:00'::interval = '1 hour'::interval;
 ?column?
----------
 t
(1 row)

It says that the data should be dropped after 1 hour. But if you look at the chunk time interval, it's 7 days of data. That's not consistent: the retention policy drops whole chunks, so you'd need chunks of 1 hour or less for the data to actually be deleted.

Maybe you configured it by mistake? Do you actually intend to drop the data? If not, just call delete_job:

select delete_job(1005);
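
And if you do want to keep only the last hour of data, the chunk interval has to be at most that size so that whole chunks can age out. A sketch, with the intervals only as examples:

-- note: set_chunk_time_interval only affects chunks created from now on
SELECT set_chunk_time_interval('table_real_time', INTERVAL '1 hour');
SELECT add_retention_policy('table_real_time', INTERVAL '1 hour');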

Thanks a lot (I'm a beginner; I deleted it now). From this I understood that my table is able to store more than 1.5 million rows, and that I should focus my checks on the ODBC driver or the Windows application that runs it.

Does a view or similar exist where I can see the SELECT/INSERT/UPDATE statements submitted to the database engine (like Profiler in MS SQL)?

Thanks
Best Regards

Hi Alberto, you're welcome!

I'm not familiar with the ODBC driver, but generally you can enable logging in the driver itself, or check your server logs to see what's going on.
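
For example, you can make Postgres log every statement on the server side (only an illustration; on a busy system log_min_duration_statement is usually a better fit):

-- as a superuser; written to postgresql.auto.conf and applied on reload
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();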

If you use a framework or programming language to access the database, you can generally also make it more verbose to get this information.

In Postgres, we also have the extension pg_stat_statements, which tracks the performance of every query.
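
A minimal sketch of using it (assuming pg_stat_statements has been added to shared_preload_libraries in postgresql.conf, which requires a server restart):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- top statements by total execution time (PostgreSQL 13+ column names)
SELECT query, calls, total_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;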