A single ODBC client (an external virtual server) connects to the PostgreSQL hypertable table_real_time, structured as shown below.
The sensor cyclically sends data to the hypertable via ODBC, almost 3000 datapoints every 10 seconds.
After some time, when the table reaches (almost) 1.5 million rows, INSERT stops working.
I should mention that I have not made any configuration changes yet (except for shared_buffers = 2GB), and it may be necessary to do so, but here I am asking where I should start; any suggestions are welcome!
postgres=# \di
                            List of relations
 Schema |              Name              | Type  |  Owner   |      Table
--------+--------------------------------+-------+----------+-----------------
 public | table_real_time__timestamp_idx | index | postgres | table_real_time
 public | ix_name_time                   | index | postgres | table_real_time

postgres=# \diS+
                            List of relations
 Schema |              Name              | Type  |  Owner   |      Table      | Persistence | Access method |    Size    | Description
--------+--------------------------------+-------+----------+-----------------+-------------+---------------+------------+-------------
 public | table_real_time__timestamp_idx | index | postgres | table_real_time | permanent   | btree         | 8192 bytes |
 public | ix_name_time                   | index | postgres | table_real_time | permanent   | btree         | 8192 bytes |
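Note the 8192-byte index sizes: for a TimescaleDB hypertable, the parent table is essentially an empty shell, and the actual rows and indexes live in the chunk tables, so \di on the parent says little about where the data is. A quick sketch of how to check the real size and chunk layout:

-- Total on-disk size of the hypertable, chunks included
SELECT pg_size_pretty(hypertable_size('public.table_real_time'));

-- One row per chunk, with its covered time range
SELECT chunk_name, range_start, range_end
FROM timescaledb_information.chunks
WHERE hypertable_name = 'table_real_time';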
CREATE TABLE IF NOT EXISTS public.table_real_time
(
    _name text COLLATE pg_catalog."default" NOT NULL,
    _numericid integer,
    _value double precision,
    _timestamp timestamp with time zone NOT NULL,
    _quality integer
)
TABLESPACE pg_default;

ALTER TABLE IF EXISTS public.table_real_time
    OWNER to postgres;

CREATE INDEX IF NOT EXISTS table_real_time__timestamp_idx
    ON public.table_real_time USING btree
    (_timestamp DESC NULLS FIRST)
    TABLESPACE pg_default;

CREATE INDEX IF NOT EXISTS ix_name_time
    ON public.table_real_time USING btree
    (_name COLLATE pg_catalog."default" ASC NULLS LAST, _timestamp DESC NULLS FIRST)
    TABLESPACE pg_default;

CREATE OR REPLACE TRIGGER ts_cagg_invalidation_trigger
    AFTER INSERT OR DELETE OR UPDATE
    ON public.table_real_time
    FOR EACH ROW
    EXECUTE FUNCTION _timescaledb_functions.continuous_agg_invalidation_trigger('2');

CREATE OR REPLACE TRIGGER ts_insert_blocker
    BEFORE INSERT
    ON public.table_real_time
    FOR EACH ROW
    EXECUTE FUNCTION _timescaledb_functions.insert_blocker();
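For context, the two triggers above are installed automatically by TimescaleDB when a plain table is converted into a hypertable; they are not written by hand. A minimal sketch of that conversion (the one-hour chunk interval is only an illustration tied to the retention discussion below, not necessarily what was used here; TimescaleDB's default is 7 days):

-- Convert the plain table into a hypertable partitioned on _timestamp
SELECT create_hypertable(
    'public.table_real_time',
    '_timestamp',
    chunk_time_interval => INTERVAL '1 hour'
);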
It says that the data should be dropped after 1 hour, but if you look at the time interval, it is 7 days of data. That is not consistent: you would need chunks of 1 hour or less for the data to actually be deleted.
Maybe you configured it by mistake? Do you actually intend to drop the data? If not, just call delete_job:
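A sketch of how that might look (the 1000 is a placeholder; use whatever job_id the first query returns for your retention policy):

-- Find the retention job attached to the hypertable
SELECT job_id, proc_name, config
FROM timescaledb_information.jobs
WHERE hypertable_name = 'table_real_time';

-- Remove it (replace 1000 with the job_id returned above)
SELECT delete_job(1000);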
Thanks a lot (I'm a beginner; I have deleted it now). From this I understood that my table is able to store more than 1.5 million rows, and that I have to focus my check on the ODBC driver or the Windows application that runs it.
Is there a view or similar place where I can see the SELECT/INSERT/UPDATE statements submitted to the database engine (like Profiler in MS SQL Server)?
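The closest built-in equivalents I am aware of are pg_stat_activity for statements currently running and the pg_stat_statements extension for cumulative statistics; a minimal sketch, assuming the extension is available (it requires shared_preload_libraries = 'pg_stat_statements' in postgresql.conf and a server restart):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Statements currently executing against this database
SELECT pid, state, query
FROM pg_stat_activity
WHERE datname = current_database();

-- Cumulative statistics per normalized statement
-- (the column is named total_time instead of total_exec_time on PostgreSQL < 13)
SELECT calls, total_exec_time, query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;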