Continuous aggregates in microseconds

Hello,

I was trying to ingest my dataset into a hypertable directly from a CSV file and noticed that the values in the timestamp column were transferred with millisecond precision, even though the source data has microsecond precision.
I then tried inserting microsecond values into the table with an INSERT INTO statement, and it likewise failed to show all six decimal places.
From the forum and my research on the internet, the timestamp type is supposed to have microsecond precision.
Does anyone know why this happens?
Is there any other way to ingest my data without losing information (for example, multiplying it by 10^6 and storing it as BIGINT), so that I can still use the continuous aggregate functionality?

Thanks for your attention

Hi IFT, Postgres’ native timestamp datatype already stores values with microsecond precision.
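
As a quick sanity check in psql (the literal below is just an example value), a timestamptz keeps all six fractional digits:

    SELECT '2024-05-01 12:00:00.123456'::timestamptz;
    -- 2024-05-01 12:00:00.123456+00  (the offset depends on your session's TimeZone setting)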

Timescale has “native” support for bigint/integer time columns as well; it’s just not via Postgres’ native timestamp/timestamptz datatype.

You can use your own column storing microseconds and then use the time_partitioning_func option of create_hypertable.
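
For reference, here is a minimal sketch of the BIGINT route suggested in the question (multiplying timestamps to epoch microseconds and storing them as BIGINT). The table and column names (sensor_data, ts_us, value) are made up for illustration; a plain BIGINT time column is accepted by create_hypertable directly, and the integer "now" function is what continuous aggregates need on an integer time column:

    -- Hypothetical table: time stored as microseconds since the Unix epoch.
    CREATE TABLE sensor_data (
        ts_us BIGINT NOT NULL,            -- epoch microseconds
        value DOUBLE PRECISION
    );

    -- For an integer time column, chunk_time_interval is given in the
    -- same unit as the column: one day = 86400000000 microseconds.
    SELECT create_hypertable('sensor_data', 'ts_us',
                             chunk_time_interval => 86400000000);

    -- Continuous aggregates on integer time columns need a function
    -- that returns the current time in that unit.
    CREATE OR REPLACE FUNCTION unix_now_us() RETURNS BIGINT
    LANGUAGE SQL STABLE AS
    $$ SELECT (extract(epoch FROM now()) * 1000000)::BIGINT $$;

    SELECT set_integer_now_func('sensor_data', 'unix_now_us');

    -- One-minute buckets, expressed in microseconds (60 * 10^6).
    CREATE MATERIALIZED VIEW sensor_data_1min
    WITH (timescaledb.continuous) AS
    SELECT time_bucket(60000000::BIGINT, ts_us) AS bucket,
           avg(value) AS avg_value
    FROM sensor_data
    GROUP BY bucket;

With this in place, time_bucket and refresh policies work on the BIGINT column, as long as every interval (chunk size, bucket width, policy windows) is expressed in the same unit, microseconds in this sketch.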