All,
I’m not a power user, simply using a database (currently influx1.8) for archival of smarthome data.
So I’m a bit surprised that after reading about every article and forum post on timescaledb and downsampling, I haven’t found what I’m looking for.
I want to gradually downsample data over time, as probably a lot of people do: real-time data for a day, 10-minute-averages for a week, 30-minute-averages for 30 days, and so on.
I learned that for this I need:
- materialized views for every timeframe (let’s call them 1d_view, 7d_view etc)
- continuous aggregates on top of continuous aggregates, aggregating data off the previous one
- retention policies for the raw data and the materialized views (1d_view, 7d_view)
and I can then query those materialized views for the data I want. Which is pretty much the same way I currently do it with influxdb (continuous queries and retention policies).
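To make the steps above concrete, this is roughly what I understand the setup to look like for a single measurement (all names like `livingroom_temp` are my own placeholders, and I believe aggregates-on-aggregates need TimescaleDB 2.9+):

```sql
-- 10-minute averages on the raw hypertable, kept for 7 days
CREATE MATERIALIZED VIEW livingroom_temp_10m
WITH (timescaledb.continuous) AS
SELECT time_bucket('10 minutes', time) AS bucket,
       avg(value) AS value
FROM livingroom_temp
GROUP BY bucket;

-- 30-minute averages stacked on top of the 10-minute aggregate
CREATE MATERIALIZED VIEW livingroom_temp_30m
WITH (timescaledb.continuous) AS
SELECT time_bucket('30 minutes', bucket) AS bucket,
       avg(value) AS value
FROM livingroom_temp_10m
GROUP BY 1;

-- drop raw data after 1 day and the 10-minute rollup after 7 days
SELECT add_retention_policy('livingroom_temp', INTERVAL '1 day');
SELECT add_retention_policy('livingroom_temp_10m', INTERVAL '7 days');
```

(Plus refresh policies via `add_continuous_aggregate_policy`, which I've left out here.) Please correct me if I've misunderstood any of this.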
Now I found that my smarthome system (openhab) can insert data into a postgresql/timescale database (and telegraf can, too). The way that works is that every measurement (temperature, humidity, etc.) is stored as its own table, with time/value pairs as the rows (currently the same with influx).
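To make the shape concrete, here is a sketch of what those tables look like (table and column names are my own examples, not anything openhab or telegraf guarantees):

```sql
-- openhab style: one narrow table per item, just time/value pairs
CREATE TABLE livingroom_temp (
    time  TIMESTAMPTZ NOT NULL,
    value DOUBLE PRECISION
);

-- telegraf style: one table per measurement with several value columns
CREATE TABLE cpu (
    time         TIMESTAMPTZ NOT NULL,
    usage_user   DOUBLE PRECISION,
    usage_system DOUBLE PRECISION
);
```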
Every example I have found (here, for instance) creates a continuous aggregate from a single, named table (FROM tablename).
How can I create an aggregate that runs on every table and on every value in that table? Can I use a wildcard in the query such as “FROM %”?
This is complicated by the fact that openhab creates simple time/value tables while telegraf creates wider tables with time/value1/value2/etc. Whatever the mechanism, it would also have to pick up new tables automatically, for example when a new measurement is created.
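The closest workaround I can picture is generating one CREATE statement per table from the catalog and executing the results, e.g. with psql's `\gexec` (a sketch, assuming every openhab-style table lives in `public` and has `time` and `value` columns):

```sql
-- generate one continuous-aggregate statement per table, then run them
-- with \gexec on the next psql line (no trailing semicolon on the query)
SELECT format(
  $$CREATE MATERIALIZED VIEW %I_10m WITH (timescaledb.continuous) AS
    SELECT time_bucket('10 minutes', time) AS bucket, avg(value) AS value
    FROM %I GROUP BY bucket$$,
  table_name, table_name)
FROM information_schema.tables
WHERE table_schema = 'public' AND table_type = 'BASE TABLE'
\gexec
```

But I'd still have to re-run this whenever a new table appears, which is exactly why I'm asking whether something built-in exists.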
Like I said, I’m surprised I couldn’t find an example for this scenario, which I assume is rather common.
Thanks!