The Timescale Toolkit team is proud to announce the 1.12.1 version of the TimescaleDB Toolkit extension! This version is now available on Timescale Cloud, as part of our HA Docker image, as a DEB or RPM package, or as source code in our GitHub repository.
Welcome, Candlestick Hyperfunctions!
With Toolkit 1.12.1, you can now use either the `toolkit_experimental.candlestick_agg(timestamp, price, volume)` aggregate or the `toolkit_experimental.candlestick(timestamp, open, high, low, close, volume)` function, depending on whether you are working with tick data or aggregated data.
Both the aggregate and the function forms of candlestick support the following (experimental) accessors, in addition to being re-aggregated via `rollup`: `open`, `high`, `low`, `close`, `open_time`, `high_time`, `low_time`, `close_time`, `volume`, and `vwap` (Volume Weighted Average Price).
NOTE: This functionality improves upon and replaces `toolkit_experimental.ohlc()`, which will be removed in the next release.
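To give a feel for the API, here is a minimal sketch (not taken from the official docs) that aggregates raw ticks into hourly candlesticks and then reads the fields back out with the accessors. The `stock_ticks` table and its columns are hypothetical, and in this release both the aggregate and the accessors live in the `toolkit_experimental` schema:

```sql
-- Hypothetical tick table:
-- stock_ticks("time" timestamptz, symbol text, price double precision, volume double precision)
WITH hourly AS (
    SELECT
        time_bucket('1 hour'::interval, "time") AS bucket,
        symbol,
        toolkit_experimental.candlestick_agg("time", price, volume) AS candlestick
    FROM stock_ticks
    GROUP BY bucket, symbol
)
SELECT
    bucket,
    symbol,
    toolkit_experimental.open(candlestick)   AS open,
    toolkit_experimental.high(candlestick)   AS high,
    toolkit_experimental.low(candlestick)    AS low,
    toolkit_experimental.close(candlestick)  AS close,
    toolkit_experimental.volume(candlestick) AS volume,
    toolkit_experimental.vwap(candlestick)   AS vwap
FROM hourly
ORDER BY bucket, symbol;
```

If your data already arrives as OHLCV rows, the `candlestick(timestamp, open, high, low, close, volume)` function builds the same aggregate type from those pre-aggregated values, and the same accessors (and `rollup`) apply.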
Developer Benefits of Candlestick Hyperfunctions
The new candlestick aggregate hyperfunction introduces a more developer-friendly SQL interface for analyzing financial data. Instead of combining general-purpose functions like `MIN()`/`MAX()`/`FIRST()`/`LAST()`/`COUNT()`, developers can now use a single function that is purpose-built for financial analysis.
The new hyperfunction also integrates with TimescaleDB’s continuous aggregates, which means it is now easier to automatically roll up your price data into different time buckets.
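As a sketch of what that integration can look like, again using the hypothetical `stock_ticks` hypertable from above and assuming a TimescaleDB version whose continuous aggregates accept these aggregates (the finalized form introduced in 2.7+):

```sql
-- Sketch: keep one candlestick per symbol per minute, maintained automatically.
CREATE MATERIALIZED VIEW candlestick_1m
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 minute'::interval, "time") AS bucket,
    symbol,
    toolkit_experimental.candlestick_agg("time", price, volume) AS candlestick
FROM stock_ticks
GROUP BY time_bucket('1 minute'::interval, "time"), symbol
WITH NO DATA;

-- Query it with the same accessors as before, or roll minutes up into hours:
SELECT
    time_bucket('1 hour'::interval, bucket) AS hour,
    symbol,
    toolkit_experimental.close(toolkit_experimental.rollup(candlestick)) AS hourly_close
FROM candlestick_1m
GROUP BY hour, symbol;
```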
New Minimum and Maximum Hyperfunctions
Besides candlestick aggregates, we are introducing new minimum and maximum hyperfunctions: `min_n`/`max_n` and the related `min_n_by`/`max_n_by`.
You can use the `min_n`/`max_n` hyperfunctions to get the N largest or smallest values from a column, while the `min_n_by`/`max_n_by` hyperfunctions behave like `min_n`/`max_n` but also return some associated data for each of the smallest or largest elements, such as another column or even the entire row.
These hyperfunctions should give the same results as a `SELECT ... ORDER BY ... LIMIT n` query, except that they can be composed and combined like other Toolkit aggregates.
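For example, here is a sketch of `max_n` against the same hypothetical `stock_ticks` table. The `into_array` accessor used to unpack the result, like everything else here, lives in the `toolkit_experimental` schema in this release; treat the exact accessor name as an assumption of this sketch:

```sql
-- The three highest prices seen for each symbol, returned as an array.
SELECT
    symbol,
    toolkit_experimental.into_array(
        toolkit_experimental.max_n(price, 3)
    ) AS top_3_prices
FROM stock_ticks
GROUP BY symbol;

-- Roughly equivalent for a single symbol, without the aggregate:
-- SELECT price FROM stock_ticks WHERE symbol = 'AAPL' ORDER BY price DESC LIMIT 3;
```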
Developer Benefits of Minimum and Maximum Hyperfunctions
The advantage of `min_n` and `max_n` over plain PostgreSQL queries is that these aggregates can be used in a continuous aggregate, which is kept up to date as new data is added without being fully recomputed. They are also very efficient to combine via the `rollup` operation, allowing users to, for example, generate a monthly list of maximums from their daily aggregates with minimal added computation (see the sketch below). `min_n_by` and `max_n_by` add further flexibility by letting users associate extra data with each value.
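Here is a sketch of that monthly-from-daily pattern, assuming a hypothetical `daily_max` continuous aggregate that stores a `max_n` state per day:

```sql
-- Hypothetical daily continuous aggregate holding a max_n state per day:
-- CREATE MATERIALIZED VIEW daily_max WITH (timescaledb.continuous) AS
-- SELECT time_bucket('1 day'::interval, "time") AS day,
--        toolkit_experimental.max_n(price, 10) AS top_prices
-- FROM stock_ticks
-- GROUP BY time_bucket('1 day'::interval, "time");

-- Roll the daily states up into a monthly top 10 without rescanning the raw ticks.
SELECT
    date_trunc('month', day) AS month,
    toolkit_experimental.into_array(
        toolkit_experimental.rollup(top_prices)
    ) AS monthly_top_10
FROM daily_max
GROUP BY month;
```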
New Experimental Features
- The candlestick and `min_n`/`max_n` families of hyperfunctions described above.
Bug Fixes
- #568: Allow `approx_count` accessor function to take NULL inputs.
- #574: Add default unit to `interpolated_integral`.
- #624: Remove partial aggregation for candlestick aggregates.
Other Notable Changes
- Reintroduction of RPM packages for CentOS.
- New Homebrew formulae available for macOS installation: `brew install timescale/tap/timescaledb-toolkit`.
- #547: Update pgx to 0.5.0. This is necessary to add the upcoming Postgres 15 support.
- #571: Update CI Docker image for pgx 0.5.0.
- #599: Reduce floating point error when using `stats_agg` in moving aggregate mode.
- #589: Update pgx to 0.5.4.
- #594: Verify that pgx doesn’t generate `CREATE OR REPLACE FUNCTION`.
- #592: Add build script option to install in release mode.
Conclusion
We encourage everyone to try out this new version of the Toolkit and let us know what you think! We’d love to hear about any improvements or adjustments you’d like to see in the experimental functions (even if you like them the way they are and would simply like to see them moved out of our experimental schema). You can reply to this post, send us a note in our Slack channel, or add an issue to the Toolkit repository. Thank you!