Feb 19, 2024
Posted by
Avthar Sewrathan
We hosted Time-Series for Developers, a 30-minute technical session focused on laying out foundational concepts and showing developers how to apply time-series thinking to their day-to-day work. And we’ve just published the recording and slides, so everyone can catch up on the session (or rewatch as you start new projects) and explore related resources.
If you’re new to time-series data, you’ll learn (a) what time-series data is in practical terms and (b) the types of questions you can ask and answer with it. If you’re experienced with time-series data or TimescaleDB, you’ll (a) brush up on your query skills and (b) learn a few common mistakes to avoid (like making sure you use the right date format: the international standard vs. the US month-first convention).
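To make the date-format pitfall concrete, here is a small illustrative sketch (not from the session itself) of how PostgreSQL, which TimescaleDB builds on, interprets an ambiguous date string depending on its DateStyle setting:

```sql
-- '05/04/2024' is ambiguous: April 5 (day-first) or May 4 (US month-first)?
-- PostgreSQL resolves it according to the DateStyle setting:
SET datestyle = 'ISO, MDY';
SELECT '05/04/2024'::date;  -- read as May 4, 2024

SET datestyle = 'ISO, DMY';
SELECT '05/04/2024'::date;  -- read as April 5, 2024

-- The unambiguous fix: write ISO 8601 dates, which parse the same way
-- under any DateStyle.
SELECT '2024-05-04'::date;  -- always May 4, 2024
```

Sticking to ISO 8601 (`YYYY-MM-DD`) everywhere sidesteps the problem entirely.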
We focus on practical definitions, not theory—using examples to demonstrate how time-series data is all around us, from package delivery to movie production to money transfer apps.
In my time working at Timescale, I’ve seen all of the above use cases and more. You’ll see how developers use time-series data in their projects every day, and you’ll learn its two key differentiators: treating time as the primary axis of analysis and collecting all data points for a system rather than overwriting old values. Together, these enable you to analyze the past, monitor the present, and plan for the future.
To demonstrate just how powerful time-series analysis can be, we spend 15 to 20 minutes walking through a mock scenario where we’re tasked with analyzing NYC taxicab data to find ways to cut carbon emissions, suggest routes to travelers, and more.
You’ll learn how to use pgAdmin, three simple—yet powerful—queries, and how to JOIN time series and relational data, all while we analyze a real dataset and answer questions.
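As a taste of the kind of query covered in the session, here is a hedged sketch of joining time-series and relational data. The schema is hypothetical (a `rides` hypertable of taxi trips plus a small relational `rates` lookup table); adapt the names to your own dataset:

```sql
-- Join time-series data (rides) with relational data (rates) to count
-- trips per rate type on a single day. Assumes a hypothetical schema:
--   rides(pickup_datetime TIMESTAMPTZ, rate_code INT, ...)
--   rates(rate_code INT, description TEXT)
SELECT rates.description, COUNT(*) AS num_trips
FROM rides
JOIN rates ON rides.rate_code = rates.rate_code
WHERE rides.pickup_datetime >= '2016-01-01'
  AND rides.pickup_datetime <  '2016-01-02'
GROUP BY rates.description
ORDER BY num_trips DESC;
```

Because TimescaleDB hypertables are regular PostgreSQL tables underneath, standard JOIN syntax works unchanged.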
We tackle one part of the NYC Taxi mission during the session, but there are two others! They dive into more advanced queries and questions, like using geospatial information to enhance your analysis, and into special TimescaleDB functions, like time_bucket, that simplify complex SQL queries.
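For a sense of how time_bucket simplifies things, here is a minimal sketch, again assuming a hypothetical `rides` table with a `pickup_datetime` column:

```sql
-- time_bucket() groups timestamps into fixed-width intervals, replacing
-- the date_trunc/arithmetic gymnastics plain SQL would otherwise need.
-- Counts taxi pickups in five-minute buckets over one day.
SELECT time_bucket('5 minutes', pickup_datetime) AS five_min,
       COUNT(*) AS num_pickups
FROM rides
WHERE pickup_datetime >= '2016-01-01'
  AND pickup_datetime <  '2016-01-02'
GROUP BY five_min
ORDER BY five_min;
```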
We received questions throughout the session (thank you to everyone who submitted one!), with a selection below:
That depends on what kind of data you’re working with and the read-and-write patterns you’re expecting. I’d recommend reading our hypertable documentation—it’ll help you understand how to configure your hypertables for your scenario.
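For reference, here is a minimal sketch of creating and configuring a hypertable, using a hypothetical sensor-readings table. The `chunk_time_interval` setting is the main knob the documentation discusses: size it so that recently written chunks (and their indexes) fit comfortably in memory for your write rate:

```sql
-- Hypothetical sensor-readings table.
CREATE TABLE conditions (
  time        TIMESTAMPTZ       NOT NULL,
  device_id   TEXT,
  temperature DOUBLE PRECISION
);

-- Convert it to a hypertable partitioned on the time column,
-- with one chunk per day of data.
SELECT create_hypertable('conditions', 'time',
                         chunk_time_interval => INTERVAL '1 day');
```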
If you’d like to provide more details or get more help, message us in our Community Slack and we can chat more about your use case.
You can do either of those things. Batch updating from failed inserts is fine and shouldn’t have too much of an impact on performance. The performance hit comes when you’re commonly inserting data out of order. Doing that infrequently via a batch update should be fine.
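One common way to handle the batch-update case is PostgreSQL’s upsert syntax, which works on hypertables too. A hedged sketch, assuming a hypothetical `conditions` hypertable with a unique index on `(time, device_id)`:

```sql
-- Batch-upsert late-arriving or previously failed rows in one statement.
-- Running a batch like this occasionally is far cheaper than routinely
-- inserting out-of-order data row by row.
INSERT INTO conditions (time, device_id, temperature)
VALUES
  ('2024-02-18 09:00:00+00', 'dev-1', 21.4),
  ('2024-02-18 09:05:00+00', 'dev-1', 21.6)
ON CONFLICT (time, device_id)
  DO UPDATE SET temperature = EXCLUDED.temperature;
```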
In general, high-fidelity data regarding what’s happening in your body is a huge area of opportunity.
Personally, I think wearables that continuously monitor different biomarkers will be hugely impactful. For example, continuous glucose monitoring has the potential to help diabetes patients far more than the traditional once-a-day finger-prick test.
To learn about future sessions and get updates about new releases and other technical content, subscribe to our Biweekly Newsletter.
...and, for handy reference, here are the links to the recording and slides once more.
We hope to see you at the next one!