Aug 02, 2024
Posted by Bob Boule
Many of our users build their architecture on AWS services and add TimescaleDB to their stack to manage, store, and analyze all their time-series data, events, and analytics at scale. Fortunately, since TimescaleDB seamlessly integrates with many AWS offerings, there are several ways to create a flexible stack that works for you.
In this post, we will discuss four different ways to set up your architecture with AWS and TimescaleDB and provide recommendations to help you choose the best option for your use case.
The first option that comes to mind is Timescale’s managed service offering, Timescale Cloud. This path allows you to host your TimescaleDB instance on AWS and use Virtual Private Cloud (VPC) peering to connect the instance to the rest of your AWS infrastructure.
The primary benefit of the managed service is that you can be hands-off in terms of the day-to-day management of the system, since we manage updates and upgrades, along with backups and high availability (HA). With this route, you also get the flexibility to customize compute and storage configurations based on your needs and grow, shrink, or migrate your workloads with just a few clicks.
Here is an overview of what this setup would look like:
If you are interested in running TimescaleDB as a managed service on AWS, this is the option to explore. This configuration offers you the benefits of the managed service while running on the AWS platform and lets you connect it to the rest of your cloud stack through VPC peering.
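To make this concrete, here is a minimal sketch of what writing time-series data to a Timescale Cloud service from an application running in the peered VPC could look like, using Python with psycopg2. The hostname, credentials, and sensor_readings table are placeholders for illustration, not values from a real service.

```python
# Minimal sketch: an application inside the peered VPC writing to Timescale Cloud.
# The hostname, credentials, and table below are placeholders for illustration.
import psycopg2

conn = psycopg2.connect(
    host="your-service.project.tsdb.cloud.timescale.com",  # hypothetical Timescale Cloud endpoint
    port=5432,
    dbname="tsdb",
    user="tsdbadmin",
    password="...",
    sslmode="require",
)

with conn, conn.cursor() as cur:
    # Create a plain table, then promote it to a hypertable partitioned on time.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sensor_readings (
            time        TIMESTAMPTZ NOT NULL,
            device_id   TEXT        NOT NULL,
            temperature DOUBLE PRECISION
        );
    """)
    cur.execute("SELECT create_hypertable('sensor_readings', 'time', if_not_exists => TRUE);")
    cur.execute(
        "INSERT INTO sensor_readings (time, device_id, temperature) VALUES (now(), %s, %s);",
        ("device-42", 21.5),
    )
conn.close()
```

With the VPC peering connection in place, traffic like this can stay on private networking rather than traversing the public internet.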
If you’re looking for more granular control over your instance and how it runs, you can spin up your own EC2 instance and install TimescaleDB in a tailored environment.
When you select this option, you gain operational control over your instance but assume a higher level of responsibility for ongoing maintenance than with a managed cloud offering.
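As a rough illustration of what "spinning up your own EC2 instance" might look like in code, here is a hedged sketch using boto3. The AMI ID, key pair, security group, and instance type are placeholders, and the user-data script only outlines the install, so follow the self-hosted installation docs for the exact repository and package steps.

```python
# Minimal sketch: launching an EC2 instance that installs TimescaleDB at first boot.
# The AMI ID, key pair, and security group are placeholders; the install script is an
# outline only. Consult the TimescaleDB self-hosted install docs for the exact commands.
import boto3

# User data runs once at first boot; this assumes an Ubuntu AMI with apt.
USER_DATA = """#!/bin/bash
set -e
apt-get update
# Add the Timescale package repository, then install PostgreSQL plus the TimescaleDB
# extension (see the self-hosted installation docs for the current package names).
# ... repository setup and `apt-get install ...` steps go here ...
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder Ubuntu AMI
    InstanceType="m6i.large",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                      # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # must allow PostgreSQL traffic on port 5432
    UserData=USER_DATA,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```

From there, everything from storage layout to PostgreSQL tuning is yours to manage, which is exactly the trade-off this option makes.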
The third option is to deploy Timescale via Kubernetes, using Amazon Elastic Kubernetes Service (EKS). In the past, Timescale maintained Helm charts to manage the Kubernetes deployment, but we now recommend that Kubernetes users rely on one of the amazing PostgreSQL Kubernetes operators to simplify installation, configuration, and lifecycle management.
Here is an overview of what this setup would look like:
This option gives you the ability to deploy TimescaleDB as a cloud-native application, adding time-series database functionality to your microservices deployment.
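As a rough sketch of the operator-based approach, the snippet below uses the official Kubernetes Python client to create a database cluster custom resource on EKS. The API group, field names, and container image are assumptions modeled on a CloudNativePG-style operator rather than a specific recommendation, so adapt them to whichever operator you choose.

```python
# Minimal sketch: creating a PostgreSQL cluster on EKS via a Kubernetes operator's
# custom resource, using the official Python client. Assumes a CloudNativePG-style
# operator is already installed and the container image bundles the TimescaleDB
# extension; field names vary by operator, so treat this manifest as illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside EKS

cluster = {
    "apiVersion": "postgresql.cnpg.io/v1",  # assumed operator API group/version
    "kind": "Cluster",
    "metadata": {"name": "timescaledb", "namespace": "databases"},
    "spec": {
        "instances": 2,  # one primary plus one replica
        "imageName": "ghcr.io/example/timescaledb-postgres:16",  # hypothetical image with TimescaleDB
        "postgresql": {
            "shared_preload_libraries": ["timescaledb"],
        },
        "storage": {"size": "50Gi"},  # backed by an EBS-based storage class on EKS
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="postgresql.cnpg.io",
    version="v1",
    namespace="databases",
    plural="clusters",
    body=cluster,
)
```

The operator then handles provisioning, failover, and upgrades declaratively, which is what makes this path feel at home in a microservices environment.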
Building on the use cases above, many of our users combine on-premises or private cloud resources with AWS cloud resources. So, let's talk about consolidating monitoring data.
For example, suppose we collect Prometheus metrics directly from our on-premises assets, leaving us with the question: how can we correlate that data with our cloud-based monitoring data?
The answer is to use an Amazon CloudWatch Logs subscription filter to send events directly to an AWS Lambda function, which then writes each event to our Timescale instance (set up using any of the previous options).
Here is an overview of what this setup would look like:
We’re now storing and analyzing log events, metrics, and traces from both on-premises and cloud environments.
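For reference, here is a minimal sketch of the Lambda function sitting between the subscription filter and Timescale. It decodes the payload CloudWatch Logs delivers to Lambda (base64-encoded and gzip-compressed JSON) and inserts each log event into a hypothetical log_events hypertable; the connection-string environment variable and table layout are assumptions, and the psycopg2 driver would need to be packaged with the function or provided in a layer.

```python
# Minimal sketch of the Lambda function in this pipeline: decode the CloudWatch Logs
# subscription payload and write each log event into a Timescale hypertable.
import base64
import gzip
import json
import os
from datetime import datetime, timezone

import psycopg2


def handler(event, context):
    # Subscription filters deliver a base64-encoded, gzip-compressed JSON payload.
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))

    rows = [
        (
            datetime.fromtimestamp(e["timestamp"] / 1000, tz=timezone.utc),
            payload["logGroup"],
            payload["logStream"],
            e["message"],
        )
        for e in payload["logEvents"]
    ]

    # Assumed env var holding the Timescale connection string.
    conn = psycopg2.connect(os.environ["TIMESCALE_CONNECTION_STRING"])
    with conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO log_events (time, log_group, log_stream, message) VALUES (%s, %s, %s, %s);",
            rows,
        )
    conn.close()
    return {"ingested": len(rows)}
```

Once the events land in a hypertable, they can be queried and correlated with the Prometheus metrics using plain SQL.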
For another example of combining AWS CloudWatch and Timescale, see Monitoring Your Timescale Services With Amazon CloudWatch.
If you’re looking for more monitoring options with Timescale, you can now do audit logging without leaving your cloud database service. Timescale Cloud has the PgAudit PostgreSQL extension available by default to all its customers. And to dig into the performance of your database queries, you can inspect what’s going on under the hood with Insights.
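If you want to try pgaudit on a Timescale service, a minimal sketch could look like the following, assuming you replace the placeholder connection string with your own and that your user has permission to alter database settings.

```python
# Minimal sketch: enabling pgaudit on a database where the extension is available
# (as on Timescale Cloud). The connection string is a placeholder for illustration.
import psycopg2

conn = psycopg2.connect(
    "postgresql://tsdbadmin:...@your-service.tsdb.cloud.timescale.com:5432/tsdb?sslmode=require"
)
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pgaudit;")
    # Audit all writes and DDL; pgaudit.log also accepts classes such as READ, ROLE, MISC, and ALL.
    cur.execute("ALTER DATABASE tsdb SET pgaudit.log = 'write, ddl';")
conn.close()
```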
While this is not a complete list of ways you can use TimescaleDB and AWS services, we’ve covered the most common use cases (and their high-level implementations) to help you navigate your options.
Brand new to Timescale? Sign up for a Timescale account or view all available installation options here.
As always, we encourage you to join our Community Slack channel to chat with the team, ask questions, and see what others are working on.