Launch Weeks Review: Start, Scale, Stay With Postgres

It’s been an exciting and ambitious time at Timescale: the past three weeks have brought back-to-back daily launches. From Cloud Week to Dynamic Infrastructure Week and, finally, to Scaling Postgres Week, we’ve been on an adventure demonstrating how Timescale simplifies, scales, and supercharges your database experience.

But this wasn’t about a set of random features. This was about realizing our vision:

Timescale enables developers to start on Postgres, scale with Postgres, stay with Postgres.

Let’s back up. Today, PostgreSQL is the most popular, fastest-growing database among professional developers. It has its roots as a rock-solid workhorse for transactional (OLTP) workloads and has proved its mettle over decades through millions of production deployments.

Yet when a new workload arises, we’ve seen time and time again that new specialized databases are created to support it: InfluxDB for time series, ClickHouse for analytics, or, most recently, Pinecone for AI vector embeddings.

Specialized databases often seem easier to start with but quickly introduce problems: less reliability, missing long-tail features, a smaller community, a narrower ecosystem, operational complexity, data silos, and so on. Developers face a choice: build on one generalized database or on a complex stack of several specialized ones.

Over the past six years, Timescale has shown that developers can skip the specialized database and get the best of both worlds: PostgreSQL, specialized for their use case.

Specializing and supercharging PostgreSQL for specific use cases is why hundreds of thousands of developers have already chosen us.

We do this in two ways, based on what makes sense from both an architectural and a product perspective. First, we build capabilities via extensions “inside” the PostgreSQL software: timescaledb for scale and performance, timescaledb-toolkit for analytical functions, timescaledb_osm for bottomless tiered storage, and timescale_vector for high-performance AI embeddings. Second, we build cloud services “around” the PostgreSQL software for dynamic infrastructure, advanced operations and reliability, bottomless tiered storage, observability insights, and more.

We’re on a mission to be the go-to database for any use case that requires the best of Postgres—not just time series and analytics. The best of Postgres includes the most scalable, cost-effective cloud experience, the best developer tools for running databases in production, and the greatest capabilities to grow with you as your service expands in popularity and your needs evolve.

Start with dynamic cloud infrastructure, which allows you to scale your compute independently while only paying for the storage you use. Then, add powerful capabilities that scale with your needs, whether the event logs of your gaming software, the orders and the audit logs in your e-commerce application, the sensor data from your manufacturing and energy use cases, or the vector embeddings for your AI application as you ingest ever more data.

And we don’t just talk this talk; it’s how we build our own services. For example, our new database observability offering, Timescale Insights, which provides more granular visibility into your database and query performance than ever before, is built on a standard Timescale service. It serves customer-facing APIs and dashboards while storing over one trillion records and growing by 10 billion ingested records every day. It’s powered by the same type of Timescale database we make available to all our customers.

So even as you scale, you can continue to use the database you know, love, and trust—that is, the most popular database among all professional developers. You need not risk replatforming to a more unconventional and unfamiliar database engine. 

That’s what we mean when we say: with Timescale, start on Postgres, scale with Postgres, stay with Postgres.  

So, to all our customers and community: thank you for trusting us with your services, and join us as we wrap up by reviewing this whirlwind of launches. We’ll walk through all the capabilities Timescale launched to empower you to scale with ease, optimize your costs, and make your PostgreSQL experience even better. 

So, with that in mind—here are the past three Launch Weeks. 🚀

Week 1: Cloud Week

Mature Cloud Platform

Blog: Refining a Mature Cloud Platform: Cloud Week at Timescale

Building mature, production-ready databases is not easy. Before announcing any new features, we kicked off our first launch week by recapping the work we’ve done in the past years to make Timescale ready for the most demanding mission-critical workloads—ensuring high availability and uptime, data reliability via automatic backup and snapshots, rapid recovery strategies leveraging a decoupled compute and storage architecture, robust security, compliance with data regulations, 24x7 global support coverage, and much more. 

Migration Tooling

Blog: Migrating a Terabyte-Scale PostgreSQL Database to Timescale With (Almost) Zero Downtime

Being a mature cloud platform means dealing with migrations of production databases. At Timescale, we experience first-hand on a daily basis how hard these are and how important it is for the migrating company to reduce downtime as much as possible. That’s why we developed Live Migrations—a well-tested, (almost) zero-downtime migration strategy. It takes the pain out of database migrations, offering an effective, secure, and straightforward method for moving terabyte-scale PostgreSQL databases to Timescale.

Enterprise Tier

Blog: New Timescale Enterprise Tier: A Solution for Mature Applications

Web page: https://www.timescale.com/enterprise

Tailored for large enterprises and demanding workloads, this tier introduces enhanced features to meet the unique requirements of these organizations. Timescale’s Enterprise Tier brings new security and compliance features; additional reliability and disaster recovery features, such as 14-day point-in-time recovery; stricter, financially backed SLAs; increased consultative services, such as compression and migration assistance; dedicated production-level 24x7 support; and additional ways to procure and contract Timescale services. The Enterprise Tier guarantees secure, reliable, and expert-supported database services for those businesses with the strictest requirements.

Insights

Blog: Database Monitoring and Query Optimization: Introducing Insights on Timescale

Blog - Technical Deep Dive: How We Scaled PostgreSQL to 350 TB+ (With 10B New Records/Day)

Insights, a powerful tool designed to enhance database monitoring and query optimization for PostgreSQL, is now available to all users of the Timescale platform. It allows you to investigate performance issues with an unparalleled level of granularity, offering detailed statistics on timing, latency, and memory usage for individual queries—it’s already a customer favorite for diagnosing and solving database challenges!

And in our technical deep dive, you can read how we built Insights by dogfooding our own product: Insights is powered by a standard Timescale database service ingesting 10+ billion records per day.

Connection Pooling

Blog: Connection Pooling on Timescale, or Why PgBouncer Rocks

Docs: Connection Pooling

Timescale now offers its own connection pooler in General Availability, built on PgBouncer. It’s well known that PostgreSQL prefers fewer, long-lived connections—creating new connections is resource-intensive. Connection pooling is an essential tool for solving this problem: the pooler efficiently manages your short-lived connections, reducing the strain on your PostgreSQL database and increasing performance in your serverless, web, and IoT applications.
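The core idea is simple: hand out a small set of long-lived connections to many short-lived clients. Here’s a minimal, generic sketch of that pattern in Python (a toy illustration of the concept, not how PgBouncer is implemented):

```python
import queue

class ConnectionPool:
    """Toy pool: reuses a fixed set of long-lived connections."""

    def __init__(self, connect, size):
        # Open `size` connections once, up front.
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())

    def acquire(self):
        # Blocks until a connection is free; never opens a new one.
        return self._idle.get()

    def release(self, conn):
        self._idle.put(conn)

# Demo: a counter stands in for an expensive PostgreSQL connection.
opened = 0
def fake_connect():
    global opened
    opened += 1
    return object()

pool = ConnectionPool(fake_connect, size=3)
for _ in range(100):           # 100 short-lived "clients"
    conn = pool.acquire()
    pool.release(conn)

print(opened)  # → 3: only three connections ever opened for 100 checkouts
```

One hundred client checkouts cost only three real connections, which is exactly the strain reduction the pooler provides.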

User-Initiated Point-in-Time Recovery (PITR)

Docs: Point-In-Time Recovery 

Recover with confidence using user-initiated point-in-time recovery. While Timescale has internally maintained continuous incremental backups since launch, users can now initiate a recovery fork on demand themselves. All customers can fork their service to any point in the last 72 hours, while Enterprise Tier customers can go back 14 days. The original service stays untouched, so no data created since the recovery point is lost. Undo the unwanted and rewind effortlessly with Timescale!

Week 2: Dynamic Infrastructure Week

Usage-Based Storage

Blog: Navigating a Usage-Based Model for PostgreSQL: Tips to Reduce Your Database Size

Most managed database services, such as AWS RDS, charge by the amount of disk you provision for your database and make it difficult to downscale. A few months ago, Timescale announced “usage-based storage pricing” for its time-series services, so you only pay for what you use.

Terraform Provider

Blog: Create Timescale Services With the Timescale Terraform Provider

The Timescale Terraform Provider lets developers declaratively describe their desired cloud resource configuration; this “Infrastructure as Code” approach ensures that the running services match that config. With the 1.0 release of the Timescale Terraform Provider and its General Availability, users can more seamlessly deploy and manage the full lifecycle of their cloud database infrastructure.
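To give a feel for what that declarative config looks like, here is a sketch of a Terraform file provisioning a service. The resource and attribute names below are illustrative; consult the provider documentation for the exact schema:

```hcl
terraform {
  required_providers {
    timescale = {
      source = "timescale/timescale"
    }
  }
}

# Illustrative only: attribute names may differ from the actual provider schema.
resource "timescale_service" "example" {
  name        = "my-service"
  milli_cpu   = 500
  memory_gb   = 2
  region_code = "us-east-1"
}
```

Running `terraform apply` reconciles what’s running in Timescale with what’s written here, which is what makes the approach repeatable and reviewable.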

More Regions on Cloud

Docs: Available Regions

Timescale added support for additional regions earlier this quarter, with Ohio and Singapore being the latest additions. Deploy your databases throughout the world.

Cloudflare Partnership

Blog: Timescale x Cloudflare: Time Series From the Edge

We partnered with Cloudflare to leverage its Hyperdrive product, transforming regional databases into globally distributed ones (thanks to Cloudflare’s edge caching and connection pooling). Employing our time-series database services, you can easily deploy serverless edge apps using Cloudflare Workers and Timescale, offering speed, scalability, and efficient data processing for remote Workers querying and ingesting data.

PostgreSQL Extensions

Blog: Top 8 PostgreSQL Extensions You Should Know About

Explore tools like PostGIS for spatial data handling, pg_stat_statements for query statistics, and pgcrypto for cryptographic operations. Uncover the utility of pgvector for vector operations, hstore for key-value storage, and pgpcre for advanced regular expressions. And, of course, TimescaleDB, which scales PostgreSQL for time-series and analytical workloads. Learn how to easily manage your extensions via our cloud UI (and request new ones!).

AI Bonus: Creating and Storing Embeddings

Blog: A Complete Guide to Creating and Storing Embeddings for PostgreSQL Data

Vector embeddings, which serve as mathematical representations of data, power semantic search, recommendation systems, and generative AI techniques such as Retrieval-Augmented Generation (RAG). The post introduces PgVectorizer, a Timescale library we developed to make managing embeddings simple. PgVectorizer both creates embeddings from your data in PostgreSQL and keeps your relational and embedding data in sync as your data changes.
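To see why embeddings enable semantic search, here is a toy example that ranks documents by cosine similarity to a query vector. The 3-D vectors and document names below are made up for illustration; a real embedding model produces vectors with hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-D "embeddings" of three documents.
documents = {
    "postgres tuning guide":   [0.9, 0.1, 0.0],
    "chocolate cake recipe":   [0.0, 0.2, 0.9],
    "index maintenance tips":  [0.7, 0.5, 0.2],
}
# Made-up embedding of the query "how do I speed up my database?"
query = [0.85, 0.2, 0.05]

best = max(documents, key=lambda d: cosine_similarity(documents[d], query))
print(best)  # → postgres tuning guide
```

The query never mentions “tuning,” yet the nearest vector is the tuning guide: similarity in embedding space stands in for similarity in meaning, which is what a vector database makes fast at scale.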

Week 3: Scaling Postgres Week 

Tiered Storage

Blog: Scaling PostgreSQL for Cheap: Introducing Tiered Storage in Timescale

Tiered Storage, now in General Availability, introduces a multi-tiered storage architecture engineered to enable infinite, low-cost scalability for your time-series and analytical databases in the Timescale platform. You can now store your older, infrequently accessed data in a low-cost storage tier while still being able to transparently query it without ever sacrificing performance for your frequently accessed data. 

Because our low-cost storage tier offers flat-rate pricing—$0.021 per GB/month for data, cheaper than Amazon S3—developers can scale affordably, paying only for what they store. Replicas and forks become even more cost-effective, as the same tiered data can be accessed by each service without additional replication. With Tiered Storage, you can scale PostgreSQL to hundreds of terabytes of data.
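With a flat rate, the cost arithmetic is simple enough to do in a few lines. A quick sketch using the $0.021 per GB/month figure above (the 10 TB data size is just an example):

```python
PRICE_PER_GB_MONTH = 0.021  # flat-rate low-cost tier, USD

tiered_tb = 10                        # example: data moved to the low-cost tier
tiered_gb = tiered_tb * 1024          # TB -> GB
monthly_cost = tiered_gb * PRICE_PER_GB_MONTH

print(f"${monthly_cost:.2f}/month")   # → $215.04/month
```

Ten terabytes of tiered historical data comes to roughly $215 per month, and because forks and replicas share the same tiered data, that cost is not multiplied per service.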

Timescale offers a full storage lifecycle with its Tiered Storage, from row- and columnar format in high-performance storage to low-cost bottomless storage for less frequently accessed data.

Columnar Compression

Blog: Building Columnar Compression for Large PostgreSQL Databases

Timescale scales workloads through its hybrid row-columnar store. Very recent data is kept in row format to enable high ingest rates and fast point queries, while slightly older data is converted to compressed columnar storage to dramatically reduce storage overhead and boost query performance. Timescale's compression achieves rates of over 95%, shrinking large PostgreSQL tables and delivering significant cost savings.

In the post, we take a deep dive into its technical architecture, as well as the improvements made to Timescale’s columnar compression over the past several years, including support for easily and efficiently modifying both its schema and data.
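One reason columnar layouts compress so well is that neighboring values in a column tend to be similar. Delta encoding, one of the building blocks of this style of compression, replaces each value with its difference from the previous one. A simplified sketch (TimescaleDB's actual implementation layers further encodings on top of ideas like this, in C):

```python
def delta_encode(values):
    """Keep the first value, then store successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

# Regularly sampled timestamps: one reading every 10 seconds.
timestamps = [1700000000, 1700000010, 1700000020, 1700000030, 1700000040]

deltas = delta_encode(timestamps)
print(deltas)  # → [1700000000, 10, 10, 10, 10]
# After the first value, every delta is identical, so a run-length or
# delta-of-delta pass can shrink this column to almost nothing.
```

Time-series timestamps are the best case for this trick, which is a big part of why compression rates above 95% are achievable on such data.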

Vectorized Query Execution

Blog: Teaching Postgres New Tricks: SIMD Vectorization for Faster Analytical Queries

More than a year in the works, the latest releases of TimescaleDB introduce vectorized query execution for our columnar data in PostgreSQL. Employing SIMD instructions, we’ve vectorized various stages of the execution pipeline, including decompression, filters, expressions, and aggregations. This enhancement makes analytical queries up to an order of magnitude faster, enabling PostgreSQL to offer the best of both analytical and transactional capabilities.

Performance Improvements

Blog: 8 Performance Improvements in Recent TimescaleDB Releases for Faster Query Analytics

Timescale accelerates PostgreSQL for demanding workloads by introducing major capabilities like continuous aggregates, compressed columnar storage, and the newly revealed vectorized query execution.

But a focus on continual improvement, or “kaizen for databases,” also brings a steady stream of smaller wins with each software release. From partial aggregates at the chunk level to optimized chunk exclusion and lighter locks during continuous aggregate refreshes, these improvements matter, adding up to a faster database and a better developer experience.

AI Bonus: LangChain Templates

GitHub:

The Timescale Vector team shared two reference architectures to help developers build a production-ready Retrieval Augmented Generation (RAG) application more easily using LangChain and PostgreSQL as your vector database. The first template allows you to use Timescale Vector for temporal RAG with self-querying, while the second template enables conversational retrieval, one of the most popular large language model (LLM) use cases.

Conclusion

It’s been an exciting three weeks of back-to-back launches here at Timescale. We’ve already received extremely positive feedback from users, and we hope you find these new capabilities similarly powerful.

Powerful in a way that enables you to scale and stay on your PostgreSQL journey with Timescale: from time series and analytics to AI vector embeddings, and now for all PostgreSQL workloads.

And if it’s just a start for you—you can give Timescale a whirl and create a free account today.

Let’s go!
🚀✨