Open-source PostgreSQL stack for AI applications

Build RAG, search, and AI agents in the cloud with PostgreSQL and purpose-built extensions for AI: pgvector, pgvectorscale, and pgai.

Hero illustration

Purpose-built performance for AI with pgvectorscale

Lower latency and higher query throughput—all with full SQL.

Performance graph

A simple stack for AI applications

With one database for your application's metadata, vector embeddings, and time-series data, you can say goodbye to the operational complexity of data duplication, synchronization, and keeping track of updates across multiple systems.

Lower latency search. Happier end users.

Compared to Pinecone's storage-optimized index (s1), PostgreSQL with pgvector and pgvectorscale achieves 28x lower p95 latency and 16x higher query throughput for approximate nearest neighbor queries at 99% recall, all at 75% lower monthly cost.

Read more

A vector database with full SQL

Write full SQL relational queries incorporating vector embeddings, complete with WHERE clauses, ORDER BY, and other PostgreSQL features. Leverage all PostgreSQL data types to store and filter richer metadata. Easily JOIN vector search results with relevant user metadata for more contextually relevant responses.
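For example, a hybrid query like the one described above can be sketched as follows. This is a minimal illustration, not a fixed schema: the `documents` and `users` tables, their columns, and the helper function are all assumptions for the sake of the example; `<=>` is pgvector's cosine distance operator.

```python
def to_vector_literal(embedding):
    """Format a Python list of floats as a pgvector input literal, e.g. '[0.1,0.2,0.3]'.

    pgvector accepts vectors as bracketed text, so the literal can be passed
    as an ordinary query parameter from any PostgreSQL client library.
    """
    return "[" + ",".join(str(x) for x in embedding) + "]"


# Hypothetical hybrid query: vector similarity search combined with a WHERE
# filter, a JOIN on user metadata, ORDER BY, and LIMIT -- all plain SQL.
HYBRID_SEARCH_SQL = """
SELECT d.id, d.title, u.plan,
       d.embedding <=> %(q)s AS distance   -- cosine distance to the query vector
FROM documents d
JOIN users u ON u.id = d.owner_id          -- enrich results with user metadata
WHERE d.published_at > now() - interval '30 days'
ORDER BY d.embedding <=> %(q)s
LIMIT 5;
"""

# With a client such as psycopg, this would run as:
#   cur.execute(HYBRID_SEARCH_SQL, {"q": to_vector_literal(query_embedding)})
```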

Having embedding functions directly within the database is a huge bonus. Previously, updating our saved embeddings was a tedious task, but now, with everything integrated, it promises to be much simpler and more efficient.

Web Begole

CTO at MarketReader

Pgvectorscale and pgai are great additions to the PostgreSQL AI ecosystem. The introduction of Statistical Binary Quantization promises lightning performance for vector search and will be valuable as we scale our vector workload.

John McBride

Head of Infrastructure at OpenSauced

The simplicity and scalability of Timescale’s integrated approach to use Postgres as a vector database allows us to bring an AI product to market much faster.

Nicolas Bream

CEO of PolyPerception

Scale from POC to production

One platform for your AI application

Timescale’s enhanced PostgreSQL data platform is the home for your application's vector, relational and time-series data.

Flexible and transparent pricing

No “pay per query” or “pay per index”. Decoupled compute and storage for flexible resource scaling as you grow. Usage-based storage and dynamic compute (coming soon), so you pay only for what you actually use.

Ready to scale from day one

Push to prod with the confidence of automatic backups, failover, and high availability. Use read replicas to scale query load. One-click database forking for testing new embedding and LLM models. Consultative support to guide you as you grow, at no extra cost.

Enterprise-grade security and data privacy

SOC2 Type II and GDPR compliance. Data encryption at rest and in motion. VPC peering for your Amazon VPC. Secure backups. Multi-factor authentication.

Works with everything in your AI stack

Access your PostgreSQL database any way you want. Go with a Python client, integrations in your favorite LLM frameworks, or through PostgreSQL libraries, ORMs, connectors, and tools.

Seamless integration with OpenAI, Llama, Anthropic, Cohere, Hugging Face, LangChain, LlamaIndex, Vercel, Chainlit, Gradio, Streamlit, Modal, Python, TypeScript...

Resources