PostgreSQL indexes can be the difference between a blazing-fast query and a frustrating performance bottleneck. But how can you tell if your indexes are helping—or hurting—your database?
In this post, we’ll explore practical strategies to evaluate and optimize your PostgreSQL indexes using tools like pg_stat_user_indexes and pg_stat_statements. We’ll also share best practices and how Timescale simplifies index management for time-series and partitioned workloads.
Looking to improve your PostgreSQL performance? Check out our performance tuning series, including our index-focused article.
Indexes in PostgreSQL aren’t "set and forget." As your data and query patterns evolve, your indexing strategy must evolve too. Outdated, underused, or excessive indexes can bring all sorts of issues:
Slowing down write operations
Wasting disk space
Misleading the query planner
Leading to suboptimal query plans
Let’s walk through how to spot problematic indexes and fix them before they impact your app.
To identify indexes that aren’t being used effectively, the pg_stat_user_indexes view is your best friend. The idx_scan column shows how often each index has been scanned.
Here’s a query to find low-usage indexes:
SELECT
    relname AS table_name,
    indexrelname AS index_name,
    pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
    idx_scan AS index_scan_count
FROM
    pg_stat_user_indexes
WHERE
    idx_scan < 50 -- Adjust based on your workload and retention window
ORDER BY
    index_scan_count ASC,
    pg_relation_size(indexrelid) DESC;
In the example above, we're looking for indexes with fewer than 50 scans (an arbitrary threshold for this example; adjust it based on your application's usage patterns and the time elapsed since the last statistics reset).
Tip: Don't immediately drop indexes based on this query alone. Some may be used infrequently (e.g., by weekly reports) but are still critical.
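Since idx_scan counts accumulate from the last statistics reset, it helps to know when that reset happened before judging an index. You can check with the standard pg_stat_database view:
SELECT stats_reset
FROM pg_stat_database
WHERE datname = current_database();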
If a query is reading a lot of data from disk, it might benefit from better indexing. That’s where pg_stat_statements comes in. Two key metrics to watch:
shared_blks_read: how many blocks PostgreSQL had to read from disk
blk_read_time: time (in ms) spent reading those blocks
A high value in either can signal that the planner isn't using indexes efficiently.
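As a starting point, here's a sketch of how you might surface the worst offenders. It assumes the pg_stat_statements extension is installed; note that blk_read_time is only populated when track_io_timing is enabled, and recent PostgreSQL versions rename the timing columns:
SELECT
    query,
    shared_blks_read,
    blk_read_time
FROM pg_stat_statements
ORDER BY shared_blks_read DESC
LIMIT 10;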
Read how you can optimize array queries with PostgreSQL GIN indexes.
Suppose you run a daily report:
SELECT * FROM purchases WHERE purchase_date > NOW() - INTERVAL '1 day';
From pg_stat_statements, you see:
| Query | shared_blks_read | blk_read_time |
| SELECT * FROM purchases WHERE purchase_date > ... | 5,000 | 300 ms |
This suggests PostgreSQL is reading a lot of disk blocks—slowly. Upon inspection, you realize there’s no index on purchase_date. Creating one might help:
CREATE INDEX idx_purchase_date ON purchases(purchase_date);
After creation and some usage, re-check pg_stat_statements—you should see those values drop, confirming improved performance.
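To keep the before/after comparison clean, you can clear the accumulated statistics first. Note that pg_stat_statements_reset() discards stats for all statements, so use it deliberately:
SELECT pg_stat_statements_reset();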
PostgreSQL’s planner relies on server parameters to decide whether (and how) to use indexes. Here’s a quick rundown of the most important ones:
work_mem: Controls memory per operation (e.g., sort, hash). Too low, and operations spill to disk.
shared_buffers: Caches table and index pages in RAM. A higher value reduces disk I/O.
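As a sketch, here's how you might inspect and adjust both; the values below are illustrative, not recommendations:
-- Inspect the current settings
SHOW work_mem;
SHOW shared_buffers;
-- Illustrative values; size these for your hardware and workload
ALTER SYSTEM SET work_mem = '64MB';
ALTER SYSTEM SET shared_buffers = '4GB';
-- work_mem picks up the change on reload; shared_buffers requires a restart
SELECT pg_reload_conf();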
Check out Part II of our series on PostgreSQL performance tuning for more detailed configuration recommendations for both of these parameters.
effective_cache_size: This doesn’t allocate memory—it estimates how much is available for caching. A higher value makes the planner more likely to use indexes.
🧠 Best Practice: Set it to 50–70% of your system's total memory.
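For instance, on a hypothetical server with 16 GB of RAM, following that guideline might look like this:
ALTER SYSTEM SET effective_cache_size = '10GB';
SELECT pg_reload_conf();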
These parameters influence the planner’s cost estimates:
random_page_cost: Default is 4 (ideal for spinning disks). On SSDs, set it to ~1.1.
seq_page_cost: Usually left at 1.0.
Reducing random_page_cost makes index scans more attractive to the planner.
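You can trial the change in a single session before persisting it. A minimal sketch, reusing the daily report query from earlier:
-- Session-local change: does the planner now prefer the index?
SET random_page_cost = 1.1;
EXPLAIN SELECT * FROM purchases WHERE purchase_date > NOW() - INTERVAL '1 day';
-- If plans improve, persist it
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();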
cpu_index_tuple_cost: Determines the CPU cost of processing an index tuple (default: 0.005). Generally best left unchanged unless you're doing advanced tuning with benchmarking.
✅ Index only what you need
Avoid over-indexing—it slows inserts/updates and increases storage.
✅ Avoid indexes on low-cardinality columns
Columns like gender or booleans often don’t benefit from indexes—use partial or multicolumn indexes instead.
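For example (the orders table and its status column are hypothetical), a partial index covers only the rows your queries actually filter on:
-- Indexing the low-cardinality status column alone would help little;
-- a partial index on the rows you query stays small and selective
CREATE INDEX idx_orders_pending ON orders (created_at)
WHERE status = 'pending';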
✅ Update statistics after changing indexes
Run ANALYZE after adding or dropping indexes so the planner has accurate data.
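Using the purchases table from earlier:
ANALYZE purchases; -- refresh planner statistics for the table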
✅ Rebuild indexes concurrently when needed
Use REINDEX CONCURRENTLY to avoid locking tables during index rebuilds.
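For example, rebuilding the index created earlier (REINDEX ... CONCURRENTLY requires PostgreSQL 12 or later):
REINDEX INDEX CONCURRENTLY idx_purchase_date;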
✅ Benchmark before and after
Use EXPLAIN ANALYZE to confirm that indexes are actually improving query plans.
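Sticking with the daily report example:
EXPLAIN ANALYZE
SELECT * FROM purchases WHERE purchase_date > NOW() - INTERVAL '1 day';
-- Look for an Index Scan on idx_purchase_date instead of a Seq Scan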
If you’re using Timescale, good news: much of this work is handled automatically.
Hypertables automatically create indexes on the time column.
CREATE INDEX applies to each chunk, optimizing for partitioned workloads.
Composite indexes (e.g., ON (time, device_id)) are supported and encouraged for time-series queries (see the sketch after this list).
Sparse data? Use partial indexes like WHERE column IS NOT NULL for leaner indexes.
Data retention: Dropping old chunks (and their indexes) is fast and efficient.
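Here's a minimal sketch putting those pieces together; the metrics table and its columns are illustrative, and it assumes the timescaledb extension is already installed:
-- A hypertable partitioned on the time column; Timescale indexes time automatically
CREATE TABLE metrics (
    time      TIMESTAMPTZ NOT NULL,
    device_id INT,
    value     DOUBLE PRECISION
);
SELECT create_hypertable('metrics', 'time');
-- A composite index for common time-series access patterns, applied to every chunk
CREATE INDEX idx_metrics_device_time ON metrics (device_id, time DESC);
-- A partial index that skips NULLs for sparse data
CREATE INDEX idx_metrics_value ON metrics (value) WHERE value IS NOT NULL;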
You can inspect indexes visually in the Timescale Cloud console and tweak performance without getting lost in the weeds.
Curious how hypertables preserve your indexes? Check out this post for a deep dive.
PostgreSQL indexes are powerful—but only if used and maintained properly. With the right monitoring tools (pg_stat_user_indexes, pg_stat_statements), server tuning, and indexing strategy, you can dramatically improve query performance.
And with Timescale, many of these complexities are abstracted away—so you can focus on building, not babysitting your database. Learn more about how to improve PostgreSQL performance: