Connection Issue: "Too Many Clients Already" Error While Inserting Rows

Hello Team,

I am experiencing an issue with my TimescaleDB setup while trying to store time-series data. When my application attempts to write rows to the database, it intermittently fails with the following error:

Npgsql.PostgresException (0x80004005): 53300: sorry, too many clients already
at Npgsql.Internal.NpgsqlConnector.Open(NpgsqlTimeout timeout, Boolean async, CancellationToken cancellationToken)
at Npgsql.ConnectorPool.OpenNewConnector(NpgsqlConnection conn, NpgsqlTimeout timeout, Boolean async, CancellationToken cancellationToken)
at Npgsql.ConnectorPool.g__RentAsync|28_0(NpgsqlConnection conn, NpgsqlTimeout timeout, Boolean async, CancellationToken cancellationToken)
at Npgsql.NpgsqlConnection.g__OpenAsync|45_0(Boolean async, CancellationToken cancellationToken)

Environment:

TimescaleDB version: 2.11.2
PostgreSQL version: 15.7
max_connections: 100

The application uses a connection pool provided by Npgsql.
Connection pooling is enabled with default settings.

Observations:

I checked the pg_stat_activity view and found only three active connections at the time of the error.
This suggests the error might not directly result from reaching the max_connections limit.

I am considering whether this could be related to issues with the connection pool or shared buffers.

Questions:
Could this error be caused by mismanagement of the connection pool or shared buffer settings in PostgreSQL?
Are there any metrics or diagnostic queries I can run to confirm whether shared buffers or connection pool exhaustion is contributing to the problem?
Any guidance or recommendations would be greatly appreciated!

Thank you in advance for your help.

@jonatasdp: The unusual part is that the query consistently shows the number of active clients as 3, and including idle connections, it totals 13.

SELECT COUNT(*) AS active_connections  
FROM pg_stat_activity;

I initially suspected this might be related to system shared buffers or PostgreSQL shared buffers, but I’m not certain if my observation is accurate.

Thanks for reaching out about this connection pooling issue! This is a classic case where the error message can be a bit misleading. Let’s break this down systematically and get to the bottom of it.

First, let’s confirm your active connections with a couple of more detailed queries:

SELECT state, count(*) 
FROM pg_stat_activity 
GROUP BY state;

SELECT datname, usename, state, wait_event_type, wait_event 
FROM pg_stat_activity 
WHERE state IS NOT NULL;

The “too many clients” error despite only seeing three active connections in pg_stat_activity usually points to one of these connection pool lifecycle issues:

  • Connections might not be properly released back to the pool (a connection leak; see the sketch below)
  • The pool might be configured with a size that’s too small for your workload
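
To make the first scenario concrete, here is a minimal sketch of how an insert loop can leak pooled connections when a connection is opened but never disposed (the readings table and method name are hypothetical):

// Hypothetical write method; the readings table is illustrative only.
public async Task WriteReadingAsync(NpgsqlDataSource dataSource, double value)
{
    // LEAK: without await using, the connection is never returned to the pool and
    // the underlying server connection stays open; across several processes or
    // data sources these leaked connections pile up toward max_connections.
    // var conn = await dataSource.OpenConnectionAsync();

    // Correct: await using returns the connection to the pool even when an exception is thrown.
    await using var conn = await dataSource.OpenConnectionAsync();
    await using var cmd = new NpgsqlCommand(
        "INSERT INTO readings (time, value) VALUES (now(), @value)", conn);
    cmd.Parameters.AddWithValue("value", value);
    await cmd.ExecuteNonQueryAsync();
}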

Let’s check your Npgsql connection string settings. You mentioned you’re using defaults, which means:

// Default Npgsql connection string
"Host=localhost;Database=mydb;Username=myuser;Password=mypass;Maximum Pool Size=100;Minimum Pool Size=1"

To better manage this, I’d recommend explicitly setting these parameters:

"Host=localhost;Database=mydb;Username=myuser;Password=mypass;Maximum Pool Size=20;Minimum Pool Size=1;Connection Idle Lifetime=300;Connection Pruning Interval=10"

Diagnostic Queries:

Here’s a query to monitor connection age, transaction age, and state for each backend:
SELECT 
    now() - backend_start as connection_age,
    now() - xact_start as transaction_age,
    usename,
    application_name,
    state,
    query
FROM pg_stat_activity 
WHERE datname = current_database()
ORDER BY connection_age DESC;

To track connection churn:

SELECT count(*), usename, application_name 
FROM pg_stat_activity 
GROUP BY usename, application_name;
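
If you want the application itself to record this over time, a rough sketch (assuming a shared NpgsqlDataSource named dataSource and an ILogger named logger, as in the snippets below) could run the churn query periodically:

// Sketch: log per-user / per-application connection counts from the app.
await using var conn = await dataSource.OpenConnectionAsync();
await using var cmd = new NpgsqlCommand(
    "SELECT count(*), usename, application_name FROM pg_stat_activity GROUP BY usename, application_name",
    conn);
await using var reader = await cmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
    logger.LogInformation("connections={Count} user={User} app={App}",
        reader.GetInt64(0),
        reader.IsDBNull(1) ? "(none)" : reader.GetString(1),   // usename is NULL for background workers
        reader.IsDBNull(2) ? "(none)" : reader.GetString(2));  // application_name can be empty or NULL
}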

Recommendations:

  1. Implement connection pool monitoring in your application:
using (var conn = await dataSource.OpenConnectionAsync())
{
    // Log pool statistics
    var poolStats = dataSource.Statistics;
    logger.LogInformation($"Total: {poolStats.Total}, Idle: {poolStats.Idle}, Busy: {poolStats.Busy}");
}
  2. Consider adding resilience around transient connection failures, for example a retry policy with exponential backoff using Polly (an actual circuit breaker is sketched after this list):
using Polly;
var retryPolicy = Policy
    .Handle<NpgsqlException>()
    .WaitAndRetryAsync(3, retryAttempt => 
        TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
  3. Review your transaction management - ensure you’re properly disposing connections and committing or rolling back transactions:
await using (var conn = await dataSource.OpenConnectionAsync())
await using (var tx = await conn.BeginTransactionAsync())
{
    // Your database operations go here
    await tx.CommitAsync(); // commit the transaction after insert/delete/update
}
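
As mentioned in recommendation 2, an actual circuit breaker (as opposed to the retry policy shown above) stops opening new connections for a while once failures keep happening. A rough sketch with Polly, where the thresholds and InsertRowsAsync are illustrative:

// Break the circuit after 5 consecutive Npgsql failures and stop trying for 30 seconds.
var circuitBreaker = Policy
    .Handle<NpgsqlException>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 5,
        durationOfBreak: TimeSpan.FromSeconds(30));

// Retry and circuit breaker can be combined into one policy.
var resilience = Policy.WrapAsync(retryPolicy, circuitBreaker);
await resilience.ExecuteAsync(() => InsertRowsAsync());   // InsertRowsAsync: your own write method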

With the output of these queries you can determine whether it’s a connection pool configuration issue or whether there are connection leaks in the application code.

Also, for time-series workloads in TimescaleDB, we typically recommend starting with a smaller connection pool (for example, reducing Maximum Pool Size from 100 to 20) and scaling up based on metrics, rather than defaulting to larger pools. This helps prevent connection thrashing while still maintaining good performance for most time-series write patterns.
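
On the write path itself, fewer and larger writes also reduce pressure on the pool. A rough sketch of bulk-loading rows over a single pooled connection with binary COPY (the readings(time, value) hypertable is a hypothetical example; timestamps must be UTC for timestamptz in Npgsql 6+):

// Collect rows in memory and write them in one COPY instead of one INSERT per row.
var batch = new List<(DateTime Time, double Value)>
{
    (DateTime.UtcNow, 21.5),
    (DateTime.UtcNow, 21.7),
};

await using var conn = await dataSource.OpenConnectionAsync();
await using var importer = await conn.BeginBinaryImportAsync(
    "COPY readings (time, value) FROM STDIN (FORMAT BINARY)");

foreach (var (time, value) in batch)
{
    await importer.StartRowAsync();
    await importer.WriteAsync(time, NpgsqlTypes.NpgsqlDbType.TimestampTz);
    await importer.WriteAsync(value, NpgsqlTypes.NpgsqlDbType.Double);
}

await importer.CompleteAsync();   // the COPY is rolled back unless CompleteAsync is called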