PostgreSQL Performance Tuning: From Slow to Sub-Second

Optimize PostgreSQL for production workloads. Covers EXPLAIN ANALYZE, indexing strategies, configuration tuning, connection pooling, and vacuum management.

PostgreSQL can serve queries in under 1ms — or over 60 seconds. The difference is indexing, configuration, and query design. This guide covers the highest-impact optimizations.


Step 1: Diagnose with EXPLAIN ANALYZE

-- Always use EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) for real execution stats
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.order_id, c.name, SUM(oi.quantity * oi.unit_price) AS total
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN order_items oi ON o.order_id = oi.order_id
WHERE o.order_date >= '2025-01-01'
GROUP BY o.order_id, c.name
ORDER BY total DESC
LIMIT 100;

What to Look For

Indicator                      | Problem                    | Fix
Seq Scan on large table        | Missing index              | Create index on filter/join columns
Nested Loop on large sets      | Poor join strategy         | Check statistics, increase work_mem
Sort with high rows            | Missing index for ORDER BY | Create index matching sort order
High Buffers: shared read      | Data not cached            | Increase shared_buffers
Rows Removed by Filter: 999000 | Index not selective        | Create more specific index
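
Before reaching for EXPLAIN, you need to know which queries are worth examining. A sketch using pg_stat_statements (assumes the extension is loaded and created in the database — see Step 3 for general configuration — and uses the PostgreSQL 13+ column names; older versions call these total_time/mean_time):

```sql
-- Top 10 queries by cumulative execution time
SELECT
    calls,
    ROUND(total_exec_time::numeric, 1) AS total_ms,
    ROUND(mean_exec_time::numeric, 2) AS mean_ms,
    LEFT(query, 80) AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Sorting by total_exec_time surfaces both slow one-off queries and fast queries called millions of times; re-sort by mean_exec_time to focus on the former.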

Step 2: Indexing Strategy

-- B-tree index for equality and range queries
CREATE INDEX idx_orders_date ON orders (order_date);

-- Composite index (column order matters — equality-filtered columns
-- first, then range/sort columns)
CREATE INDEX idx_orders_customer_date
ON orders (customer_id, order_date DESC);

-- Partial index (only index what you query)
CREATE INDEX idx_orders_active
ON orders (customer_id, order_date)
WHERE status = 'active';

-- Covering index (avoid table lookup)
CREATE INDEX idx_orders_covering
ON orders (customer_id, order_date)
INCLUDE (total_amount, status);

-- GIN index for full-text search
CREATE INDEX idx_products_search
ON products USING gin(to_tsvector('english', name || ' ' || description));

-- Expression index
CREATE INDEX idx_users_email_lower
ON users (LOWER(email));
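
An expression index is only used when the query repeats the indexed expression exactly; `WHERE email = ...` will not match it. A quick check against the LOWER(email) index above (assumes the users table has an id column):

```sql
-- Should show an Index Scan on idx_users_email_lower rather than a Seq Scan
-- (on a large enough table; the planner may still prefer a Seq Scan on tiny ones)
EXPLAIN (ANALYZE)
SELECT id, email
FROM users
WHERE LOWER(email) = 'alice@example.com';
```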

Index Health Check

-- Find unused indexes (excluding unique indexes, which enforce
-- constraints even when never scanned)
SELECT
    schemaname || '.' || relname AS table,
    indexrelname AS index,
    idx_scan AS times_used,
    pg_size_pretty(pg_relation_size(ui.indexrelid)) AS size
FROM pg_stat_user_indexes ui
JOIN pg_index i ON i.indexrelid = ui.indexrelid
WHERE idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(ui.indexrelid) DESC;

-- Find missing indexes (sequential scans on large tables)
SELECT
    schemaname || '.' || relname AS table,
    seq_scan,
    seq_tup_read,
    idx_scan,
    pg_size_pretty(pg_relation_size(relid)) AS size
FROM pg_stat_user_tables
WHERE seq_scan > 100
  AND pg_relation_size(relid) > 10485760  -- > 10 MB
ORDER BY seq_tup_read DESC;
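
Once the unused-index query above has flagged a candidate (check every replica first — scan statistics are per-server), drop it without blocking writes. The index name here is a hypothetical example:

```sql
-- CONCURRENTLY avoids an exclusive lock on the table, but cannot
-- run inside a transaction block
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_legacy;
```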

Step 3: Configuration Tuning

# postgresql.conf — for a 16 GB RAM, 8 CPU server

# Memory
shared_buffers = 4GB              # 25% of RAM
effective_cache_size = 12GB       # 75% of RAM
work_mem = 64MB                   # Per sort/hash node, per query — multiply by concurrency
maintenance_work_mem = 1GB        # For VACUUM, CREATE INDEX

# WAL
wal_buffers = 64MB
checkpoint_completion_target = 0.9
max_wal_size = 4GB

# Query Planner
random_page_cost = 1.1            # SSD storage (default 4.0 is for HDD)
effective_io_concurrency = 200    # SSD
default_statistics_target = 200   # More accurate query plans

# Connections
max_connections = 200             # Use pgbouncer to multiplex

# Parallel Queries
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
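
The checklist below depends on pg_stat_statements, which is not on by default. Enabling it is a two-step change (the module must be preloaded, which requires a server restart):

```sql
-- In postgresql.conf (restart required):
--   shared_preload_libraries = 'pg_stat_statements'

-- Then, once, in each database you want to monitor:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```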

Step 4: Connection Pooling

# PgBouncer configuration (pgbouncer.ini)
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
pool_mode = transaction         # Best for web apps; session state (SET,
                                # LISTEN/NOTIFY, advisory locks) is not preserved
default_pool_size = 20
max_client_conn = 1000
min_pool_size = 5
reserve_pool_size = 5
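
Once PgBouncer is running, its admin console reports pool health. Connect to the virtual pgbouncer database (e.g. psql -p 6432 -U someadmin pgbouncer, where the user is listed in admin_users) and run its console commands — these look like SQL but are PgBouncer-specific:

```sql
-- cl_active/cl_waiting show client-side load; sv_active/sv_idle show
-- how many real server connections the pool is consuming
SHOW POOLS;

-- Per-database totals: queries, transactions, bytes, average query time
SHOW STATS;
```

A persistently nonzero cl_waiting means clients are queueing for a server connection; consider raising default_pool_size.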

Step 5: VACUUM and Maintenance

-- Check tables needing vacuum
SELECT
    schemaname || '.' || relname AS table,
    n_dead_tup,
    n_live_tup,
    ROUND(n_dead_tup::numeric / NULLIF(n_live_tup, 0) * 100, 1) AS dead_pct,
    last_autovacuum,
    last_autoanalyze
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC;

-- Tune autovacuum for high-write tables
ALTER TABLE orders SET (
    autovacuum_vacuum_threshold = 1000,
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_analyze_threshold = 500,
    autovacuum_analyze_scale_factor = 0.02
);
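
When a table is already badly bloated and autovacuum cannot catch up, a one-off manual pass helps (the table name is from the examples above; VERBOSE prints per-step progress):

```sql
-- Reclaims dead tuples for reuse and refreshes planner statistics.
-- Note: plain VACUUM does not shrink the file on disk; VACUUM FULL
-- does, but it takes an exclusive lock on the table.
VACUUM (ANALYZE, VERBOSE) orders;
```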

Performance Checklist

  • Slow queries identified with pg_stat_statements
  • EXPLAIN ANALYZE run on top 10 slowest queries
  • Indexes created for high-frequency WHERE/JOIN columns
  • Unused indexes removed
  • shared_buffers set to 25% of RAM
  • random_page_cost set to 1.1 for SSD
  • Connection pooler (PgBouncer) deployed
  • Autovacuum tuned for high-write tables
  • pg_stat_statements enabled for query monitoring
  • Regular ANALYZE runs scheduled
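
For the last checklist item, scheduling can live inside the database if the pg_cron extension is available — an assumption; a plain OS cron job running psql -c 'ANALYZE' works just as well:

```sql
-- Requires pg_cron in shared_preload_libraries; the schedule is
-- standard cron syntax (nightly at 03:00)
CREATE EXTENSION IF NOT EXISTS pg_cron;
SELECT cron.schedule('nightly-analyze', '0 3 * * *', 'ANALYZE');
```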

:::note[Source] This guide is derived from operational intelligence at Garnet Grid Consulting. For database optimization, visit garnetgrid.com. :::