
Extensions & Performance Tuning

PostgreSQL's superpower is its extension ecosystem. This guide covers the extensions worth knowing, plus the configuration and monitoring patterns that matter in production.

40 min read · 9 sections
01

pg_stat_statements

The single most important extension for performance monitoring. pg_stat_statements tracks execution statistics for all SQL statements — total time, call count, rows returned, buffer usage. It's how you find your slow queries without guessing.

pg_stat_statements queries (sql)
-- Enable the extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 queries by total time (the biggest optimization targets)
SELECT
  substring(query, 1, 80) AS query,
  calls,
  round(total_exec_time::numeric, 1) AS total_ms,
  round(mean_exec_time::numeric, 1) AS mean_ms,
  rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Top queries by call count (high-frequency queries)
SELECT
  substring(query, 1, 80) AS query,
  calls,
  round(mean_exec_time::numeric, 2) AS mean_ms
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;

-- Queries with worst buffer hit ratio (I/O bound)
SELECT
  substring(query, 1, 80) AS query,
  calls,
  shared_blks_hit,
  shared_blks_read,
  round(shared_blks_hit * 100.0 / nullif(shared_blks_hit + shared_blks_read, 0), 1) AS hit_pct
FROM pg_stat_statements
WHERE shared_blks_hit + shared_blks_read > 100
ORDER BY hit_pct ASC
LIMIT 10;

-- Reset statistics (do periodically to see recent patterns)
SELECT pg_stat_statements_reset();

The Performance Workflow

1. Install pg_stat_statements.
2. Let it collect data for a day.
3. Sort by total_exec_time — these are your biggest optimization targets.
4. Take the top query and run EXPLAIN ANALYZE on it.
5. Add missing indexes or rewrite the query.
6. Repeat.

This systematic approach beats random guessing every time.
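Step 4 of the workflow in practice: EXPLAIN (ANALYZE, BUFFERS) shows the real execution plan, not just the estimate. A sketch, assuming a hypothetical orders table and index name:

```sql
-- Inspect the plan for a query surfaced by pg_stat_statements
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders
WHERE user_id = 42
ORDER BY created_at DESC
LIMIT 20;

-- Things to look for in the output:
--   Seq Scan on a large table        -> missing index?
--   estimated rows far from actual   -> stale statistics? run ANALYZE
--   high "Buffers: shared read"      -> data not cached, I/O bound

-- If a sequential scan appears, a composite index often fixes it:
CREATE INDEX idx_orders_user_created ON orders (user_id, created_at DESC);
```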

02

PostGIS

PostGIS turns PostgreSQL into a full geographic information system. It adds geometry/geography types, spatial indexes (GiST), and hundreds of ST_ functions. It's the gold standard for geospatial data in a relational database.

Type | Coordinates | Distance Calculation | Use When
geometry | Flat plane (projected) | Euclidean (fast, approximate) | Small areas, city-level, when speed matters
geography | Sphere (lat/lon) | Great circle (accurate) | Global data, cross-continent distances

PostGIS examples (sql)
CREATE EXTENSION IF NOT EXISTS postgis;

-- Table with geographic point
CREATE TABLE restaurants (
  id serial PRIMARY KEY,
  name text NOT NULL,
  location geography(POINT, 4326) NOT NULL
);

-- Spatial index
CREATE INDEX idx_restaurants_location ON restaurants USING GIST (location);

-- Insert with lat/lon
INSERT INTO restaurants (name, location)
VALUES ('Pizza Place', ST_MakePoint(-73.9857, 40.7484)::geography);

-- Find restaurants within 2km of a point
SELECT name, ST_Distance(location, ST_MakePoint(-73.99, 40.75)::geography) AS dist_m
FROM restaurants
WHERE ST_DWithin(location, ST_MakePoint(-73.99, 40.75)::geography, 2000)
ORDER BY dist_m;

-- Nearest neighbor (KNN) — uses GiST index efficiently
SELECT name, location <-> ST_MakePoint(-73.99, 40.75)::geography AS dist
FROM restaurants
ORDER BY location <-> ST_MakePoint(-73.99, 40.75)::geography
LIMIT 5;
03

pg_trgm & Full-Text Search

pg_trgm — Fuzzy Text Search

pg_trgm splits text into trigrams (3-character sequences) and uses them for similarity matching. It solves the "LIKE with leading wildcard" problem that B-tree indexes cannot handle.

pg_trgm for fuzzy search (sql)
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- GIN trigram index: enables LIKE '%substring%' with index
CREATE INDEX idx_products_name_trgm ON products USING GIN (name gin_trgm_ops);

-- Now these use the index (impossible with B-tree):
SELECT * FROM products WHERE name LIKE '%wireless%';
SELECT * FROM products WHERE name ILIKE '%bluetooth%';

-- Similarity search (fuzzy matching)
SELECT name, similarity(name, 'postgre') AS sim
FROM products
WHERE similarity(name, 'postgre') > 0.3
ORDER BY sim DESC;

-- Trigram distance operator (for ORDER BY similarity)
-- Note: index-assisted <-> ordering requires a GiST trigram index
-- (gist_trgm_ops); the GIN index above serves LIKE/similarity filters
SELECT name FROM products ORDER BY name <-> 'postgresql' LIMIT 5;
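To see exactly what gets indexed, show_trgm() exposes the trigram set for a string (pg_trgm pads each word with two leading spaces and one trailing space before splitting):

```sql
SELECT show_trgm('cat');
-- → {"  c"," ca","at ","cat"}
```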

Built-in Full-Text Search

PostgreSQL FTS (sql)
-- Add tsvector column (generated)
ALTER TABLE articles ADD COLUMN search_vector tsvector
  GENERATED ALWAYS AS (
    setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(body, '')), 'B')
  ) STORED;

-- GIN index on tsvector
CREATE INDEX idx_articles_fts ON articles USING GIN (search_vector);

-- Search with ranking
SELECT title,
  ts_rank(search_vector, query) AS rank,
  ts_headline('english', body, query, 'MaxFragments=2') AS snippet
FROM articles, to_tsquery('english', 'postgresql & performance') AS query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 10;
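If the search string comes from users, websearch_to_tsquery() (PostgreSQL 11+) is a safer parser than to_tsquery: it understands quoted phrases, OR, and -exclusions, and never raises a syntax error on malformed input. A sketch against the same articles table:

```sql
SELECT title, ts_rank(search_vector, query) AS rank
FROM articles,
  websearch_to_tsquery('english', '"connection pooling" -pgbouncer') AS query
WHERE search_vector @@ query
ORDER BY rank DESC;
```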
Feature | PostgreSQL FTS | Elasticsearch
Setup complexity | Zero — built into PostgreSQL | Separate cluster to manage
Scale | Millions of documents | Billions of documents
Relevance tuning | Basic (ts_rank, weights) | Advanced (BM25, function_score, boosting)
Fuzzy matching | Limited (pg_trgm helps) | Built-in (edit distance, phonetic)
Real-time indexing | Immediate (same transaction) | Near real-time (~1 second refresh)
Best for | Moderate search needs without extra infra | Search as a primary feature at scale
04

TimescaleDB & Citus

TimescaleDB — Time-Series

TimescaleDB extends PostgreSQL for time-series workloads. It automatically partitions data by time (hypertables), provides continuous aggregates (incremental materialized views), and adds columnar compression for old data.

TimescaleDB Use Cases

  • Application metrics and monitoring data
  • Financial tick data with time-based queries
  • Any workload where data is append-mostly and queried by time range
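A minimal sketch of the hypertable workflow, assuming a hypothetical metrics table (the two-argument create_hypertable form shown here is the classic API):

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE metrics (
  time      timestamptz NOT NULL,
  device_id int NOT NULL,
  cpu       double precision
);

-- Convert to a hypertable: automatic time-based partitioning
SELECT create_hypertable('metrics', 'time');

-- Continuous aggregate: an incrementally maintained hourly rollup
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(cpu) AS avg_cpu
FROM metrics
GROUP BY bucket, device_id;
```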

Citus — Distributed PostgreSQL

Citus turns PostgreSQL into a distributed database by sharding tables across multiple nodes. It's for when a single PostgreSQL instance can't handle your write throughput or data volume.

Concept | Description
Distributed Table | Sharded across worker nodes by a distribution column (e.g., tenant_id)
Reference Table | Small table replicated to all nodes (e.g., countries, config)
Coordinator | Routes queries to appropriate shards, aggregates results
Co-location | Tables sharded by the same column are co-located — enables local joins

When to Consider Citus

Consider Citus when: single-node PostgreSQL can't handle your write throughput, your data exceeds what fits on one machine, or you need horizontal scaling for a multi-tenant SaaS (shard by tenant_id). Don't use Citus for: small databases, complex cross-shard joins, or when read replicas would solve your problem.
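The concepts above, sketched against a hypothetical multi-tenant schema (Citus must be in shared_preload_libraries before CREATE EXTENSION works):

```sql
CREATE EXTENSION IF NOT EXISTS citus;

CREATE TABLE tenants (id bigint PRIMARY KEY, name text);
CREATE TABLE orders (
  tenant_id bigint NOT NULL,
  id        bigint NOT NULL,
  total     numeric,
  PRIMARY KEY (tenant_id, id)  -- PK must include the distribution column
);

-- Reference table: replicated in full to every worker node
SELECT create_reference_table('tenants');

-- Distributed table: sharded across workers by tenant_id
SELECT create_distributed_table('orders', 'tenant_id');

-- Filtered by tenant_id, this routes to a single shard
SELECT sum(total) FROM orders WHERE tenant_id = 7;
```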

05

Other Essential Extensions

Extension | Purpose | Key Feature
pgcrypto | Cryptographic functions | gen_random_bytes(), crypt() for password hashing, pgp_sym_encrypt()
pg_partman | Automated partition management | Auto-creates future partitions, manages retention
pgaudit | Audit logging | Logs DML/DDL operations for compliance
pg_repack | Online table compaction | VACUUM FULL's effect without the exclusive lock
auto_explain | Automatic plan logging | Logs EXPLAIN for queries exceeding a threshold
postgres_fdw | Foreign Data Wrapper | Query remote PostgreSQL instances as local tables
uuid-ossp | UUID generation | uuid_generate_v4() (superseded by gen_random_uuid() in PG 13+)
hstore | Key-value pairs in a column | Predates jsonb — mostly replaced by jsonb now

Useful extension examples (sql)
-- auto_explain: log plans for slow queries automatically
-- In postgresql.conf:
-- shared_preload_libraries = 'auto_explain'
-- auto_explain.log_min_duration = '100ms'
-- auto_explain.log_analyze = true

-- pg_repack: online table compaction (no exclusive lock)
-- pg_repack --table=bloated_table --dbname=mydb

-- postgres_fdw: query remote database
CREATE EXTENSION postgres_fdw;
CREATE SERVER remote_db FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'remote-host', dbname 'analytics');
CREATE FOREIGN TABLE remote_events (
  id bigint, type text, created_at timestamptz
) SERVER remote_db OPTIONS (table_name 'events');
-- Now query remote_events as if it were local
06

Essential Configuration

PostgreSQL ships with conservative defaults designed for a laptop. Production servers need tuning. These are the settings that matter most.

Parameter | Default | Production (64 GB RAM, SSD) | Why
shared_buffers | 128 MB | 16 GB (25% of RAM) | PostgreSQL's internal page cache
effective_cache_size | 4 GB | 48 GB (75% of RAM) | Planner hint — total expected cache
work_mem | 4 MB | 16-64 MB | Per-operation sort/hash memory
maintenance_work_mem | 64 MB | 1 GB | VACUUM and index build speed
max_connections | 100 | 100-200 (use PgBouncer) | Keep low — each is a process
random_page_cost | 4.0 | 1.1 (SSD) | Tells planner random I/O is cheap on SSD
effective_io_concurrency | 1 | 200 (SSD) | Parallel I/O requests for bitmap scans
checkpoint_completion_target | 0.9 | 0.9 | Spread checkpoint I/O over 90% of interval
wal_buffers | -1 (auto, ~4 MB) | 64 MB | WAL buffer before disk flush

random_page_cost on SSD

The default random_page_cost = 4.0 tells the planner that random I/O is 4× more expensive than sequential. This was true for spinning disks. On SSDs, random and sequential I/O are nearly equal. Set it to 1.1 on SSD — otherwise the planner over-favors sequential scans and under-uses indexes.
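ALTER SYSTEM persists the change without hand-editing postgresql.conf, and random_page_cost takes effect on reload, no restart needed:

```sql
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();

-- Verify the running value
SHOW random_page_cost;
```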

07

Query Optimization Patterns

Optimization Patterns

  • Use EXISTS instead of COUNT(*) > 0 — stops at first match
  • Use keyset pagination instead of OFFSET for deep pages
  • Avoid functions on indexed columns in WHERE (use expression indexes)
  • Prefer IN over multiple OR conditions
  • Select only needed columns — enables index-only scans
  • Use RETURNING to avoid separate SELECT after INSERT/UPDATE
  • Batch INSERTs with unnest() or COPY instead of row-by-row
Optimization patterns (sql)
-- ❌ Slow: COUNT for existence check
SELECT CASE WHEN COUNT(*) > 0 THEN true ELSE false END
FROM orders WHERE user_id = 42;

-- ✅ Fast: EXISTS stops at first match
SELECT EXISTS(SELECT 1 FROM orders WHERE user_id = 42);

-- ❌ Slow: function on indexed column
SELECT * FROM users WHERE EXTRACT(year FROM created_at) = 2024;

-- ✅ Fast: range query uses index directly
SELECT * FROM users
WHERE created_at >= '2024-01-01' AND created_at < '2025-01-01';

-- ❌ Slow: row-by-row inserts
INSERT INTO events (type, data) VALUES ('click', '{}');
INSERT INTO events (type, data) VALUES ('view', '{}');
-- ... 10,000 times

-- ✅ Fast: batch insert with unnest
INSERT INTO events (type, data)
SELECT * FROM unnest(
  ARRAY['click', 'view', 'scroll'],
  ARRAY['{}', '{}', '{}']::jsonb[]
);

-- ✅ Fastest: COPY for bulk loading
COPY events (type, data) FROM '/path/to/data.csv' WITH CSV;
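Two more patterns from the list above, sketched against hypothetical tables:

```sql
-- ❌ Slow for deep pages: OFFSET scans and discards every skipped row
SELECT id, title FROM articles ORDER BY id LIMIT 20 OFFSET 100000;

-- ✅ Keyset pagination: seek directly past the last id seen
SELECT id, title FROM articles
WHERE id > 100020   -- last id from the previous page
ORDER BY id
LIMIT 20;

-- ✅ RETURNING: fetch generated values without a second round trip
INSERT INTO articles (title) VALUES ('New post')
RETURNING id, created_at;
```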

Monitoring Queries

Production monitoring queries (sql)
-- Currently running queries (find long-running ones)
SELECT pid, now() - query_start AS duration, state, query
FROM pg_stat_activity
WHERE state = 'active' AND query_start < now() - interval '5 seconds'
ORDER BY duration DESC;

-- Table-level stats: sequential scans vs index scans
SELECT relname,
  seq_scan, idx_scan,
  round(idx_scan * 100.0 / nullif(seq_scan + idx_scan, 0), 1) AS idx_pct
FROM pg_stat_user_tables
WHERE seq_scan + idx_scan > 100
ORDER BY seq_scan DESC
LIMIT 10;

-- Lock contention: who is blocked, and by whom (PostgreSQL 9.6+)
-- pg_blocking_pids() covers all lock types (row, relation, transactionid)
SELECT blocked.pid AS blocked_pid,
  blocked.query AS blocked_query,
  blocking.pid AS blocking_pid,
  blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
  ON blocking.pid = ANY(pg_blocking_pids(blocked.pid));
08

Interview Questions

Q: How would you find the slowest queries in a PostgreSQL database?

A: Install pg_stat_statements extension. Query it sorted by total_exec_time DESC to find queries consuming the most total time. Also check mean_exec_time for individually slow queries and calls for high-frequency queries. For each slow query, run EXPLAIN (ANALYZE, BUFFERS) to understand the execution plan and identify missing indexes or bad join strategies.

Q: When would you use PostgreSQL's built-in FTS vs Elasticsearch?

A: Use PostgreSQL FTS when: search is a secondary feature, you have < 10M documents, you want zero additional infrastructure, and basic relevance ranking suffices. Use Elasticsearch when: search is the primary feature, you need advanced relevance tuning (function_score, boosting), you have billions of documents, or you need features like fuzzy matching, synonyms, and complex aggregations at scale.

Q: What's the most important PostgreSQL configuration change for production?

A: Raise shared_buffers from 128 MB to 25% of RAM; the default is absurdly low. Second: set random_page_cost from 4.0 to 1.1 on SSD — otherwise the planner thinks random I/O is 4× more expensive than sequential (true for HDD, false for SSD) and avoids using indexes when it should. Third: add PgBouncer for connection pooling.

Q: How does pg_trgm solve the LIKE '%substring%' problem?

A: B-tree indexes only support prefix matching (LIKE 'prefix%'). pg_trgm splits text into 3-character sequences (trigrams) and builds a GIN index on them. When you query LIKE '%substring%', PostgreSQL looks up the trigrams of 'substring' in the GIN index to find candidate rows, then rechecks with the full LIKE condition. This turns an O(N) sequential scan into an indexed lookup.

09

Common Mistakes

📊

Not installing pg_stat_statements

Trying to optimize queries by guessing which ones are slow, or only looking at application-level metrics.

pg_stat_statements is the first extension you install on any production PostgreSQL. It tells you exactly which queries consume the most time, how often they run, and their buffer hit ratios. Without it, you're optimizing blind.

💾

Using default random_page_cost on SSD

Running on SSD with random_page_cost = 4.0 (the default) — planner avoids index scans because it thinks random I/O is expensive.

Set random_page_cost = 1.1 on SSD storage. This tells the planner that random I/O (index scans) is nearly as cheap as sequential I/O, so it will correctly prefer index scans for selective queries.

🗺️

Adding PostGIS when pg_trgm suffices

Installing PostGIS for simple 'find nearby' queries when all you have is lat/lon columns.

For simple distance calculations with a few thousand points, you can use the earthdistance extension (lighter) or even math on lat/lon columns. PostGIS is for complex spatial operations: polygons, intersections, routing, geofencing. Don't add the complexity unless you need it.
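A sketch of the lighter-weight alternative (earthdistance depends on the cube extension; ll_to_earth takes latitude first, and distances come back in meters):

```sql
CREATE EXTENSION IF NOT EXISTS cube;
CREATE EXTENSION IF NOT EXISTS earthdistance;

-- Great-circle distance between two lat/lon points, in meters
SELECT earth_distance(
  ll_to_earth(40.7484, -73.9857),
  ll_to_earth(40.7500, -73.9900)
) AS meters;
```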

🧠

Not setting effective_cache_size

Leaving effective_cache_size at the default 4 GB on a 64 GB server — planner underestimates how much data is cached.

Set effective_cache_size to 75% of RAM. It's not an allocation — it's a hint telling the planner 'this much data is probably in cache (shared_buffers + OS page cache)'. This makes the planner more willing to use index scans on large tables.