Getting row counts from large PostgreSQL tables using COUNT(*) can be painfully slow—especially when you’re dealing with millions of rows. But there’s a clever trick using PostgreSQL’s internal statistics that can give you blazing fast results. Here’s how to get approximate counts in milliseconds instead of minutes.

Problem: COUNT(*) Is Killing Performance 🐌

When you're working with a large table that has 10M+ rows (say, a user_events table), a simple query like:

SELECT COUNT(*) FROM user_events;

can take several seconds or even minutes because PostgreSQL has to scan every single row in the table. This becomes a major bottleneck for dashboards, analytics, and any user-facing features that need quick estimates.
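One way to confirm the cost is to look at the query plan. On a table this size, PostgreSQL has no choice but a full scan (possibly parallelized, depending on your configuration):

```sql
EXPLAIN SELECT COUNT(*) FROM user_events;
-- The plan typically shows an Aggregate over a Seq Scan
-- (or Parallel Seq Scan) of user_events — every page is read.
```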

The Fast Alternative: Leverage Table Statistics

PostgreSQL maintains internal statistics about your tables. We can tap into these for lightning-fast approximate counts:

SELECT (reltuples / NULLIF(relpages, 0)  -- NULLIF avoids division by zero on a never-analyzed table
        * (pg_relation_size(oid) / 8192))::bigint
FROM pg_class
WHERE oid = 'your_schema.your_table'::regclass;

Breaking Down the Magic Formula

  • pg_class: System catalog with table information
  • reltuples: Estimated number of rows, maintained by VACUUM and ANALYZE (on PostgreSQL 14+, a table that has never been vacuumed or analyzed reports -1)
  • relpages: Number of pages in the table
  • pg_relation_size(oid): Current table size in bytes
  • 8192: PostgreSQL’s default page size (8KB)

The formula calculates: (estimated_rows_per_page) × (current_total_pages)
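To see the raw inputs for your own table, you can query the pieces individually (the table name here is illustrative):

```sql
-- Inspect the statistics the estimate is built from
SELECT reltuples,                                      -- row estimate from last ANALYZE
       relpages,                                       -- page count recorded at last ANALYZE
       pg_relation_size(oid) / 8192 AS current_pages   -- pages right now
FROM pg_class
WHERE oid = 'user_events'::regclass;
```

Scaling the old rows-per-page ratio by the current page count keeps the estimate reasonable even when the table has grown since the last ANALYZE.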

Step 1: Create a Reusable Function

Instead of writing the complex query every time, wrap it in a function:

CREATE OR REPLACE FUNCTION fast_count(table_name text)
RETURNS bigint AS $$
BEGIN
    RETURN (
        SELECT (reltuples / NULLIF(relpages, 0)  -- NULLIF avoids division by zero on a never-analyzed table
                * (pg_relation_size(oid) / 8192))::bigint
        FROM pg_class
        WHERE oid = table_name::regclass
    );
END;
$$ LANGUAGE plpgsql STABLE;

Now you can simply call:

SELECT fast_count('user_events');

Step 2: Keep Statistics Fresh for Better Accuracy

For the best results, ensure your table statistics are up to date:

-- Update statistics for a specific table
ANALYZE user_events;

-- Make auto-analyze trigger more often: the default scale factor is 0.1
-- (re-analyze after ~10% of rows change); 0.02 re-analyzes after ~2%
ALTER TABLE user_events SET (autovacuum_analyze_scale_factor = 0.02);
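If you want to check how fresh the statistics actually are, the pg_stat_user_tables view records when each table was last analyzed:

```sql
-- Check when statistics were last refreshed
SELECT last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'user_events';
```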

Step 3: Smart Hybrid Approach

Combine fast estimates with exact counts based on table size:

SELECT CASE
    WHEN pg_relation_size('user_events') > 100000000 -- 100MB threshold
    THEN fast_count('user_events')
    ELSE (SELECT COUNT(*) FROM user_events)
END as row_count;
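The same decision can be packaged into a single function. The sketch below (smart_count is a hypothetical name, not something shown earlier) uses dynamic SQL for the exact branch; taking the table as a regclass parameter keeps the reference safe to interpolate with format():

```sql
CREATE OR REPLACE FUNCTION smart_count(tbl regclass,
                                       size_threshold bigint DEFAULT 100000000)
RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    IF pg_relation_size(tbl) > size_threshold THEN
        -- Large table: use the statistics-based estimate
        SELECT (reltuples / NULLIF(relpages, 0)
                * (pg_relation_size(oid) / 8192))::bigint
        INTO result
        FROM pg_class
        WHERE oid = tbl;
    ELSE
        -- Small table: an exact COUNT(*) is cheap enough
        EXECUTE format('SELECT count(*) FROM %s', tbl) INTO result;
    END IF;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
```

Calling SELECT smart_count('user_events'); then behaves like the CASE expression above, but callers no longer need to repeat the threshold logic.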

Performance Results

In testing with a table containing 10 million rows:

  • COUNT(*): 8.2 seconds ⏰
  • Statistics query: 0.003 seconds ⚡
  • Accuracy: 95%+ in most cases

That’s a 2,700x performance improvement!

When to Use This Technique

Perfect for:

  • Dashboard metrics and KPIs
  • Pagination estimates
  • Monitoring systems
  • Analytics queries
  • Performance-sensitive applications

Avoid when:

  • Financial calculations requiring exact counts
  • Small tables (< 1000 rows)
  • Critical business logic
  • Tables with frequent DELETEs (stale stats)

Wrap-Up

By leveraging PostgreSQL’s internal statistics, you can:

  • Transform slow counting queries into lightning-fast operations
  • Reduce database load significantly
  • Improve user experience with instant results
  • Scale analytics and monitoring systems effectively

Row counting doesn’t have to be a performance bottleneck anymore.

Need help optimizing your PostgreSQL queries or building high-performance database applications? Reach out to the team at Datum Brain.