It is a frequent complaint that count(*) is so slow on PostgreSQL.
In this article I want to explore the options you have to get your result as fast as possible.
Why is count(*) so slow?
Most people have no trouble understanding that the following is slow:
SELECT count(*) FROM /* complicated query */;
After all, it is a complicated query, and PostgreSQL has to calculate the result before it knows how many rows it will contain.
But many people are appalled if the following is slow:
SELECT count(*) FROM large_table;
Yet if you think again, the above still holds true: PostgreSQL has to calculate the result set before it can count it. Since there is no “magical row count” stored in a table (like it is in MySQL’s MyISAM), the only way to count the rows is to go through them.
So count(*) will normally perform a sequential scan of the table, which can be quite expensive.
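You can see this in the execution plan. As a quick illustration (large_table stands for any big table without a suitable index), the plan will typically show an aggregate on top of a sequential scan:

EXPLAIN SELECT count(*) FROM large_table;
-- expect a plan of the shape: Aggregate over a Seq Scan on large_table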
Is the “*” in count(*) the problem?
The “*” in SELECT * FROM ... is expanded to all columns. Consequently, many people think that using count(*) is inefficient and should be written count(id) or count(1) instead. But the “*” in count(*) is quite different: it just means “row” and is not expanded at all.
Writing count(1) is the same as count(*), but count(id) is something different: it will only count the rows where id IS NOT NULL, since most aggregates ignore NULL values. So there is nothing to be gained by avoiding the “*”.
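You can convince yourself with a small throw-away table (the table count_demo and its values are just for illustration):

CREATE TABLE count_demo(id integer);
INSERT INTO count_demo VALUES (1), (NULL), (3);

SELECT count(*), count(1), count(id) FROM count_demo;
-- count(*) and count(1) both return 3, count(id) returns 2 because of the NULL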
Using an index only scan
It is tempting to scan a small index rather than the whole table to count the number of rows.
However, this is not so simple in PostgreSQL because of its multi-version concurrency control strategy. Each row version (“tuple”) contains the information for which database snapshots it is visible. But this information is not (redundantly) stored in the indexes. So it usually isn’t enough to count the entries in an index, because PostgreSQL has to visit the table entry (“heap tuple”) to make sure an index entry is visible.
To mitigate this problem, PostgreSQL has introduced the visibility map, a data structure that stores whether all tuples in a table block are visible to everybody or not.
If most table blocks are all-visible, an index scan doesn’t need to visit the heap tuples often to determine visibility. Such an index scan is called an “index only scan”, and then it is often faster to count the rows by scanning the index.
Now it is VACUUM that maintains the visibility map, so make sure that autovacuum runs often enough on the table if you want to use a small index to speed up count(*).
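To verify that you are getting an index only scan, look at the execution plan. A sketch, assuming a table mytable with an index on its primary key:

VACUUM mytable;   -- bring the visibility map up to date

EXPLAIN SELECT count(*) FROM mytable;
-- the plan should now contain an "Index Only Scan" node instead of a "Seq Scan"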
Using an aggregate table
I wrote above that PostgreSQL does not store the row count in the table.
Maintaining such a row count would be an overhead that every data modification has to pay, for a benefit that only a few queries can reap. This would be a bad bargain. Moreover, since different queries can see different row versions, the counter would have to be versioned as well.
But there is nothing that keeps you from implementing such a row counter yourself.
Suppose you want to keep track of the number of rows in the table mytable. You can do that as follows:
START TRANSACTION;

CREATE TABLE mytable_count(c bigint);

CREATE FUNCTION mytable_count() RETURNS trigger
   LANGUAGE plpgsql AS
$$BEGIN
   IF TG_OP = 'INSERT' THEN
      UPDATE mytable_count SET c = c + 1;

      RETURN NEW;
   ELSIF TG_OP = 'DELETE' THEN
      UPDATE mytable_count SET c = c - 1;

      RETURN OLD;
   ELSE
      UPDATE mytable_count SET c = 0;

      RETURN NULL;
   END IF;
END;$$;

CREATE TRIGGER mytable_count_mod AFTER INSERT OR DELETE ON mytable
   FOR EACH ROW EXECUTE PROCEDURE mytable_count();

-- TRUNCATE triggers must be FOR EACH STATEMENT
CREATE TRIGGER mytable_count_trunc AFTER TRUNCATE ON mytable
   FOR EACH STATEMENT EXECUTE PROCEDURE mytable_count();

-- initialize the counter table
INSERT INTO mytable_count
   SELECT count(*) FROM mytable;

COMMIT;
We do everything in a single transaction so that no data modifications by concurrent transactions can be “lost” due to race conditions.
This is guaranteed because CREATE TRIGGER locks the table in SHARE ROW EXCLUSIVE mode, which prevents all concurrent modifications.
The downside is of course that all concurrent data modifications have to wait until the SELECT count(*) is done.
This provides us with a really fast alternative to count(*), but at the price of slowing down all data modifications on the table.
Even though this counter table might receive a lot of updates, there is no danger of “table bloat”, because these will all be HOT updates (the counter table has no index, which is one of the prerequisites for HOT).
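Getting the row count is then a simple single-row read:

SELECT c FROM mytable_count;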
Do you really need count(*)?
Sometimes the best solution is to look for an alternative.
Often an approximation is good enough and you don’t need the exact count. In that case you can use the estimate that PostgreSQL uses for query planning:
SELECT reltuples::bigint FROM pg_catalog.pg_class WHERE relname = 'mytable';
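If the table name is not unique across schemas, a regclass cast avoids picking the wrong table:

SELECT reltuples::bigint FROM pg_catalog.pg_class WHERE oid = 'mytable'::regclass;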
This value is updated by both autovacuum and autoanalyze, so it should never be much more than 10% off. You can reduce autovacuum_analyze_scale_factor for that table so that autoanalyze runs more often there.
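This is a per-table storage parameter; for example (the value 0.01 is just an illustration):

ALTER TABLE mytable SET (autovacuum_analyze_scale_factor = 0.01);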
Estimating query result counts
Up to now, we have investigated how to speed up counting the rows of a table.
But sometimes you want to know how many rows a SELECT statement will return without actually running it.
Obviously the only way to get an exact answer to this is to execute the query. But if an estimate is good enough, you can use PostgreSQL’s optimizer to get it for you.
The following simple function uses dynamic SQL and EXPLAIN to get the execution plan for the query passed as argument and returns the row count estimate:
CREATE FUNCTION row_estimator(query text) RETURNS bigint
   LANGUAGE plpgsql AS
$$DECLARE
   plan jsonb;
BEGIN
   EXECUTE 'EXPLAIN (FORMAT JSON) ' || query INTO plan;

   RETURN (plan->0->'Plan'->>'Plan Rows')::bigint;
END;$$;
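A call could look like this (the query string is just an example and assumes a column id on mytable):

SELECT row_estimator('SELECT * FROM mytable WHERE id < 1000');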
Do not use this function to process untrusted SQL statements, since it is by nature vulnerable to SQL injection.