
Postgres query performance analysis and optimization


Introduction

PostgreSQL is an open-source object-relational DBMS that supports almost every SQL construct. In this tutorial we give you tools to analyze and optimize slow queries. Whenever you need to improve the performance of a database, start by thinking about how the affected tables are used: some tables are mostly written to, others are almost read-only, and there is usually a bias you should know about in advance.

Query Planning

When Postgres receives a query, the first thing it does is parse it and possibly rewrite it; this step is rarely a performance concern. The next thing it does is generate an execution plan. It actually does more than this, but let's keep things simple.

Queries for analysis

To see which plan Postgres generates we use the EXPLAIN command, together with ANALYZE. Here are the descriptions taken from the PostgreSQL documentation:

  • EXPLAIN: Displays the execution plan that the PostgreSQL planner generates for the supplied statement.
  • ANALYZE: Collects statistics about the contents of tables in the database.
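
As a minimal sketch of how the two are combined (the table and filter here are hypothetical):

-- EXPLAIN ANALYZE actually executes the statement, unlike plain EXPLAIN,
-- and reports the plan together with the real execution times
EXPLAIN ANALYZE
SELECT * FROM some_table WHERE some_column = 42;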

Preparing the test environment

We do not recommend executing EXPLAIN ANALYZE queries on a production server: EXPLAIN ANALYZE actually runs the statement, so a slow query puts real load on the machine. Instead, set up a testing Postgres instance with exactly the same database schema. Then take random rows from the affected tables and check the data distribution of each column; knowing the distribution of each column's values lets you populate the testing tables with realistic data.
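
A minimal sketch of this sampling step, assuming Postgres 9.5+ for TABLESAMPLE and using our example table and one of its columns:

-- grab roughly 1% of the rows at random
SELECT * FROM example_table TABLESAMPLE SYSTEM (1);

-- inspect the value distribution of a single column
SELECT column3, count(*) AS occurrences
FROM example_table
GROUP BY column3
ORDER BY occurrences DESC;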

Now that we have our testing environment, we need to configure Postgres to log slow queries. To do this, search postgresql.conf for the log_min_duration_statement key and set it to a positive value such as 300, which is interpreted as milliseconds.

log_min_duration_statement = 300
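
The setting only takes effect once the configuration is reloaded. A sketch, assuming you have superuser access:

SELECT pg_reload_conf();

Alternatively, run pg_ctl reload from the shell or restart the server.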

Now start using your app to find out which queries are slow. We are going to analyze those queries in the next steps.

Here we have an example of a very slow query:

2016-11-15 15:06:10 ART LOG:  duration: 300898.426 ms  statement: SELECT example_table.id AS example_table_id, example_table.identification AS example_table_identification, example_table.column1 AS example_table_column1, example_table.column2 AS example_table_column2, example_table.column4 AS example_table_column4, example_table.column3 AS example_table_column3, example_table.column5 AS example_table_column5, example_table."column6" AS "example_table_column6", example_table.psa AS example_table_psa, example_table.fingerprint AS example_table_fingerprint, example_table.duplicate AS example_table_duplicate
        FROM example_table
        WHERE example_table.column1 <= 444.53736 AND example_table.column1 >= 414.53736 AND example_table.column2 <= 4.0 AND example_table.column2 >= 4.0 AND example_table.column4 <= 1 AND example_table.column4 >= 1 AND example_table.column3 <= 6 AND example_table.column3 >= 6 AND example_table."column6" <= 4.7748 AND example_table."column6" >= 2.7748 AND example_table.column5 = 1

Postgres configuration tuning

If you are using the default Postgres configuration, you should take a look at the configuration file as soon as performance issues appear, since the defaults are meant for the average use case. If you have a very big server, chances are high that you need to change the configuration. We are not going to explain how to improve those settings in this tutorial, but please check our tutorial on how to configure Postgres for performance.
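
As a quick sanity check you can inspect the current values of some of the settings that most often matter; which ones matter for you depends on your workload:

SHOW shared_buffers;
SHOW work_mem;
SHOW effective_cache_size;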

Check vacuum before anything else

Doing this in the testing environment is useless, since vacuum optimizes the internal database structure and a freshly populated test table has nothing to clean up. Check when your production environment was last vacuumed.

VACUUM is the Postgres command that reclaims the storage occupied by dead tuples, i.e. rows that were deleted or obsoleted by updates.

First, check the last time a vacuum was done with this query:

SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_all_tables
WHERE schemaname = 'public';

In my case the last_vacuum column showed that it was done several months ago. My use case is read-heavy: once the table has been populated, no more inserts or updates are done. Just in case, I ran a vacuum:

VACUUM ANALYZE table_name;

You can configure autovacuum per table; however, keep in mind that the vacuum process can slow your database down while it is running. In our example we used 10000 rows as the threshold, which means the table is auto-vacuumed (and auto-analyzed) after 10000 inserts, updates, or deletes.

ALTER TABLE table_name
SET (autovacuum_vacuum_scale_factor = 0.0);

ALTER TABLE table_name
SET (autovacuum_vacuum_threshold = 10000);

ALTER TABLE table_name
SET (autovacuum_analyze_scale_factor = 0.0);

ALTER TABLE table_name
SET (autovacuum_analyze_threshold = 10000);

If you can afford to lock the table, we recommend doing a VACUUM FULL combined with an aggressive autovacuum setting. Keep in mind that VACUUM FULL rewrites the whole table and holds an exclusive lock while it runs.
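
For our example table the command would be (note that VACUUM cannot run inside a transaction block):

VACUUM FULL ANALYZE example_table;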

Query Analysis example

Our table schema is the following:

                                      Table "public.example_table"
         Column         |       Type        |                       Modifiers
------------------------+-------------------+-------------------------------------------------------
 id                     | integer           | not null default nextval('example_table_id_seq'::regclass)
 identification         | character varying |
 column1                | double precision  |
 column2                | integer           |
 column4                | integer           |
 column3                | integer           |
 column5                | integer           |
 column6                | double precision  |
 fingerprint            | character varying |
 duplicate              | boolean           |
Indexes:
    "example_table_pkey" PRIMARY KEY, btree (id)
    "example_table_identification_key" UNIQUE CONSTRAINT, btree (identification)
    "example_table_identification_idx" btree (identification)
    "multi_index" btree (column1, column2, column4, column3, "column6", column5)
Referenced by:
    TABLE "history" CONSTRAINT "history_id_fkey" FOREIGN KEY (decoy_id) REFERENCES example_table(id)
    TABLE "result" CONSTRAINT "result_id_fkey" FOREIGN KEY (decoy_id) REFERENCES example_table(id)


Using the slow queries in our log, we are going to use EXPLAIN ANALYZE to diagnose the possible problems. Our table was populated with 100 million rows following the same data distribution as the production environment.

EXPLAIN ANALYZE SELECT example_table.id AS example_table_id, example_table.identification AS example_table_identification, example_table.column1 AS example_table_column1, example_table.column2 AS example_table_column2, example_table.column4 AS example_table_column4, example_table.column3 AS example_table_column3, example_table.column5 AS example_table_column5, example_table."column6" AS "example_table_column6", example_table.psa AS example_table_psa, example_table.fingerprint AS example_table_fingerprint, example_table.duplicate AS example_table_duplicate
        FROM example_table
        WHERE example_table.column1 <= 284.3018 AND example_table.column1 >= 254.3018 AND example_table.column2 <= 5.0 AND example_table.column2 >= 5.0 AND example_table.column4 <= 3 AND example_table.column4 >= 3 AND example_table.column3 <= 5 AND example_table.column3 >= 5 AND example_table."column6" <= 2.4414 AND example_table."column6" >= 0.4414 AND example_table.column5 = 0;

The following is the EXPLAIN ANALYZE result:

Index Scan using multi_index on example_table  (cost=0.57..664243.33 rows=98 width=212) (actual time=0.943..387378.003 rows=5685 loops=1)
   Index Cond: ((column1 <= 284.3018::double precision) AND (column1 >= 254.3018::double precision) AND (column4 <= 3) AND (column4 >= 3) AND (column3 <= 5) AND (column3 >= 5) AND ("column6" <= 2.4414::double precision) AND ("column6" >= 0.4414::double precision) AND (column5 = 0))
   Filter: (((column2)::numeric <= 5.0) AND ((column2)::numeric >= 5.0))
   Rows Removed by Filter: 27429
 Total runtime: 387403.832 ms
(5 rows)

To put these numbers in context, execute a count on the table:

SELECT count(*) FROM example_table;

As we can see, Postgres is using multi_index, as we expected; however, the execution time is totally unacceptable. Since we got an index scan, we also executed a VACUUM FULL. And since we use equality on column5 and column3, let's try adding partial indexes on those columns.

In particular, column5 partitions our table in two, which is not the case for column3: there we usually execute queries with other values as well, such as 2 and 1.

-- one partial index per column5 value, restricted to the column3 values we query;
-- the indexed columns cover the remaining range conditions of the slow query
CREATE INDEX column3_and_column5_0_idx ON example_table (column1, column2, column4, "column6")
WHERE column5 = 0 AND (column3 = 1 OR column3 = 2 OR column3 = 5);

CREATE INDEX column3_and_column5_1_idx ON example_table (column1, column2, column4, "column6")
WHERE column5 = 1 AND (column3 = 1 OR column3 = 2 OR column3 = 5);

Now this is the output of EXPLAIN ANALYZE:

Index Scan using multi_index on example_table (cost=0.56..92997.23 rows=25 width=211) (actual time=1.023..231.494 rows=14626 loops=1)
   Index Cond: ((column1 <= 284.3018::double precision) AND (column1 >= 254.3018::double precision) AND (column4 <= 3) AND (column4 >= 3) AND (column3 <= 5) AND (column3 >= 5) AND ("column6" <= 2.4414::double precision) AND ("column6" >= 0.4414::double precision) AND (column5 = 0))
   Filter: (((column2)::numeric <= 5.0) AND ((column2)::numeric >= 5.0))
   Rows Removed by Filter: 62943
 Total runtime: 232.013 ms