Analyze queries with EXPLAIN
This section describes how to optimize queries using YSQL's EXPLAIN and EXPLAIN ANALYZE statements.
The EXPLAIN statement
Using the EXPLAIN statement, you can obtain the query execution plan generated by YSQL for a given SQL statement. In addition to the plan, EXPLAIN returns the following information:
- The startup cost, which represents the estimated cost incurred before the first row is returned.
- The total cost to run the query to completion.
You can use the EXPLAIN statement in conjunction with SELECT, INSERT, UPDATE, DELETE, VALUES, and EXECUTE statements. The EXPLAIN statement has the following syntax:
EXPLAIN [ ( option [, ...] ) ] sql_statement;
The options and their values are described in the following table. The most important option is ANALYZE.
Option | Value | Description |
---|---|---|
ANALYZE | boolean | Executes sql_statement (as opposed to just creating the plan, as EXPLAIN without ANALYZE does) and returns additional run-time statistics, such as the time spent in each plan node, the number of rows processed, and the peak memory usage during query execution. The output of sql_statement is discarded. To analyze a data-modifying statement (such as INSERT, UPDATE, or DELETE) without affecting the data, wrap EXPLAIN ANALYZE in a transaction using the following syntax: BEGIN; EXPLAIN ANALYZE sql_statement; ROLLBACK; |
VERBOSE | boolean | Displays detailed information about the query plan. The default value is FALSE. |
COSTS | boolean | Provides the estimated initial and total costs of each plan node. In addition, estimates the number of rows and the width of each row in the query plan. The default value is TRUE. |
BUFFERS | boolean | Provides information about the most input-output intensive parts of the query. The default value is FALSE. You can only use this option when ANALYZE is set to TRUE. |
TIMING | boolean | Provides information about the actual startup time and the time spent in each node of the output. The default value is TRUE. You can only use this option when ANALYZE is set to TRUE. |
SUMMARY | boolean | Provides additional information, such as the total time, after the query plan. Defaults to TRUE when ANALYZE is set to TRUE, and to FALSE otherwise. |
FORMAT | { TEXT \| XML \| JSON \| YAML } | Allows you to define the query plan output format. The default value is TEXT. |
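You can combine several options in a single parenthesized list. The following statements are a minimal sketch of the option syntax; the table my_table is hypothetical and used only to illustrate the form of the command:

-- my_table is a hypothetical table; suppress cost estimates and request machine-readable output.
EXPLAIN (COSTS FALSE, FORMAT JSON) SELECT * FROM my_table WHERE id = 1;

-- BUFFERS and TIMING require ANALYZE; this statement actually executes the query.
EXPLAIN (ANALYZE TRUE, BUFFERS TRUE, TIMING FALSE) SELECT * FROM my_table WHERE id = 1;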
Examples
Before you start
The examples will run on any YugabyteDB universe.
To create a universe, see Set up YugabyteDB universe.
Typically, you start by creating a table in YugabyteDB and inserting rows into it.
To create a table called employees, execute the following:
yugabyte=# CREATE TABLE employees(k1 int, k2 int, v1 int, v2 text, PRIMARY KEY (k1, k2));
To insert table rows, execute the following:
yugabyte=# INSERT INTO employees (k1, k2, v1, v2)
VALUES (1, 2.0, 3, 'a'), (2, 3.0, 4, 'b'), (3, 4.0, 5, 'c');
To check the query plan for a basic select query, execute the following:
yugabyte=# EXPLAIN SELECT * FROM employees WHERE k1 = 1;
The following output displays the query execution cost estimate:
QUERY PLAN
------------------------------------------------------------------------------
Index Scan using employees_pkey on employees (cost=0.00..15.25 rows=100 width=44)
Index Cond: (k1 = 1)
(2 rows)
To check the execution plan for select with a complex condition that requires filtering, execute the following:
yugabyte=# EXPLAIN SELECT * FROM employees
WHERE k1 = 2 and floor(k2 + 1.5) = v1;
The following output displays the cost estimate based on the filtered result:
QUERY PLAN
------------------------------------------------------------------------------
Index Scan using employees_pkey on employees (cost=0.00..17.75 rows=100 width=44)
Index Cond: (k1 = 2)
Filter: (floor(((k2)::numeric + 1.5)) = (v1)::numeric)
(3 rows)
By enabling the ANALYZE option and wrapping the statement in a transaction to preserve data integrity, you can trigger the query execution, as follows:
BEGIN;
yugabyte=# EXPLAIN ANALYZE SELECT * FROM employees
WHERE k1 = 2 and floor(k2 + 1.5) = v1;
ROLLBACK;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Index Scan using employees_pkey on employees (cost=0.00..17.75 rows=100 width=44) (actual time=3.773..3.776 rows=1 loops=1)
Index Cond: (k1 = 2)
Filter: (floor(((k2)::numeric + 1.5)) = (v1)::numeric)
Planning Time: 0.145 ms
Execution Time: 4.840 ms
Peak Memory Usage: 8 kB
(6 rows)
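The BEGIN ... ROLLBACK wrapper matters most for data-modifying statements, because EXPLAIN ANALYZE actually executes the statement it analyzes. The following sketch uses an illustrative UPDATE (not part of the preceding examples) on the employees table; rolling back the transaction discards the modification while still producing run-time statistics:

BEGIN;
-- Illustrative update; its effect is discarded by the ROLLBACK.
EXPLAIN ANALYZE UPDATE employees SET v1 = v1 + 1 WHERE k1 = 1;
ROLLBACK;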
In addition to the cost estimates from the query planner, EXPLAIN ANALYZE displays the run-time statistics collected during statement execution, as shown in the following example:
yugabyte=# EXPLAIN ANALYZE SELECT * FROM employees a LEFT JOIN LATERAL
(SELECT * FROM employees b WHERE a = b) c ON TRUE;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Merge Left Join (cost=299.66..379.66 rows=5000 width=88) (actual time=2.382..2.389 rows=3 loops=1)
Merge Cond: (a.* = b.*)
-> Sort (cost=149.83..152.33 rows=1000 width=112) (actual time=1.485..1.500 rows=3 loops=1)
Sort Key: a.*
Sort Method: quicksort Memory: 25kB
-> Seq Scan on employees a (cost=0.00..100.00 rows=1000 width=112) (actual time=0.407..0.410 rows=3 loops=1)
-> Sort (cost=149.83..152.33 rows=1000 width=112) (actual time=0.657..0.658 rows=3 loops=1)
Sort Key: b.*
Sort Method: quicksort Memory: 25kB
-> Seq Scan on employees b (cost=0.00..100.00 rows=1000 width=112) (actual time=0.278..0.280 rows=3 loops=1)
Planning Time: 30.850 ms
Execution Time: 3.943 ms
Peak Memory Usage: 137 kB
(13 rows)
The output from the preceding example includes the number of re-scans (loops) each node performed, the number of milliseconds that passed before each execution node returned its first row, the total time before it returned its last tuple, and the number of tuples it returned.
EXPLAIN, on the other hand, does not provide this additional information, as shown in the following example:
yugabyte=# EXPLAIN SELECT * FROM employees a LEFT JOIN LATERAL
(SELECT * FROM employees b WHERE a = b) c ON TRUE;
QUERY PLAN
------------------------------------------------------------------------------
Merge Left Join (cost=299.66..379.66 rows=5000 width=88)
Merge Cond: (a.* = b.*)
-> Sort (cost=149.83..152.33 rows=1000 width=112)
Sort Key: a.*
-> Seq Scan on employees a (cost=0.00..100.00 rows=1000 width=112)
-> Sort (cost=149.83..152.33 rows=1000 width=112)
Sort Key: b.*
-> Seq Scan on employees b (cost=0.00..100.00 rows=1000 width=112)
(8 rows)
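If you want to consume plans programmatically, you can apply the FORMAT option described earlier to the same statement. This is a minimal sketch; the JSON output is omitted here:

EXPLAIN (FORMAT JSON) SELECT * FROM employees a LEFT JOIN LATERAL
    (SELECT * FROM employees b WHERE a = b) c ON TRUE;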
Real world example
The following example is drawn from a real-world scenario, using the EXPLAIN statement to view query plans, and then optimizing those queries by adding indexes and adjusting tables.
Optimize SELECT COUNT using an index
The following table is representative of the customer's data.
Table "public.contacts"
Column | Type | Collation | Nullable | Default
----------------+--------------------------------+-----------+----------+--------
id | bigint | | not null |
account_id | integer | | |
email | character varying | | |
first_name | character varying | | |
last_name | character varying | | |
address_line_1 | character varying | | |
address_line_2 | character varying | | |
address_city | character varying | | |
address_state | character varying | | |
address_postal | character varying | | |
created_at | timestamp(6) without time zone | | not null |
updated_at | timestamp(6) without time zone | | not null |
is_over_charged | boolean | | | false
is_paid | boolean | | | false
data_source | character varying | | |
Indexes:
"contacts_pkey" PRIMARY KEY, lsm (id HASH)
Running the following queries with EXPLAIN shows the query execution plan that YSQL generates for each statement.
yugabyte=# explain SELECT COUNT(*) FROM contacts WHERE contacts.account_id = 1234
AND contacts.is_paid = TRUE
AND contacts.is_over_charged = TRUE
AND (updated_at > '2021-04-12 12:00:00');
QUERY PLAN
----------------------------------------------
Aggregate (cost=107.50..107.51 rows=1 width=8)
  ->  Seq Scan on contacts (cost=0.00..105.00 rows=1000 width=0)
        Filter: (is_paid AND is_over_charged AND (updated_at > '2021-04-12 12:00:00'::timestamp without time zone) AND (account_id = 1234))
(3 rows)
yugabyte=# explain SELECT COUNT(*) FROM contacts WHERE contacts.account_id = 5678
AND contacts.is_paid = TRUE
AND contacts.is_over_charged = TRUE
AND (updated_at > '2021-04-12 12:00:00');
QUERY PLAN
----------------------------------------------
Aggregate (cost=107.50..107.51 rows=1 width=8)
  ->  Seq Scan on contacts (cost=0.00..105.00 rows=1000 width=0)
        Filter: (is_paid AND is_over_charged AND (updated_at > '2021-04-12 12:00:00'::timestamp without time zone) AND (account_id = 5678))
(3 rows)
yugabyte=# explain SELECT COUNT(*) FROM contacts WHERE contacts.account_id = 7890
AND contacts.is_paid = FALSE
AND (updated_at > '2021-04-12 12:00:00');
QUERY PLAN
----------------------------------------------
Aggregate (cost=107.50..107.51 rows=1 width=8)
  ->  Seq Scan on contacts (cost=0.00..105.00 rows=1000 width=0)
        Filter: ((NOT is_paid) AND (updated_at > '2021-04-12 12:00:00'::timestamp without time zone) AND (account_id = 7890))
(3 rows)
In each case, the queries do a Seq Scan (sequential scan) on the table. This operation scans the entire table to retrieve the desired columns. Even when the lookup uses the partition keys, a lot of scanning is still required.
Avoid SELECT COUNT(*) queries in most cases, as they can require a full scan of the table to get the results. This can cause query degradation, and in some cases cause the query to not return at all.
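When an exact result is not required, one common PostgreSQL-compatible workaround (shown here as a sketch, not part of the original scenario) is to read the planner's row estimate from the pg_class catalog; its accuracy depends on when the table was last analyzed:

-- Rough row-count estimate from the system catalog.
-- Only as fresh as the last ANALYZE of the table.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'contacts';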
Because most of the queries above use account_id as the main qualifier, you can avoid a sequential scan by creating an index directly on that column, and then using the INCLUDE clause to cover the other columns that you also want in the index. Indexing is a powerful tool that can speed up queries with higher latencies. When creating an index, consider the column cardinality, as well as the different index types.
Create the index as follows:
create index contacts_account_id on contacts (account_id hash, updated_at desc) include (is_paid, is_over_charged);
With the index in place, the queries now do an index scan rather than a sequential scan to get the data, significantly improving performance.
yugabyte=# explain SELECT COUNT(*) FROM contacts WHERE contacts.account_id = 1234
AND contacts.is_paid = TRUE
AND contacts.is_over_charged = TRUE
AND (updated_at > '2021-04-12 12:00:00');
QUERY PLAN
------------------------------------------
Aggregate (cost=5.30..5.31 rows=1 width=8)
  ->  Index Scan using contacts_account_id on contacts (cost=0.00..5.28 rows=10 width=0)
        Index Cond: (account_id = 1234)
        Filter: (is_paid AND is_over_charged AND (updated_at > '2021-04-12 12:00:00'::timestamp without time zone))
(4 rows)
Time: 57.208 ms
Previous run time: 194 seconds
yugabyte=# explain SELECT COUNT(*) FROM contacts WHERE contacts.account_id = 5678
AND contacts.is_paid = TRUE
AND contacts.is_over_charged = TRUE
AND (updated_at > '2021-04-12 12:00:00');
QUERY PLAN
------------------------------------------
Aggregate (cost=5.30..5.31 rows=1 width=8)
  ->  Index Scan using contacts_account_id on contacts (cost=0.00..5.28 rows=10 width=0)
        Index Cond: (account_id = 5678)
        Filter: (is_paid AND is_over_charged AND (updated_at > '2021-04-12 12:00:00'::timestamp without time zone))
(4 rows)
Time: 11.923 ms
Previous run time: 188 seconds
yugabyte=# explain SELECT COUNT(*) FROM contacts WHERE contacts.account_id = 7890
AND contacts.is_paid = FALSE
AND (updated_at > '2021-04-12 12:00:00');
QUERY PLAN
------------------------------------------
Aggregate (cost=5.30..5.31 rows=1 width=8)
  ->  Index Scan using contacts_account_id on contacts (cost=0.00..5.28 rows=10 width=0)
        Index Cond: (account_id = 7890)
        Filter: ((NOT is_paid) AND (updated_at > '2021-04-12 12:00:00'::timestamp without time zone))
(4 rows)
Time: 46.658 ms
Previous run time: 147 seconds
Optimize SELECT by changing table sorting
The following query retrieves data from an account table where some indexes are already defined.
Table definition:
yugabyte=# \d accounts
Table "public.accounts"
Column | Type | Collation | Nullable | Default
--------------+-----------------------------+-----------+----------+--------
id | bigint | | not null | nextval('accounts_id_seq'::regclass)
company_name | character varying(125) | | not null |
status | character varying(25) | | |
first_name | character varying(55) | | |
last_name | character varying(55) | | |
phone | character varying(25) | | |
created_at | timestamp without time zone | | not null |
updated_at | timestamp without time zone | | not null |
product_id | integer | | |
business_type | character varying(55) | | |
sales_rep | character varying | | |
Indexes:
"accounts_pkey" PRIMARY KEY, lsm (id HASH)
"index_on_company_name" lsm (company_name HASH)
EXPLAIN output for the query is as follows:
yugabyte=# explain SELECT accounts.* FROM accounts
ORDER BY accounts.id desc, accounts.id desc LIMIT 25 OFFSET 0;
QUERY PLAN
----------------------------------------------
Limit (cost=128.22..128.28 rows=25 width=1642)
  ->  Sort (cost=128.22..130.72 rows=1000 width=1642)
        Sort Key: id DESC
        ->  Seq Scan on accounts (cost=0.00..100.00 rows=1000 width=1642)
In this case, the entire table is scanned sequentially and the result is then sorted, both of which add time to the query and degrade performance.
To optimize this query, adjust the sorting of the table by the primary key to be DESC rather than HASH.
CREATE TABLE public.accounts ( id bigint NOT NULL, ... , PRIMARY KEY(id DESC));
Now, the query no longer does a sequential scan; instead it uses an index scan, cutting execution time from 430ms to 3ms.
yugabyte=# explain SELECT accounts.* FROM accounts
ORDER BY accounts.id desc, accounts.id desc LIMIT 25 OFFSET 0;
QUERY PLAN
------------------------------------------
Limit (cost=0.00..2.85 rows=25 width=1642)
  ->  Index Scan using accounts_pkey on accounts (cost=0.00..114.00 rows=1000 width=1642)
Time: 2.994 ms
Previous run time: 426.627 ms
Optimize SELECT using an index
The query in the following example runs a sequential scan, which you can fix by adding an index.
yugabyte=# \d account_type
Table "public.account_type"
Column | Type | Collation | Nullable | Default
------------+--------------------------------+-----------+----------+--------
id | bigint | | not null | nextval('account_domains_id_seq'::regclass)
account_id | integer | | |
type | character varying(55) | | |
url | character varying | | |
is_valid | boolean | | |
created_at | timestamp(6) without time zone | | not null |
updated_at | timestamp(6) without time zone | | not null |
verified_at | timestamp without time zone | | |
Indexes:
"account_type_pkey" PRIMARY KEY, lsm (id HASH)
yugabyte=# explain SELECT account_type.* FROM account_type
WHERE account_type.account_id = 6873 ORDER BY account_type.url ASC;
QUERY PLAN
----------------------------------------------
Sort (cost=152.33..154.83 rows=1000 width=237)
  Sort Key: url
  ->  Seq Scan on account_type (cost=0.00..102.50 rows=1000 width=237)
        Filter: (account_id = 6873)
The query runs a sequential scan on the account_type table, so adding an index on account_id prevents the full scan of the table, as follows:
create index account_id on account_type(account_id);
With the new index, the query scans the index rather than the larger main table, significantly improving performance.
yugabyte=# explain SELECT account_type.* FROM account_type
WHERE account_type.account_id = 6873 ORDER BY account_type.url ASC;
QUERY PLAN
----------------------------------------
Sort (cost=5.39..5.42 rows=10 width=237)
  Sort Key: url
  ->  Index Scan using account_id on account_type (cost=0.00..5.22 rows=10 width=237)
        Index Cond: (account_id = 6873)
Time: 71.757 ms
Previous runtime: 460 ms
Learn more
- Refer to Get query statistics using pg_stat_statements to track planning and execution of all the SQL statements.
- Refer to View live queries with pg_stat_activity to analyze live queries.
- Refer to View COPY progress with pg_stat_progress_copy to track the COPY operation status.
- Refer to Optimize YSQL queries using pg_hint_plan to influence the query execution plan generated by YSQL.