Talk: PGConf.EU 2013

TRANSCRIPT

1. The PostgreSQL Query Planner. Atri Sharma.

2. What's this talk based on? Research done by hackers like Robert Haas in the past, some mailing-list reading, discussions with smart people, etc.

3. Who's this talk for? Users of Postgres who want to know how the query planner's heuristics can affect their queries.

4. A little about the PostgreSQL query planner. Quoting Bruce Momjian: "The optimizer is the 'brain' of the database, interpreting SQL queries and determining the fastest method of execution."

5. The decision maker. The fate of your query's performance is decided here.

6. Where can problems arise in the query planner? Row-count estimates, DBA settings, and the planner's blocking points (what the planner can't do at this point in time).

7. Row-count stats. For each column, the backend essentially holds the MCVs (most common values) and a histogram of the distribution of the rest of the values. The number of MCVs stored is a value known as the statistics target. Incorrect or outdated statistics lead to bad selectivity estimates, and hence to not-so-good plans. (You can inspect these statistics; see the pg_stats sketch after slide 26.)

8. User settings. PostgreSQL depends quite a bit on the page-cost values (seq_page_cost, random_page_cost) and others. Setting them to arbitrary values will cause the query planner to choose an inefficient plan over an efficient alternative.

9. Planner not-done-yets. Some queries are slow to execute because optimizing them depends on a feature that is not yet present in the query planner.

10. Selectivity estimate issues. Arbitrary estimates give bad values. Cross-table correlations aren't taken into consideration. The planner doesn't understand expressions like (a + 0 = a).

11. Settings issues. DBAs can use some settings to tune the query planner. Those settings can be harmful if set to incorrect values.

12. Planner cost-evaluation parameters: seq_page_cost = 1.0, random_page_cost = 4.0, cpu_tuple_cost = 0.01, cpu_index_tuple_cost = 0.005, cpu_operator_cost = 0.0025. These parameters are used by the planner to estimate the costs of plans.

13. random_page_cost abstracts the cost of reading a random page; seq_page_cost abstracts the same for a sequential read. In real-world scenarios, setting random_page_cost to a lower value can help get better plans, since modern disks have narrowed the gap between random and sequential page-read costs.

14. effective_cache_size. Sets the planner's assumption about the effective size of the disk cache available to a single query. Should be set to 75% of RAM.

15. Cases. Some examples of situations where planner decisions can lead to a bad plan. Some cases are not strictly related to the query planner, but are fun in general!

16. WITH. One of the classic cases. WITH is an optimization barrier.

17. Why does WITH face performance issues? We have to understand how WITH is processed by the planner. The query planner intentionally runs the WITH part first. It divides the query into separate plans, which doesn't match up with the pull architecture of the executor.

18-19. Solution: use JOINs instead (a sketch of this rewrite appears after slide 26).

20. Feature alert! The current implementation approach for GROUPING SETS uses a CTE-based architecture, so it suffers from the same problem of divided query plans. If that problem is solved, we could have an efficient implementation of GROUPING SETS in Postgres, which would be very nice.

21. Materialized views. Joining a materialized view back to its parent table is expensive: the two tables depend on each other, but the planner does not know about any correlation between them.

22. Solution: use intermediate tables to store the intermediate results.

23. Aggregates. A call like agg(DISTINCT ...) does not allow the planner to use a HashAgg node, leading to slower query plans.

24. Solution: use subqueries instead (see the DISTINCT-aggregate sketch after slide 26).

25. EXISTS and NOT EXISTS. Using EXISTS and NOT EXISTS in your queries is a good idea and will generally lead to better plans. Use them instead of complete queries if you only want to test for the presence or absence of specific values.

26. Reason: EXISTS and NOT EXISTS allow the planner to use semi-joins and anti-joins, respectively. These are much more efficient than full joins for merely testing the presence or absence of values.
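To see the statistics slide 7 describes, you can query the pg_stats view directly. A minimal sketch; the table and column names (orders, customer_id) are hypothetical, not from the talk:

    -- MCVs, their frequencies, and the histogram for one column:
    SELECT most_common_vals, most_common_freqs, histogram_bounds
    FROM pg_stats
    WHERE tablename = 'orders' AND attname = 'customer_id';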
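The rewrite suggested on slides 18-19, as a minimal sketch. The orders table and its columns are hypothetical; the point is that the CTE form hides the outer filter from the planner:

    -- CTE form: the WITH part is planned and run on its own, so the
    -- customer_id filter is not pushed down into the scan.
    WITH recent_orders AS (
        SELECT * FROM orders WHERE order_date > DATE '2013-01-01'
    )
    SELECT * FROM recent_orders WHERE customer_id = 42;

    -- Plain-query form: both conditions reach the scan, so an
    -- index on (customer_id, order_date) can be used.
    SELECT *
    FROM orders
    WHERE order_date > DATE '2013-01-01'
      AND customer_id = 42;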
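Likewise for slides 23-24, a sketch of the DISTINCT-aggregate rewrite, with the same hypothetical table:

    -- count(DISTINCT ...) rules out a HashAgg node:
    SELECT count(DISTINCT customer_id) FROM orders;

    -- De-duplicating in a subquery first lets the planner consider
    -- hash-based aggregation for the inner step:
    SELECT count(*)
    FROM (SELECT DISTINCT customer_id FROM orders) AS s;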
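And a sketch of the EXISTS and NOT EXISTS forms from slides 25-26, with hypothetical customers and orders tables:

    -- EXISTS lets the planner use a semi-join: it can stop at the
    -- first matching order per customer.
    SELECT c.name FROM customers c
    WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

    -- NOT EXISTS maps to an anti-join. Note that the superficially
    -- similar NOT IN usually cannot, because of its NULL semantics.
    SELECT c.name FROM customers c
    WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);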
27. Prepared statements. Prepared statements are a recurring source of query-performance problems. They perform much better now, though, thanks to some recent work.

28. What is bad about prepared statements? Before 9.2, the user had a choice: use prepared statements and get generic plans, or skip them and replan every time. From 9.2 on, that restriction is gone, but the downside is that the user has no control over whether a prepared statement is replanned.

29. Pre-9.2. Prepared statements are good for workloads where the selectivity of the queries stays within a small range. Widely varying selectivity values can lead to bad performance for prepared statements.

30. Post-9.2. The thing to watch for post-9.2 is whether the planner is overlooking a better alternative plan and choosing a suboptimal one.

31. Example: SELECT * FROM sometable WHERE id = ANY ($1); where $1 was a small array. The generic plan for this is of course trivial and would have worked fine. But the planner has to estimate the cost of the generic plan without knowing how many elements are in the array.

32. Example, continued. It assumed an array size for the generic plan that was much larger than the actual size of the array. What that meant was that the generic plan was costed at about twice the cost of the custom plan, even though it was the exact same plan.

33. Take-away lesson: be wary when using prepared statements, whether pre- or post-9.2. They are much better than before, but surprises may still turn up. (A sketch of this example appears after slide 47.)

34. Join reordering. The planner may reorder the joins in a query based on heuristics that it thinks will lead to better plans. Generally this is a good idea; we should not assume we know the optimal join order better than the planner does. Still, if we are inclined to do so, there is a setting known as join_collapse_limit.

35. Gotcha! There is a catch here, though. Lowering join_collapse_limit can lead to suboptimal plans, but faster planning. If you care less about suboptimal plans than about planner speed, set join_collapse_limit to a lower value (see the settings sketch after slide 47).

36. Genetic Query Optimizer. The genetic query optimizer, or GEQO, is used when the number of tables involved in a join exceeds a specific limit, that is, when exhaustively searching the whole space of join orders is not feasible.

37. Genetic Query Optimizer, continued. GEQO uses genetic algorithms to generate query plans and evaluate their fitness, producing better plans in each successive population.

38. GEQO gotcha. GEQO is useful for generating plans when the number of tables is high, and for generating plans quickly. However, it can lead to suboptimal plans. Lower geqo_threshold if you care more about planner speed than about plan optimality.

39. Note about join_collapse_limit and geqo_threshold: since the default value of geqo_threshold is higher than the default value of join_collapse_limit, GEQO isn't used by default for explicit JOIN syntax.

40. Lack of planner heuristics. Refer to "Planner Conceptual Error when Joining a Subquery -- Outer Query Condition not Pulled Into Subquery" on the performance list. It tells us that the planner does not understand that a = b AND (b = c OR b = d) implies (a = c OR a = d); the planner lacks such heuristics. Another example is SELECT * FROM foo WHERE (foo.a + 0 = foo.a);

41. So, queries like...

42.
    SELECT *
    FROM parent
    INNER JOIN child ON child.parent_id = parent.id
    LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c
               FROM parent
               INNER JOIN child ON child.parent_id = parent.id
               WHERE child.data > 25
               GROUP BY 1, 2) agg1
           ON agg1.id = parent.id AND agg1.bin = parent.bin
    WHERE parent.bin IN (1, 2) AND agg1.c >= 3;

43. ...should be changed to...

44. Notice the 'bin = 1' here:
    SELECT *
    FROM parent
    INNER JOIN child ON child.parent_id = parent.id
    LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c
               FROM parent
               INNER JOIN child ON child.parent_id = parent.id
               WHERE child.data > 25
               GROUP BY 1, 2) agg1
           ON agg1.id = parent.id AND agg1.bin = parent.bin
    WHERE parent.bin = 1 AND agg1.c >= 3;

45. General advice 1. Use tables to store intermediate results and process over them wherever possible. This avoids pitfalls such as the materialized-view join, and the UNION and DISTINCT pitfalls, among others.

46. General advice 2. Whenever the planner's cost estimates seem to be losing accuracy, run ANALYZE. This updates the planner statistics and may lead to better results.

47. General advice 3. When the planner's cost estimates don't seem right, consider increasing the statistics target (the number of MCVs stored) with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS (a concrete form follows below).
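As referenced on slide 33, the prepared-statement scenario from slides 31-32 in runnable form; sometable is from the talk, the parameter values are hypothetical:

    PREPARE q(int[]) AS
        SELECT * FROM sometable WHERE id = ANY ($1);

    -- From 9.2 on, the server compares a generic plan against custom
    -- plans. The generic plan must be costed without knowing the array
    -- length, which is how the same plan came out at twice the cost.
    EXECUTE q(ARRAY[1, 2, 3]);

    -- EXPLAIN EXECUTE shows which plan was actually chosen:
    EXPLAIN EXECUTE q(ARRAY[1, 2, 3]);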
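The planner-speed knobs from slides 35 and 38 are ordinary session settings. The values below are illustrative only, not recommendations from the talk:

    -- Trade plan quality for planning speed in this session:
    SET join_collapse_limit = 4;  -- default 8; limits join reordering
    SET geqo_threshold = 8;       -- default 12; hands large joins to GEQO sooner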
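A concrete form of slides 46-47, with hypothetical table and column names:

    -- Refresh planner statistics for one table:
    ANALYZE orders;

    -- Raise the per-column statistics target (the default is 100),
    -- then re-run ANALYZE so the new target takes effect:
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
    ANALYZE orders;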
48. Future developments for the query planner. The planner still lacks some features: understanding of foreign keys, inner-join removal, more heuristics. Hopefully we will have them all someday. CTE planning needs to be improved and brought in line with the Postgres executor's data-pull model.

49. Case study: adding foreign-key understanding to the planner, in the context of inner-join removal. The main issue identified is that if a row is deleted, then in the window between the deletion and the firing of the FK trigger, any plan that was generated relying on that row's foreign-key reference will be invalid.

50. Thank you!