How do I correlate the query plan with the query report in Amazon Redshift?
I want to correlate the query plan with the query report in my Amazon Redshift cluster.
Short description
To determine the usage that's required to run a query in Amazon Redshift, run the EXPLAIN command. The execution plan that the EXPLAIN command returns outlines the query planning and execution steps that are involved. Then, use the SVL_QUERY_REPORT system view to view query information at a cluster slice level. You can use the slice-level information to detect uneven data distribution across the cluster that can affect query performance.
Amazon Redshift processes the query plan and translates the plan into steps, segments, and streams. For more information, see Query planning and execution workflow.
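The stream/segment/step hierarchy can be pictured as nested pipelines. The following is a minimal, illustrative Python sketch of that decomposition for the example query in this article; the exact grouping shown here is an assumption for illustration, not real Redshift output.

```python
# Illustrative only: streams contain segments, and each segment is a
# pipeline of steps that runs on every slice. The grouping below is a
# hypothetical sketch of the example query's plan, not Redshift output.
plan = {
    "stream 0": {
        "segment 0": ["scan event", "project", "bcast"],     # broadcast the inner table
        "segment 1": ["scan worktable", "project", "hash"],  # build the hash table
    },
    "stream 1": {
        "segment 2": ["scan sales", "project", "hjoin", "aggr"],  # probe and aggregate
    },
}

def steps_in_order(plan):
    """Flatten the plan into (segment, step_index, label) tuples."""
    out = []
    for stream in plan.values():
        for segment, steps in stream.items():
            for i, label in enumerate(steps):
                out.append((segment, i, label))
    return out

for seg, i, label in steps_in_order(plan):
    print(seg, i, label)
```

Each (segment, step) pair in this sketch corresponds to one label value per slice in SVL_QUERY_REPORT, which is what the mapping steps below rely on.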
Resolution
Create a table and get the execution plan and SVL query report for the query
To create a table and get the execution plan and SVL query report, complete the following steps:
- Run the following query, which doesn't use a join on the distribution keys:

  ```
  select eventname, sum(pricepaid)
  from sales, event
  where sales.eventid = event.eventid
  group by eventname
  order by 2 desc;
  ```

  This query distributes the inner table to all compute nodes.
- Retrieve the query plan:

  ```
  EXPLAIN <query>;

                                                QUERY PLAN
  --------------------------------------------------------------------------------------------------------
   XN Merge  (cost=1002815368414.24..1002815368415.67 rows=571 width=27)
     Merge Key: sum(sales.pricepaid)
     ->  XN Network  (cost=1002815368414.24..1002815368415.67 rows=571 width=27)
           Send to leader
           ->  XN Sort  (cost=1002815368414.24..1002815368415.67 rows=571 width=27)
                 Sort Key: sum(sales.pricepaid)
                 ->  XN HashAggregate  (cost=2815368386.67..2815368388.10 rows=571 width=27)
                       ->  XN Hash Join DS_BCAST_INNER  (cost=109.98..2815367496.05 rows=178125 width=27)
                             Hash Cond: ("outer".eventid = "inner".eventid)
                             ->  XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=14)
                             ->  XN Hash  (cost=87.98..87.98 rows=8798 width=21)
                                   ->  XN Seq Scan on event  (cost=0.00..87.98 rows=8798 width=21)
  (12 rows)
  ```

- Run the SVL_QUERY_REPORT query to get the query report:

  ```
  select * from svl_query_report where query = query_id order by segment, step, elapsed_time, rows;
  ```

  **Note:** Replace query_id with your query ID.
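Because the same SVL_QUERY_REPORT statement is reused below with different segment filters, it can be convenient to build it programmatically. The following is a small sketch; the helper name `svl_report_sql` is an assumption, not part of any AWS library.

```python
def svl_report_sql(query_id, segment=None):
    """Build the SVL_QUERY_REPORT statement used in this article.

    query_id: the query ID from the system tables or the console.
    segment: optional segment number to filter on.
    The name of this helper is hypothetical; int() guards against
    non-numeric input being interpolated into the statement.
    """
    sql = ("select query, slice, segment, step, start_time, end_time, "
           "elapsed_time, rows, bytes, label from svl_query_report "
           f"where query = {int(query_id)}")
    if segment is not None:
        sql += f" and segment = {int(segment)}"
    return sql + " order by segment, step, elapsed_time, rows;"

print(svl_report_sql(938787, segment=0))
```

You can then pass the returned string to whatever client you use to query the cluster.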
Map the query plan with the query report
To map the query plan with the query report, complete the following steps:
- Run the following query to get the SVL_QUERY_REPORT rows for segment 0:

  ```
  select query,slice,segment,step,start_time,end_time,elapsed_time,rows,bytes,label
  from svl_query_report
  where query = 938787 and segment = 0
  order by segment, step, elapsed_time, rows;
  ```

  This segment corresponds to the following part of the query plan:

  ```
  EXPLAIN <query>;
  -> XN Hash (cost=87.98..87.98 rows=8798 width=21)
     -> XN Seq Scan on event (cost=0.00..87.98 rows=8798 width=21)
  ```

  The following is example output:

  ```
   query  | slice | segment | step |         start_time         |         end_time          | elapsed_time | rows | bytes  |           label
  --------+-------+---------+------+----------------------------+---------------------------+--------------+------+--------+------------------------------
   938787 |     0 |       0 |    0 | 2020-05-22 11:11:48.828309 | 2020-05-22 11:11:48.82987 |         1561 | 4383 | 128626 | scan tbl=278788 name=event
   938787 |     1 |       0 |    0 | 2020-05-22 11:11:48.828309 | 2020-05-22 11:11:48.82987 |         1561 | 4415 | 128918 | scan tbl=278788 name=event
   938787 |     0 |       0 |    1 | 2020-05-22 11:11:48.828309 | 2020-05-22 11:11:48.82987 |         1561 | 4383 |      0 | project
   938787 |     1 |       0 |    1 | 2020-05-22 11:11:48.828309 | 2020-05-22 11:11:48.82987 |         1561 | 4415 |      0 | project
   938787 |     0 |       0 |    2 | 2020-05-22 11:11:48.828309 | 2020-05-22 11:11:48.82987 |         1561 | 4383 | 126660 | bcast
  ...
  (6 rows)
  ```

  In the preceding output, when the segment value is 0, Amazon Redshift performs a sequential scan operation to scan the event table. You can find the sequential scan operation in the label column.

- Run the following query to get the SVL_QUERY_REPORT rows for segment 1:
  ```
  select query,slice,segment,step,start_time,end_time,elapsed_time,rows,bytes,label
  from svl_query_report
  where query = 938787 and segment = 1
  order by segment, step, elapsed_time, rows;
  ```

  The following is example output:

  ```
   query  | slice | segment | step |         start_time         |          end_time          | elapsed_time | rows | bytes  |                   label
  --------+-------+---------+------+----------------------------+----------------------------+--------------+------+--------+-------------------------------------------
   938787 |     1 |       1 |    0 | 2020-05-22 11:11:48.826864 | 2020-05-22 11:11:48.830037 |         3173 |    0 |      0 | scan tbl=376297 name=Internal Worktable
   938787 |     0 |       1 |    0 | 2020-05-22 11:11:48.826864 | 2020-05-22 11:11:48.831142 |         4278 | 8798 | 253580 | scan tbl=376297 name=Internal Worktable
   938787 |     1 |       1 |    1 | 2020-05-22 11:11:48.826864 | 2020-05-22 11:11:48.830037 |         3173 |    0 |      0 | project
   938787 |     0 |       1 |    1 | 2020-05-22 11:11:48.826864 | 2020-05-22 11:11:48.831142 |         4278 | 8798 |      0 | project
   938787 |     1 |       1 |    2 | 2020-05-22 11:11:48.826864 | 2020-05-22 11:11:48.830037 |         3173 |    0 |      0 | hash tbl=439
  ...
  (6 rows)
  ```

  The query continues to run until the segment value is 1. A hash table operation is performed on the inner table of the join.

- Run the following query to get the SVL_QUERY_REPORT rows for segment 2:
  ```
  select query,slice,segment,step,start_time,end_time,elapsed_time,rows,bytes,label
  from svl_query_report
  where query = 938787 and segment = 2
  order by segment, step, elapsed_time, rows;
  ```

  This segment corresponds to the following part of the query plan:

  ```
  EXPLAIN <query>;
  -> XN Hash Join DS_BCAST_INNER (cost=109.98..2815367496.05 rows=178125 width=27)
     Hash Cond: ("outer".eventid = "inner".eventid)
     -> XN Seq Scan on sales (cost=0.00..1724.56 rows=172456 width=14)
  ```

  The following is example output:

  ```
   query  | slice | segment | step |         start_time         |          end_time          | elapsed_time | rows  |  bytes  |           label
  --------+-------+---------+------+----------------------------+----------------------------+--------------+-------+---------+------------------------------
   938787 |     1 |       2 |    0 | 2020-05-22 11:11:48.839297 | 2020-05-22 11:11:48.865857 |        26560 | 86519 | 1730380 | scan tbl=278792 name=sales
   938787 |     0 |       2 |    0 | 2020-05-22 11:11:48.838371 | 2020-05-22 11:11:48.865857 |        27486 | 85937 | 1718740 | scan tbl=278792 name=sales
   938787 |     1 |       2 |    1 | 2020-05-22 11:11:48.839297 | 2020-05-22 11:11:48.865857 |        26560 | 86519 |       0 | project
   938787 |     0 |       2 |    1 | 2020-05-22 11:11:48.838371 | 2020-05-22 11:11:48.865857 |        27486 | 85937 |       0 | project
   938787 |     1 |       2 |    2 | 2020-05-22 11:11:48.839297 | 2020-05-22 11:11:48.865857 |        26560 | 86519 |       0 | project
   938787 |     0 |       2 |    2 | 2020-05-22 11:11:48.838371 | 2020-05-22 11:11:48.865857 |        27486 | 85937 |       0 | project
   938787 |     1 |       2 |    3 | 2020-05-22 11:11:48.839297 | 2020-05-22 11:11:48.865857 |        26560 | 86519 |       0 | hjoin tbl=439
   938787 |     0 |       2 |    3 | 2020-05-22 11:11:48.838371 | 2020-05-22 11:11:48.865857 |        27486 | 85937 |       0 | hjoin tbl=439
   938787 |     1 |       2 |    4 | 2020-05-22 11:11:48.839297 | 2020-05-22 11:11:48.865857 |        26560 | 86519 |       0 | project
   938787 |     0 |       2 |    4 | 2020-05-22 11:11:48.838371 | 2020-05-22 11:11:48.865857 |        27486 | 85937 |       0 | project
   938787 |     1 |       2 |    5 | 2020-05-22 11:11:48.839297 | 2020-05-22 11:11:48.865857 |        26560 | 86519 |       0 | project
   938787 |     0 |       2 |    5 | 2020-05-22 11:11:48.838371 | 2020-05-22 11:11:48.865857 |        27486 | 85937 |       0 | project
   938787 |     1 |       2 |    6 | 2020-05-22 11:11:48.839297 | 2020-05-22 11:11:48.865857 |        26560 |   576 |   34916 | aggr tbl=448
   938787 |     0 |       2 |    6 | 2020-05-22 11:11:48.838371 | 2020-05-22 11:11:48.865857 |        27486 |   576 |   34916 | aggr tbl=448
  ...
  (16 rows)
  ```

  In the preceding example, the query runs in segment 2 and performs a sequential scan operation to scan the sales table. In the same segment, an aggregation operation is performed to aggregate results, and a hash join operation is performed to join the tables. The join column of one of the tables is not a distribution key or a sort key. As a result, the inner table is distributed to all compute nodes as DS_BCAST_INNER, which you can see in the execution plan. You can also run this query to get the SVL_QUERY_REPORT rows for segments 3, 4, and 5.
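The per-slice rows columns in this output are what reveal uneven data distribution. The following is a minimal sketch (not an AWS tool) that computes a max/min skew ratio across slices from tuples copied out of SVL_QUERY_REPORT; the function name `slice_skew` is an assumption for illustration.

```python
# Sketch: measure how evenly a step's rows are spread across slices,
# using (query, slice, segment, step, rows) tuples copied from the
# SVL_QUERY_REPORT output above. Not an AWS utility.
from collections import defaultdict

def slice_skew(report_rows, segment, step):
    """Return max/min row counts across slices for one (segment, step).

    A ratio close to 1.0 means even distribution; a ratio well above
    1.0 suggests skew that can slow the query down.
    """
    per_slice = defaultdict(int)
    for _query, slice_num, seg, st, nrows in report_rows:
        if seg == segment and st == step:
            per_slice[slice_num] += nrows
    counts = per_slice.values()
    return max(counts) / min(counts)

# Segment 2, step 0: the sequential scan on the sales table.
rows = [
    (938787, 1, 2, 0, 86519),
    (938787, 0, 2, 0, 85937),
]
print(round(slice_skew(rows, segment=2, step=0), 4))  # prints 1.0068, nearly even
```

Here the two slices scanned 86519 and 85937 rows, so the distribution is nearly even; a large ratio on a scan or join step would point back to the distribution-key discussion above.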
In segments 3, 4, and 5, hash aggregate operations and sort operations are performed, which you can identify from the labels "aggr" and "sort". The hash aggregate operation is performed on unsorted grouped aggregate functions. The sort operation is performed to evaluate the ORDER BY clause.
After all the segments are used, the query runs a network operation on segments 4 and 5 to send the intermediate results to the leader node. The results are sent to the leader node for additional processing. You can see these results with the "return" label.
After the query completes, run the following query to check the execution time of the query in milliseconds:

```
select datediff(ms, exec_start_time, exec_end_time) from stl_wlm_query where query = 938787;

 date_diff
-----------
       101
(1 row)
```
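Note that datediff(ms, ...) returns milliseconds, while the elapsed_time column in SVL_QUERY_REPORT is in microseconds. A quick sketch of the arithmetic, using the segment 0 scan timestamps shown earlier:

```python
# Reproduce the elapsed-time arithmetic from the report timestamps.
# SVL_QUERY_REPORT's elapsed_time is in microseconds, so the segment 0
# scan (start .828309, end .82987) should come out to 1561.
from datetime import datetime

def elapsed_micros(start, end):
    """Microseconds between two 'YYYY-MM-DD HH:MM:SS.f' timestamps."""
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return round(delta.total_seconds() * 1_000_000)

print(elapsed_micros("2020-05-22 11:11:48.828309",
                     "2020-05-22 11:11:48.82987"))  # prints 1561
```

This matches the elapsed_time value of 1561 in the segment 0 report output, which confirms how the two units relate.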
Optimize your query
When you analyze the query plan, you can adjust your query's performance based on your use case. For more information, see Top 10 performance tuning techniques for Amazon Redshift.