StackOverflowError on joins in AWS Glue


Hello,

We are trying to join some dataframes in Glue using Spark and Python.

The dataframes are created from the same source table, but since we apply around 1,000 withColumn operations to rename, divide, and add column values, we need to split the table into several dataframes using select; otherwise the runtime blows up.

Writing all those single dataframes to the AWS Glue Catalog (using Iceberg as the table format) works for a dataframe with 2(!) rows. For dataframes with 30+ rows, our job fails with a StackOverflowError when chaining the join method six times (as shown below). Joining only two dataframes works. All dataframes have exactly the same number of rows, but vary in the number of columns (between 20 and 100).

result_table = (
    kpi_results[0]["data"]
    .join(kpi_results[1]["data"], on=join_columns, how="inner")
    .join(kpi_results[2]["data"], on=join_columns, how="inner")
    .join(kpi_results[3]["data"], on=join_columns, how="inner")
    .join(kpi_results[4]["data"], on=join_columns, how="inner")
    .join(kpi_results[5]["data"], on=join_columns, how="inner")
    .join(kpi_results[6]["data"], on=join_columns, how="inner")
    .na.fill(0)
)

kpi_results is a list of dictionaries in which the "data" key holds a dataframe and another key names the columns, separated by business logic, that this dataframe contains. Since this code works for a dataframe containing 2 rows, the structure itself should not be the issue.

We are using 12 DPUs on a G.2X worker type with the following configs set: --conf spark.driver.maxResultSize=28g --conf spark.driver.memory=28g --conf spark.executor.pyspark.memory=28g --conf spark.executor.memory=28g --conf spark.executor.extraJavaOptions=-Xss512m --conf spark.driver.extraJavaOptions=-Xss512m

The Spark UI indicates that there isn't any memory issue: every executor has enough memory, and the input data is only 26 MiB.

Julian
Asked 7 months ago · 358 views
1 Answer

The stack trace will give you key information about what is overflowing and whether it happens during planning (more likely) or execution.
Often you can work around it simply by saving the joined DataFrame into a variable and then joining onto that, instead of chaining many joins over many columns.

AWS
EXPERT
Answered 7 months ago
