PySpark DataFrame as RecordIO protobuf


I want to save my PySpark DataFrame in RecordIO protobuf format. I am using Amazon EMR to run my PySpark scripts, and I want to use AWS SageMaker to train a machine learning model. SageMaker Pipe mode only accepts RecordIO protobuf as input, hence my question.

I have tried to save my PySpark DataFrame in RecordIO protobuf format as follows:

output_path = "s3://my_path/output_processed"
df_transformed.write.format("sagemaker").mode("overwrite").save(output_path)

But when I run the SageMaker training job, I get a missing-values error, even though my DataFrame does not have missing values. Any idea what might help?

Omar
Asked 5 months ago · 182 views

1 Answer

Hi,

For the missing-values error, a few things are worth checking:

- Validate your data and make sure the data types of the columns match what the algorithm expects. For most SageMaker built-in algorithms, that means a DoubleType label column and a Spark ML Vector column of features.
- Check for unexpected values or outliers, including NaNs, which Spark treats differently from nulls.
- Review the serialization step: when writing RecordIO protobuf, make sure all data points are captured and none are misinterpreted or excluded, which can surface as a perceived missing-values issue.
- Start with a small data sample, run data-integrity checks on it, and only then scale up; see the sketch below.

For more guidance on data preparation, you can refer to the Prepare data with advanced transformations documentation.
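Here is a minimal sketch of those checks in PySpark. The tiny example DataFrame and the "label"/"features" column names are assumptions standing in for your df_transformed; the "sagemaker" write format and its labelColumnName/featuresColumnName options come from the SageMaker Spark (sagemaker-pyspark) library, which must be on the Spark classpath.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.getOrCreate()

# Stand-in for df_transformed from the question: a DoubleType label
# column plus a Spark ML Vector features column, which is the shape
# the sagemaker-spark writer expects.
df_transformed = spark.createDataFrame(
    [(1.0, Vectors.dense([0.1, 0.2])),
     (0.0, Vectors.dense([0.3, 0.4]))],
    ["label", "features"],
)

# 1. Confirm the schema: label should be double, features should be vector.
df_transformed.printSchema()

# 2. Count nulls in every column; all counts should be 0.
df_transformed.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c)
     for c in df_transformed.columns]
).show()

# 3. Count NaNs in float/double columns separately (NaN is not null in
#    Spark, and isnan() is only defined for numeric columns, not vectors).
numeric_cols = [c for c, t in df_transformed.dtypes if t in ("float", "double")]
if numeric_cols:
    df_transformed.select(
        [F.count(F.when(F.isnan(c), c)).alias(c) for c in numeric_cols]
    ).show()

# 4. Write RecordIO protobuf, naming the columns explicitly rather than
#    relying on defaults.
output_path = "s3://my_path/output_processed"  # path from the question
(df_transformed.write.format("sagemaker")
    .option("labelColumnName", "label")
    .option("featuresColumnName", "features")
    .mode("overwrite")
    .save(output_path))

If any of the null/NaN counts come back non-zero, impute or drop those rows before writing; if they are all zero, the error is more likely a schema mismatch between what you wrote and what the algorithm's Pipe-mode reader expects.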

I hope this helps.

BezuW (AWS)
Answered 5 months ago
