PySpark DataFrame as RecordIO protobuf


I want to save my PySpark DataFrame in RecordIO protobuf format. I am using Amazon EMR to run my PySpark scripts, and I want to use AWS SageMaker to train a machine learning model. SageMaker Pipe mode only accepts RecordIO protobuf as input, hence my question.

I have tried to save my PySpark DataFrame in RecordIO protobuf format as follows:

# The "sagemaker" output format is provided by the sagemaker-pyspark library.
output_path = "s3://my_path/output_processed"
df_transformed.write.format("sagemaker").mode("overwrite").save(output_path)

But when I run the SageMaker model I get an error about missing values, even though my DataFrame does not have missing values. Any idea what might help?
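For reference, the "sagemaker" format comes from the sagemaker-pyspark library, whose protobuf writer serializes a column of Spark ML vectors (as far as I can tell, named "features" by default, with an optional numeric "label" column for supervised training). A minimal sketch of that preparation, with placeholder column names standing in for the real schema:

from pyspark.ml.feature import VectorAssembler

# Placeholder feature columns -- substitute the real ones.
feature_cols = ["col_a", "col_b", "col_c"]

# Pack the features into a single Spark ML vector column; the writer
# serializes this vector column (plus the label column) into the
# RecordIO protobuf records that Pipe mode streams.
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
df_transformed = assembler.transform(df).select("label", "features")

df_transformed.write.format("sagemaker").mode("overwrite").save(output_path)

If the features are left as separate scalar columns instead of one assembled vector column, the writer may not pick them up as intended, which could look like missing values on the SageMaker side.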

Omar
asked 5 months ago · 180 views
1 answer

Hi,

For the missing-value error, validate your data and make sure the data types of the columns in your dataset match the data types the algorithm expects. Also check for any unexpected values or outliers in your dataset. When using the RecordIO format, review the serialization process to confirm it captures all data points without misinterpreting or excluding any; either can surface as a perceived missing-value issue. You can start with a small data sample and run data-integrity checks on it. For more guidance on preparing data, refer to the Prepare data with advanced transformations documentation.
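As a concrete starting point, here is a minimal sketch of such a check that can be run on EMR before writing; df_transformed is the DataFrame from the question, everything else is illustrative:

from pyspark.sql import functions as F

# Count nulls in every column; a non-zero count pinpoints the culprit.
df_transformed.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df_transformed.columns]
).show()

# NaN values in numeric columns can also surface as "missing" after
# serialization, so count them separately.
numeric_cols = [f.name for f in df_transformed.schema.fields
                if f.dataType.typeName() in ("double", "float")]
df_transformed.select(
    [F.count(F.when(F.isnan(F.col(c)), c)).alias(c) for c in numeric_cols]
).show()

# Confirm the column types match what the serializer expects.
df_transformed.printSchema()

If every count comes back zero, the problem is more likely in the serialization step itself, for example the features not being assembled into the single vector column the writer expects.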

I hope this helps.

BezuW (AWS)
answered 5 months ago
