Hi team,
I have a Spark process that uses S3 as object storage.
The application runs frequently (multiple times a day), writing thousands of objects to an encrypted S3 bucket as part of a Hive external table.
The writes use multipart upload through a component we do not manage ourselves (for brief context: the S3A "magic" committer).
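For reference, this is roughly how our job enables the magic committer. A minimal sketch assuming Spark 3.x with the spark-hadoop-cloud module on the classpath; the property names come from the S3A committer documentation, and the exact values in our deployment may differ:

```properties
# Select the S3A "magic" committer for s3a:// output paths
spark.hadoop.fs.s3a.committer.name=magic
spark.hadoop.fs.s3a.committer.magic.enabled=true

# Route Spark SQL/Parquet commits through the S3A committer
spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
```

The magic committer writes task output as incomplete multipart uploads under a `__magic` path and only completes them at job commit, which is why the multipart upload calls themselves are outside our application code.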
The process succeeds about 99% of the time, but intermittent errors occasionally impact the object writes and cause the application to fail.
Should I provide an extended request ID to help troubleshoot the issue?
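In case it helps, this is how we could capture the IDs from a failed call. A hedged sketch (the function name and the synthetic metadata dict are illustrative, not from our logs): in the AWS SDK error responses, "RequestId" carries the `x-amz-request-id` header and "HostId" carries the extended request ID (`x-amz-id-2`), which is the pair support normally asks for:

```python
def s3_error_ids(response_metadata: dict) -> tuple[str, str]:
    """Extract (request_id, extended_request_id) from an S3 error's
    ResponseMetadata dict, as surfaced by the AWS SDKs."""
    # "RequestId" maps to the x-amz-request-id header;
    # "HostId" maps to the extended request ID (x-amz-id-2 header).
    return (
        response_metadata.get("RequestId", ""),
        response_metadata.get("HostId", ""),
    )

# Synthetic example of the metadata attached to a failed request:
meta = {"RequestId": "EXAMPLE123", "HostId": "EXAMPLEextendedId=="}
print(s3_error_ids(meta))  # → ('EXAMPLE123', 'EXAMPLEextendedId==')
```

If this is the right information to collect, we can start logging both IDs whenever a write fails.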
Thank you.