When AppFlow writes parquet to S3, can it maintain the source datatypes or not?


In the AppFlow UI, the option underneath Parquet ("Preserve source data types in Parquet format") appears to indicate that maintaining the source datatypes is possible. [screenshot omitted]

However, in documentation (https://docs.aws.amazon.com/appflow/latest/userguide/s3.html), it states:

If you choose Parquet as the format for your destination file in Amazon S3, the option to aggregate all records into one file per flow run will not be available. When choosing Parquet, Amazon AppFlow will write the output as string, and not declare the data types as defined by the source.

These two sources conflict with each other. The behavior I am seeing matches the documentation: all data is being written as string type. I am trying to determine whether this is intended or a bug. If the latter, I can open a support ticket.

tjtoll
Asked 1 year ago · 617 views
1 Answer

Hi there. As the documentation states, when you choose Parquet, Amazon AppFlow writes the output as string and does not declare the data types defined by the source. In other words, regardless of the source schema, every column in the output is written as a string, and no other data type is declared. I hope that clarifies things. If you still have questions, please feel free to reach us via a support case. Thank you!

AWS
Support Engineer
Answered 1 year ago
  • What is the option "Preserve source data types in Parquet format" for? Trying to understand if I can keep source data types somehow.
