1 Answer
Hello,
I would like to inform you that MONEY[] is not supported because the JDBC driver for PostgreSQL cannot handle those types properly. To process this kind of data, please cast the column to a decimal type and verify the result using printSchema().
For a DynamicFrame you can use ApplyMapping:
## To verify the data type of the column
datasource0.printSchema()
## Cast it to decimal using ApplyMapping
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("v1", "string", "v1", "string"), ("XX_threshold", "double", "XX_threshold", "decimal(25,10)")], transformation_ctx = "applymapping1")
## Verify whether the column was cast to Decimal or not
applymapping1.printSchema()
applymapping1.toDF().show()
For a DataFrame you can use the cast function:
from pyspark.sql.types import *
from pyspark.sql.functions import col
df = datasource0.toDF()
df2 = df.withColumn("age", col("age").cast("decimal(25,10)"))
df2.printSchema()
df2.show()
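If the JDBC driver rejects the MONEY value before Spark ever gets to run the cast, another workaround is to have the column come back as text and convert it afterwards. A minimal plain-Python sketch of that conversion (the `parse_money` helper and the "$1,234.56" currency format are assumptions for illustration, not part of any Glue API):

```python
from decimal import Decimal

def parse_money(text):
    # Strip the currency symbol and thousands separators,
    # then build an exact Decimal from the remaining digits.
    return Decimal(text.replace("$", "").replace(",", ""))

print(parse_money("$1,234.56"))  # 1234.56
```

In PySpark the same idea would be applied with a UDF or a regexp_replace followed by a decimal cast.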
Answered 2 years ago
Thank you for your response. However, this doesn't work. I tried ApplyMapping before (even casting to string), and executing applymapping1.toDF().show() gives me the same error:
... Caused by: org.postgresql.util.PSQLException: Bad value for type double : ...
It seems that even when applying transformations, AWS Glue keeps the original data type from the crawler; at least that's the behaviour I have experienced so far.
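Since the error is thrown while the JDBC driver is reading the rows, casting after the read cannot help. One approach worth trying is to push the cast into PostgreSQL itself by reading through a subquery, so the driver never sees the MONEY type. A hedged sketch (the "payments" table and "amount" column are hypothetical, and the spark.read.jdbc call is commented out because it needs a live database):

```python
# Build a PostgreSQL subquery that casts a MONEY column to numeric
# on the database side, so the JDBC driver only sees numeric data.
def money_cast_subquery(table, money_col):
    return ("(SELECT *, {col}::numeric(25,10) AS {col}_num "
            "FROM {tbl}) AS sub").format(col=money_col, tbl=table)

query = money_cast_subquery("payments", "amount")
print(query)
# Would be passed as the table of a JDBC read (requires a live database):
# df = spark.read.jdbc(url=jdbc_url, table=query, properties=conn_props)
```

The subquery-as-table trick is standard Spark JDBC usage; whether your Glue job can take this path depends on whether it reads through a plain JDBC connection rather than the crawler-defined table.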