1 Answer
Hello,
I would like to inform you that MONEY[] is not supported because the JDBC driver for PostgreSQL can't handle those types properly. To process this kind of data, please cast the column to the decimal type and verify the result using printSchema().
For a DynamicFrame you can use ApplyMapping:
## To verify the data type of the column
datasource0.printSchema()
## Cast it to decimal using ApplyMapping
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("v1", "string", "v1", "string"),("XX_threshold", "double", "XX_threshold", "decimal(25,10)")], transformation_ctx = "applymapping1")
## Verify the schema to confirm the column was cast to decimal
applymapping1.printSchema()
applymapping1.toDF().show()
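For completeness, datasource0 in the snippet above would typically be a DynamicFrame read from the Glue Data Catalog. A minimal sketch, assuming hypothetical database and table names:
from pyspark.context import SparkContext
from awsglue.context import GlueContext
glueContext = GlueContext(SparkContext.getOrCreate())
## Hypothetical catalog names for illustration; replace with your own
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "example_db",
    table_name = "example_table",
    transformation_ctx = "datasource0")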
For a DataFrame you can use the cast function:
from pyspark.sql.types import *
from pyspark.sql.functions import col
df = datasource0.toDF()
df2 = df.withColumn("XX_threshold", col("XX_threshold").cast("decimal(25,10)"))
df2.printSchema()
df2.show()
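If you need to continue with Glue transforms or sinks after the cast, you can convert the DataFrame back into a DynamicFrame. A short sketch, assuming the glueContext from the job setup:
from awsglue.dynamicframe import DynamicFrame
## Wrap the casted DataFrame so Glue writers can consume it
dyf2 = DynamicFrame.fromDF(df2, glueContext, "dyf2")
dyf2.printSchema()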
answered 2 years ago
Thank you for your response. However, this doesn't work. I tried ApplyMapping before (even casting to string), and executing applymapping1.toDF().show() gives me the same error:
... Caused by: org.postgresql.util.PSQLException: Bad value for type double : ...
It seems that even when applying transformations, AWS Glue keeps the original data type from the crawler; at least that's the behavior I have experienced so far.
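That behavior would make sense if the error is raised while the JDBC driver reads the column, before any Glue transformation runs, so a cast applied afterwards cannot help. One hedged workaround, not confirmed in this thread, is to push the cast down to PostgreSQL at read time via Spark's JDBC query option. A minimal sketch, with placeholder connection details and the table/column names reused from the examples above:
from pyspark.context import SparkContext
from awsglue.context import GlueContext
glueContext = GlueContext(SparkContext.getOrCreate())
spark = glueContext.spark_session
df = (spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/example_db")  ## placeholder
    .option("user", "example_user")  ## placeholder
    .option("password", "example_password")  ## placeholder
    .option("driver", "org.postgresql.Driver")
    ## Cast server-side so the driver never parses the money value as double
    .option("query", "SELECT v1, xx_threshold::numeric(25,10) AS xx_threshold FROM example_table")
    .load())
df.printSchema()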