Yes, if you configured the crawler appropriately. See https://docs.aws.amazon.com/glue/latest/dg/crawler-configuration.html for details.
Also, you may not need to run the crawler separately at all; you can instead update the catalog as part of your PySpark script. See https://docs.aws.amazon.com/glue/latest/dg/update-from-job.html
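As a rough sketch of that second approach (the S3 paths, database, table, and column names below are placeholders, not taken from the question), a Glue job can update the Data Catalog while it writes, so no separate crawler run is needed:

```python
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Hypothetical data standing in for whatever the job actually produces.
df = spark.createDataFrame([(1, 2.5)], ["id", "amount"])
dynamic_frame = DynamicFrame.fromDF(df, glue_context, "output")

# Writing through a Glue sink with enableUpdateCatalog=True lets the job
# create or update the catalog table itself during the write.
sink = glue_context.getSink(
    connection_type="s3",
    path="s3://my-bucket/my-prefix/",  # placeholder output location
    enableUpdateCatalog=True,
    updateBehavior="UPDATE_IN_DATABASE",
    partitionKeys=[],
)
sink.setCatalogInfo(catalogDatabase="my_db", catalogTableName="my_table")
sink.setFormat("glueparquet")  # a format that supports catalog schema updates
sink.writeFrame(dynamic_frame)
```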
Decimal and double are not compatible types in the way they are serialized; even decimals with different precisions are not.
Whichever type the crawler decides to use for the table, the table will likely be broken if the files contain a mixture of those types, and some numbers will be read incorrectly.
To make that kind of change, recreate the table with the new type after deleting the old files.
Also, if you generate the data in your own job, the job should update the table while writing; there is no point in waiting for a crawler to guess when the job already knows the schema.
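As a minimal sketch of the recreate-with-one-type approach (the column name `amount`, the paths, and the choice of `double` are assumptions for illustration), cast the mixed column to a single type before writing so that every output file serializes the value identically:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read the existing data (placeholder path).
df = spark.read.parquet("s3://my-bucket/old-prefix/")

# Cast the mixed column to one agreed-on type; mixing decimal and double
# across files is what breaks the table.
df = df.withColumn("amount", F.col("amount").cast("double"))

# Write to a fresh location, then recreate the table pointing at it.
df.write.mode("overwrite").parquet("s3://my-bucket/new-prefix/")
```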