Hey! I currently have a setup with an AWS Glue crawler that connects to a PostgreSQL database over JDBC. This works, and the crawler generates around 20 tables for this database.
I now want to create an ETL job that extracts the data from one of those tables to an S3 bucket and makes the S3 data queryable in Athena. My Glue ETL flow looks like this:
[Glue Studio visual flow: PostgreSQL table source → ApplyMapping → S3 bucket target]
This seems to work, except that no table is created in the Glue Data Catalog database. The S3 target location contains the Parquet files and the job succeeds, but there is no table.
The auto-generated Spark code looks like this:
# Write the mapped frame to S3 as Parquet and (supposedly) create/update
# the catalog table, since enableUpdateCatalog is set
S3bucket_node3 = glueContext.getSink(
    path="s3://data-lake",
    connection_type="s3",
    updateBehavior="UPDATE_IN_DATABASE",
    partitionKeys=[],
    enableUpdateCatalog=True,  # should create the table on the first run
    transformation_ctx="S3bucket_node3",
)
# Target database and table name in the Glue Data Catalog
S3bucket_node3.setCatalogInfo(
    catalogDatabase="postgres_glue_database", catalogTableName="tableName"
)
S3bucket_node3.setFormat("glueparquet")
S3bucket_node3.writeFrame(ApplyMapping_node2)
job.commit()
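To rule out a console display issue, I also queried the catalog directly with boto3. This is just a quick sketch (assuming default credentials, with the database name taken from setCatalogInfo above), and the table really isn't listed:

import boto3

glue = boto3.client("glue")

# List the tables the catalog actually contains for the target database;
# the table the job is supposed to create never shows up here
response = glue.get_tables(DatabaseName="postgres_glue_database")
print([table["Name"] for table in response["TableList"]])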
Does anybody have any idea where to look? There seems to be nothing wrong with the connection, crawler, or bucket permissions. It's just not creating a table for the data it has written to the bucket.
I tried:
- recreating the bucket / roles
- giving the table other names
- adding and removing additional input arguments
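The only fallback I can think of (which I'd rather avoid, since enableUpdateCatalog is supposed to handle this) is pointing a second crawler at the S3 path so it builds the table from the written Parquet files. A rough boto3 sketch of that idea, with a placeholder crawler name and role ARN:

import boto3

glue = boto3.client("glue")

# Hypothetical fallback: have a crawler create the table from the Parquet
# files instead of relying on enableUpdateCatalog in the job itself
glue.create_crawler(
    Name="s3-parquet-fallback-crawler",                # placeholder name
    Role="arn:aws:iam::123456789012:role/MyGlueRole",  # placeholder role ARN
    DatabaseName="postgres_glue_database",
    Targets={"S3Targets": [{"Path": "s3://data-lake/"}]},
)
glue.start_crawler(Name="s3-parquet-fallback-crawler")

But I'd still like to understand why the getSink approach doesn't create the table on its own.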
Thanks in advance!