1 Answer
In a visual job, you can use an S3 source and specify JSON format (with an optional JsonPath), or do the same by reading from a JSON table that you build with a crawler.
Once the source reads the data the way you need it, you can store it in DynamoDB and use it as the source for other jobs.
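Conceptually, what the JSON source node does is read each record and flatten its nested structure into flat columns, so downstream transforms see an ordinary table. A minimal stand-alone sketch of that flattening step (plain Python, not the Glue API; the sample record is invented for illustration):

```python
import json

def flatten(record, prefix=""):
    """Recursively flatten a nested JSON object into a single-level dict,
    joining nested keys with dots (e.g. {"a": {"b": 1}} -> {"a.b": 1})."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=name + "."))
        else:
            flat[name] = value
    return flat

# A made-up JSON record, as it might arrive from S3
raw = '{"id": 1, "user": {"name": "alice", "city": "Rome"}, "score": 42}'
row = flatten(json.loads(raw))
print(row)
```

Each flattened dict corresponds to one row of the structured table; in Glue the same effect comes from the source's format options rather than hand-written code.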
Thanks for your response! But sorry, I think I should give you my example flow when using table-form (structured) data:
S3 (csv) [Sources] -> Select fields [Transforms] -> SQL Query [Transforms] -> S3 (csv) [Targets]
But now the source data is in JSON format, which I need to transform into a table/structured data so I can do my data transformation with the SQL Query.
I'm new to AWS; I usually use ETL tools like Pentaho/Talend, so I prefer the visual job over the script one.
The source transforms the data into a structured in-memory table, which you can see in the "Data preview" panel; from there you can transform it, run SQL, or whatever else you need.
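Once the source has produced that in-memory table, the SQL Query transform behaves like ordinary SQL over it. A rough local analogue using SQLite (the table name, columns, and rows are invented for illustration; Glue actually runs Spark SQL, not SQLite):

```python
import sqlite3

# Hypothetical flattened rows, as the visual job's source node would expose them
rows = [
    ("alice", "Rome", 42),
    ("bob", "Milan", 17),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, city TEXT, score INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# Equivalent of the "SQL Query" transform node in the visual job
result = conn.execute(
    "SELECT city, SUM(score) AS total FROM events GROUP BY city ORDER BY city"
).fetchall()
print(result)
```

The point is only that by the time the SQL node runs, the JSON shape is gone: the query operates on named columns exactly as it would for a CSV source.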