1 Answer
Yes, you would need to run the crawler again for the changes to be picked up and updated in the Glue Data Catalog tables. When the Glue crawler runs, it analyzes the data sources (folders, files, etc.) you specify and generates tables in the Glue Data Catalog based on the underlying schemas/data it detects. These catalog tables are metadata that ETL jobs, Athena queries, and so on can then use.

The crawler only analyzes data and creates or updates tables during an actual crawler run; any changes made to the underlying data sources will not be reflected in the Data Catalog until the crawler runs again. So in your case, re-run the crawler so the existing Glue job can see the new file/schema structure.
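A minimal sketch of triggering that re-run programmatically with boto3, assuming you have AWS credentials configured and a crawler name of your own (`my-crawler` below is a placeholder); it starts the crawler and polls until it returns to the READY state, at which point the catalog tables reflect the new schema:

```python
import time


def rerun_crawler(crawler_name, poll_seconds=30):
    """Start a Glue crawler and wait until it finishes its run.

    Assumes AWS credentials and region are configured in the
    environment; crawler_name must be an existing crawler.
    """
    import boto3  # imported here so the module loads without boto3 installed

    glue = boto3.client("glue")
    glue.start_crawler(Name=crawler_name)

    while True:
        state = glue.get_crawler(Name=crawler_name)["Crawler"]["State"]
        # States cycle RUNNING -> STOPPING -> READY during a crawl
        if state == "READY":
            break
        time.sleep(poll_seconds)


# Example usage (placeholder crawler name):
# rerun_crawler("my-crawler")
```

You could call this at the start of your Glue job pipeline (or schedule the crawler itself) so the catalog is refreshed before the job reads the tables.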