
AWS Glue crawler creating multiple tables

0

I have the following S3 structure and want to crawl the Parquet files:

bucket/basefolder
    subfolder1
        logfolder
            log1.json
        file1.parquet
    subfolder2
        logfolder
            log2.json
        file2.parquet
        file3.parquet

I used the exclude pattern below to skip the unwanted log files:

**/logfolder/**

These are the tables the crawler created:

file1.parquet
file2.parquet
file3.parquet

How do I get just one table? All these Parquet files have exactly the same schema, and there are no partitions. I have also ticked the checkbox "Create a single schema for each S3 path" in the crawler settings, but the result is the same.
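For reference, the exclude pattern above can also be supplied programmatically when defining the crawler. A minimal sketch using boto3 (the crawler name, role ARN, and database name are placeholders, not from the thread):

```python
# Hypothetical request parameters for glue.create_crawler; replace the
# Name, Role, and DatabaseName values with your own.
crawler_params = {
    "Name": "parquet-crawler",
    "Role": "arn:aws:iam::123456789012:role/GlueCrawlerRole",
    "DatabaseName": "my_db",
    "Targets": {
        "S3Targets": [
            {
                "Path": "s3://bucket/basefolder/",
                # Same glob as in the question: skip every logfolder subtree.
                "Exclusions": ["**/logfolder/**"],
            }
        ]
    },
}

# To actually create the crawler (requires AWS credentials):
# import boto3
# boto3.client("glue").create_crawler(**crawler_params)
```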

Asked 3 years ago · 4.7K views
3 Answers
0

This might be due to a data compatibility issue. By default, when a crawler defines tables for data stored in Amazon S3, it considers both data compatibility and schema similarity. Since you already selected the option "Create a single schema for each S3 path", schema similarity is ignored in this case, but the crawler still checks for data compatibility. Please see here for more information: https://docs.aws.amazon.com/glue/latest/dg/crawler-configuration.html#crawler-grouping-policy

If the crawler finds that the data is not compatible, even in a single file, it will create a table for each file. Please open a support case with the Glue team and provide the crawler name, Region, and sample data (if possible) so we can troubleshoot further.
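The "Create a single schema for each S3 path" checkbox corresponds to the `CombineCompatibleSchemas` grouping policy in the crawler's Configuration JSON (documented on the crawler-configuration page linked above). A sketch of applying it with boto3 (the crawler name is a placeholder):

```python
import json

# Configuration JSON equivalent to ticking "Create a single schema for
# each S3 path" in the console.
configuration = json.dumps({
    "Version": 1.0,
    "Grouping": {"TableGroupingPolicy": "CombineCompatibleSchemas"},
})

# To apply it to an existing crawler (requires AWS credentials):
# import boto3
# boto3.client("glue").update_crawler(
#     Name="parquet-crawler",  # placeholder name
#     Configuration=configuration,
# )
```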

AWS
Answered 3 years ago
  • I have tried checking and unchecking that option (Create a single schema for each S3 path); the result is the same.

0

You are essentially requesting the same schema twice, since you have already selected the single-schema option. Crawlers consider both data compatibility and schema similarity; because your data is compatible, the crawler should not create additional tables. If, however, the data were not compatible, it would create a table for each file.

AWS
Answered 3 years ago
  • I have tried checking and unchecking that option (Create a single schema for each S3 path); the result is the same.

0

You may want to set up a workflow with a Python shell job that rebuilds your file structure to have bucket/basefolder/logfolder, then have a crawler in the same workflow crawl the bucket structure. The confusion comes from the crawler not knowing how to jump between the directories like that. You can keep the subfolders as partitions, but that is probably not required. You might also need to set a table level for the crawler, as shown at the bottom of this doc page: https://docs.aws.amazon.com/glue/latest/dg/crawler-configuration.html
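The table-level setting mentioned above also goes in the crawler's Configuration JSON. The level counts folder depth from the bucket (the bucket itself is level 1), so for the layout in the question a value of 2 would treat `bucket/basefolder` as the single table; the exact value is an assumption based on that layout and should be adjusted to your own depth:

```python
import json

# Table-level configuration: tell the crawler which folder depth is a
# table, instead of letting it decide per file. Level 2 = bucket/basefolder
# for the structure shown in the question (assumption; adjust as needed).
table_level_config = json.dumps({
    "Version": 1.0,
    "Grouping": {"TableLevelConfiguration": 2},
})

# Passed as the Configuration argument of create_crawler/update_crawler,
# e.g. boto3.client("glue").update_crawler(Name=..., Configuration=table_level_config)
```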

Answered 3 years ago
