1 Answer
That's a fair point about clarifying the documentation. The history is based on the historical metric: if the rule wasn't applied on a run, no metric was created for that run, and the rule cannot backprocess the data to reconstruct the missing history.
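For reference, here is a minimal sketch of how a dynamic rule that depends on historical metrics is typically declared in a Glue ETL job. The table, column names, and the evaluation context `my_dq_context` are illustrative assumptions, not taken from the original question; the key point is that an expression such as `avg(last(3))` compares the current run against metrics recorded by previous runs of the same rule, so history only starts accumulating once the rule is actually present in the ruleset.

```python
# Sketch (assumed names): evaluate a DQDL ruleset with one dynamic rule.
# The dynamic rule RowCount > avg(last(3)) only has history from runs in
# which this rule was included and evaluated.
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsgluedq.transforms import EvaluateDataQuality

glue_context = GlueContext(SparkContext.getOrCreate())

# Hypothetical source table.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

ruleset = """
Rules = [
    RowCount > avg(last(3)),
    Completeness "order_id" > 0.95
]
"""

results = EvaluateDataQuality.apply(
    frame=dyf,
    ruleset=ruleset,
    publishing_options={
        "dataQualityEvaluationContext": "my_dq_context",
        "enableDataQualityCloudWatchMetrics": True,
        "enableDataQualityResultsPublishing": True,
    },
)
```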
Hi, can you please see if you have any thoughts on this question? https://repost.aws/questions/QU88DiiUZ1STmIjySismV2Ow/how-does-aws-glue-data-quality-custom-sql-work-with-no-unique-column
Yep, it sounds like it thinks the average is 100; not sure why. What did it say on the previous evaluations?
How did you make it process different files on different runs? Do you have any count confirming the job actually read them all? (Maybe it has thrown away invalid rows.)
This worked on the 5th run. I think the issue is that this rule was not present in my first 3 runs; I only added it in the 4th run, so it didn't take the earlier runs into consideration. But this is not mentioned anywhere in the documentation, and there is no mention of how Glue stores the 'state' of runs.
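If it helps, one way to see which runs actually recorded metrics (and therefore contribute to the history a dynamic rule sees) is to list past data quality results through the Glue API. A rough sketch using boto3, assuming a hypothetical job name; only runs that evaluated the ruleset show up here, which matches the behaviour described above:

```python
# Sketch (assumed job name): list recent Glue Data Quality results and the
# metrics each run recorded. Runs made before the rule existed have no
# corresponding result, so they cannot feed the dynamic-rule history.
import boto3

glue = boto3.client("glue")

resp = glue.list_data_quality_results(Filter={"JobName": "my-etl-job"})

for summary in resp.get("Results", []):
    result = glue.get_data_quality_result(ResultId=summary["ResultId"])
    print(result.get("JobRunId"), "score:", result.get("Score"))
    for rule in result.get("RuleResults", []):
        print("  ", rule.get("Name"), rule.get("Result"), rule.get("EvaluatedMetrics"))
```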