First of all, let us understand that Lambda event filters are not designed to evaluate the same message multiple times: after every successful single/batch invocation the matched messages are deleted from the queue, and messages that do not match the filter are dropped as well (they are only retried if an invocation error occurs).
This is needed to avoid reprocessing unmatched messages over and over; otherwise they would clog your batch and the filter would run endlessly without finding suitable records at the head of the queue.
Then why do we have this filter, you may ask? It exists to make sure we process only the right messages, saving compute time and cost, since SQS cannot filter messages by itself.
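As a sketch of how such a filter behaves, the pattern below is hypothetical (the `orderType` attribute and its values are made up for illustration), and the matcher is only a simplified local approximation of the event source mapping's filtering; in a real SQS event the `body` field is a JSON string that Lambda parses before matching:

```python
# Hypothetical filter pattern, as would be attached to an SQS -> Lambda
# event source mapping via FilterCriteria. Only messages whose body
# carries orderType == "priority" would invoke the function.
FILTER_PATTERN = {"body": {"orderType": ["priority"]}}

def matches(pattern: dict, message: dict) -> bool:
    """Simplified filter matching: every pattern key must exist in the
    message; nested dicts recurse, and a leaf list means 'the message
    value must be one of these'."""
    for key, rule in pattern.items():
        if key not in message:
            return False
        if isinstance(rule, dict):
            if not isinstance(message[key], dict) or not matches(rule, message[key]):
                return False
        elif isinstance(rule, list):
            if message[key] not in rule:
                return False
    return True

msg_a = {"body": {"orderType": "priority", "id": 1}}
msg_b = {"body": {"orderType": "standard", "id": 2}}
print(matches(FILTER_PATTERN, msg_a))  # True  -> Lambda is invoked
print(matches(FILTER_PATTERN, msg_b))  # False -> message is dropped
```

The key point this illustrates: a non-matching message like `msg_b` never reaches your function, and per the behavior described above it is dropped rather than left in the queue.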
Also, SQS is better suited to a single-consumer architecture; if you would like to have another consumer, it is better to have multiple SQS queues.
To get separate queues, you can first publish an EventBridge event with the same payload, then use an **EventBridge rule** to route the message to the right SQS queue, and finally apply a filter pattern with a single/batch invocation strategy.
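The fan-out idea above can be sketched as follows. The rule patterns and queue names are hypothetical, and the matcher only mimics EventBridge's rule matching locally; in a real setup the rules and queue targets are configured in EventBridge itself:

```python
# Hypothetical EventBridge-style rules: each rule has a pattern and a
# target queue. An incoming event is copied to every queue whose rule
# matches, and each queue then feeds its own consumer independently.
RULES = [
    {"pattern": {"detail-type": ["OrderCreated"]}, "queue": "orders-queue"},
    {"pattern": {"detail-type": ["PaymentReceived"]}, "queue": "payments-queue"},
]

def rule_matches(pattern: dict, event: dict) -> bool:
    # Simplified matching: every pattern key must be present in the
    # event and its value must be one of the allowed values.
    return all(event.get(k) in v for k, v in pattern.items())

def route(event: dict) -> list:
    """Return the queues that would receive a copy of this event."""
    return [r["queue"] for r in RULES if rule_matches(r["pattern"], event)]

print(route({"detail-type": "OrderCreated", "detail": {"id": 7}}))
# ['orders-queue']
```

This keeps each consumer on its own queue, so one consumer's filtering and deletion behavior never interferes with another's.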
In addition to this answer, there is a small sentence in the feature announcement that I had missed. It states:

> an event/payload not matching any of the filtering criteria will be dropped

This behavior is expected and reinforces the idea that the purpose of the filter is to process only the wanted messages and reduce unneeded Lambda invocations.