Regarding the Express workflow, I was referring only to what you do inside the loop, not to the entire state machine.
You are correct that the nested route just delays the issue. It is a good solution when you have a maximum number of iterations, which is not the case in your state machine.
Given your situation, I would say the best approach is to count the number of iterations. Do some checking to find out how many events each loop iteration emits, and start a new state machine execution when you reach the limit.
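As a sketch of that counting approach (all names and numbers except the documented 25,000-event history quota for Standard workflows are hypothetical; measure the per-iteration event count from a real execution), the loop can carry an iteration counter and hand off to a fresh execution before the limit is reached:

```python
# Sketch of the iteration-counting approach (hypothetical names/values).
# A Standard workflow's history is capped at 25,000 events, so we stop
# well short of that and let a new execution take over.

HISTORY_LIMIT = 25000          # documented Standard workflow quota
EVENTS_PER_ITERATION = 12      # measure this for your own loop
SAFETY_MARGIN = 500            # leave room for the handoff states


def max_iterations(events_per_iteration=EVENTS_PER_ITERATION,
                   limit=HISTORY_LIMIT,
                   margin=SAFETY_MARGIN):
    """How many loop iterations fit before the history limit is at risk."""
    return (limit - margin) // events_per_iteration


def should_hand_off(iteration):
    """Decide, inside the loop, whether to start a fresh execution instead
    of running another iteration in this one."""
    return iteration >= max_iterations()
```

Because different states emit different numbers of history events, `EVENTS_PER_ITERATION` should come from inspecting one real execution's history rather than from guessing.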
Another approach might be to use EventBridge Scheduler. When you need to start an execution, you create a repeating schedule that invokes your state machine. The state machine runs a single iteration and exits, and once the work is done it deletes the schedule. This will work only if your wait state waits in increments of minutes.
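A sketch of that Scheduler variant, assuming `boto3` and hypothetical names (the schedule triggers one iteration per tick, and the final iteration deletes its own schedule):

```python
def make_rate_expression(minutes):
    """Build an EventBridge Scheduler rate expression. Rate expressions use
    whole minutes (or longer units), which is why this approach only fits
    wait states with minute granularity."""
    unit = "minute" if minutes == 1 else "minutes"
    return f"rate({minutes} {unit})"


def create_polling_schedule(name, state_machine_arn, role_arn, minutes):
    """Create a repeating schedule that starts the state machine."""
    # boto3 is imported lazily so the sketch can be read without AWS access.
    import boto3
    scheduler = boto3.client("scheduler")
    scheduler.create_schedule(
        Name=name,
        ScheduleExpression=make_rate_expression(minutes),
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            "Arn": state_machine_arn,   # Scheduler can start executions directly
            "RoleArn": role_arn,        # role must allow states:StartExecution
        },
    )


def delete_polling_schedule(name):
    """Called by the last iteration once the session is over."""
    import boto3
    boto3.client("scheduler").delete_schedule(Name=name)
```

Each scheduled invocation is a brand-new execution, so the history limit resets on every tick; the per-session state has to live outside the state machine (for example in the execution input or a table).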
There is no simple way to get the number of events in the execution history. What I would recommend is that you use nested workflows to start with. I am not familiar with your state machine, but I assume it has some sort of loop. In that case, either use the new Distributed Map state, which runs each iteration in its own Map Run with its own history limit, or invoke a nested workflow, which also has its own limit. Furthermore, if appropriate for your use case, you can use Express workflows for the nested ones, which have no history limit at all (they save their history to CloudWatch Logs).
To add on above, here you can see the recommendation: https://docs.aws.amazon.com/step-functions/latest/dg/bp-history-limit.html
Yes, my Step Function (see image) is essentially a big loop for monitoring IoT devices' status updates. The idea is that the Step Function keeps looping for as long as the device is in use: basically, there is one state machine execution for each device usage "session", which can last up to 24 hours.
I'm afraid Express workflows are not suitable for my case (mainly because their executions can last at most five minutes).
I think that nesting workflows would mitigate the problem but not solve it, as it would slow down (but not stop) the growth of the execution history.
I'm not necessarily looking for a "simple" way to get the number of events in the execution history: I'm "just" looking for an "efficient" one.
Another quick and (very) dirty way to solve the problem would be to manually increment a counter each time a state is traversed, but I'd like to use a cleaner approach (the information I'm looking for must already be stored somewhere, since it is what triggers the execution failure).
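For what it's worth, the event count can be read back from the history itself via the `GetExecutionHistory` API, though that means paging through the whole history on every check (a sketch, assuming `boto3`; the helper names are hypothetical):

```python
def count_events(pages):
    """Count events across GetExecutionHistory response pages.

    Each page is a dict with an "events" list, as returned by the
    boto3 paginator for get_execution_history.
    """
    return sum(len(page["events"]) for page in pages)


def execution_event_count(execution_arn):
    """Total number of events recorded for one execution."""
    # boto3 is imported lazily so the sketch can be read without AWS access.
    import boto3
    sfn = boto3.client("stepfunctions")
    paginator = sfn.get_paginator("get_execution_history")
    return count_events(paginator.paginate(executionArn=execution_arn))
```

Note that this is O(history length) per call, which grows as the loop runs; that cost is one reason the "dirty" explicit counter can end up being the cheaper option in practice.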
Thanks. I think I'll modify my step function so that:
Just note that not every state produces the same number of history events: some states add two entries, others add more.