Cron job Elastic Beanstalk issue


Here is my Python code for two cron jobs:

    scheduler.add_job(
        func=process_users_main,
        trigger="cron",
        minute="0/30"
    )
    scheduler.add_job(
        func=func2,
        trigger="cron",
        hour="0/3",
        minute=15
    )

I am currently hosting this on AWS Elastic Beanstalk.

This works fine for about 24 hours, but then the system just seems to crash with this error: Execution of job "main (trigger: cron[minute='0/30'], next run at: 2023-08-10 17:30:00 UTC)" skipped: maximum number of running instances reached (1)

For reference, these cron jobs are not computationally heavy (they are quick to run). What I do not understand is why it works for about 24 hours before it breaks.

Should I consider using AWS Lambda for these cron jobs instead?

Here is my configuration for the scaling of the Elastic Beanstalk instances: [screenshots of the Elastic Beanstalk capacity/auto scaling configuration]

Charles
Asked 9 months ago · Viewed 394 times
2 Answers

This could be an Auto Scaling group issue. How many instances are running when the 2 cron jobs execute, and how many instances does the ASG allow? https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.ec2.html

Answered 9 months ago
  • Is there any way for me to check how/why Elastic Beanstalk was trying to autoscale? I do not understand why it was working perfectly fine for almost 24 hours and then suddenly needed to auto scale (and could not), so the cron job/scheduling function stopped working.

  • I also added my configuration to the above for the auto scaling. Would love any suggestions. Thanks


Current status: 1 min, 4 max

Can you check how many instances you have at the moment the error occurs?

Answered 9 months ago
  • Thanks for your reply. How would I do this (I did not set up the load balancer logs before)?
