Cron job Elastic Beanstalk issue


Here is my Python code for two cron jobs:

    from apscheduler.schedulers.background import BackgroundScheduler

    scheduler = BackgroundScheduler()
    scheduler.add_job(
        func=process_users_main,
        trigger="cron",
        minute="0/30"      # every 30 minutes, at :00 and :30
    )
    scheduler.add_job(
        func=func2,
        trigger="cron",
        hour="0/3",        # every 3 hours...
        minute=15          # ...at 15 minutes past the hour
    )
    scheduler.start()

I am currently hosting this on AWS Elastic Beanstalk.

This works fine for about 24 hours, but then the scheduler starts skipping runs with this error: Execution of job "main (trigger: cron[minute='0/30'], next run at: 2023-08-10 17:30:00 UTC)" skipped: maximum number of running instances reached (1)

For reference, these cron jobs are not very computationally heavy (are quite fast/easy to run). What I do not understand is why it works for about 24hrs before it breaks.

Should I consider using AWS lambda for these cron jobs instead?
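For context on the error message: "maximum number of running instances reached (1)" comes from APScheduler itself, not from EC2 or Auto Scaling. By default each job has max_instances=1, so when a previous run of the job is still executing (for example, stuck on a network call) at the moment the next trigger fires, the new run is skipped. A minimal stdlib sketch of that skip-if-still-running behavior (the names here are illustrative; APScheduler is not actually used):

```python
import threading
import time

def run_if_not_running(lock, job, skipped):
    """Mimic APScheduler's max_instances=1: skip the run if the
    previous run of the same job still holds the lock."""
    if lock.acquire(blocking=False):
        try:
            job()
        finally:
            lock.release()
    else:
        skipped.append(time.time())  # run skipped, like the error in the question

lock = threading.Lock()
skipped = []

def slow_job():
    time.sleep(0.2)  # simulate a run that hangs past the next trigger time

# First "trigger" starts the job in a thread; the second fires while
# the first is still running and therefore gets skipped.
t = threading.Thread(target=run_if_not_running, args=(lock, slow_job, skipped))
t.start()
time.sleep(0.05)
run_if_not_running(lock, slow_job, skipped)
t.join()

print(len(skipped))  # 1
```

If process_users_main can occasionally block, either fixing the hang, adding a timeout inside the job, or raising max_instances in add_job usually stops the skips. Moving to Lambda with an EventBridge schedule also sidesteps the problem, since each invocation is independent.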

Here is my configuration for the scaling of the Elastic Beanstalk instances (screenshots attached).

2 Answers

This could be an Auto Scaling group issue. How many instances are running when the two cron jobs execute, and how many instances does the ASG allow? https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.ec2.html
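If the AWS CLI is configured, one way to compare the current and allowed instance counts is to query the Auto Scaling group directly (the group name below is a placeholder; Elastic Beanstalk generates its own):

```shell
# Current capacity vs. min/max limits for each ASG
aws autoscaling describe-auto-scaling-groups \
    --query "AutoScalingGroups[].{Name:AutoScalingGroupName,Desired:DesiredCapacity,Min:MinSize,Max:MaxSize,Running:length(Instances)}" \
    --output table

# Recent scaling activities and the reason each one happened
aws autoscaling describe-scaling-activities \
    --auto-scaling-group-name my-eb-asg-name \
    --max-items 10
```

The second command also answers "why was it trying to scale": each activity record includes a cause string describing the alarm or policy that triggered it.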

answered 9 months ago
  • Is there any way for me to check how/why Elastic Beanstalk was trying to autoscale? I do not understand why it worked perfectly fine for almost 24 hours and then suddenly needed to auto scale (and could not), so the cron job/scheduling function stopped working.

  • I also added my auto scaling configuration above. Would love any suggestions. Thanks


Current status: 1 min, 4 max

Can you check how many instances you have when the error occurs?

answered 9 months ago
  • Thanks for your reply. How would I do this (I did not set up the load balancer logs before)?
