
Questions tagged with AWS Elastic Beanstalk


SQSD continuously crashing and restarting on Elastic Beanstalk Worker boxes

In my Elastic Beanstalk Worker environment, sqsd keeps crashing and restarting. We are running a Node.js app on "Node.js 16 running on 64bit Amazon Linux 2/5.5.3". The app had been working fine until today. Due to an issue on one box I had to reboot it, and when it came back up it just kept restarting my app and never successfully started the worker. I found the following in /var/log/messages pointing to sqsd crashing. Any ideas how I can fix this? I have tried terminating the instance, but that didn't fix it. It's a t2.large, so it should have enough memory.

```
Jun 23 19:12:35 ip-10-1-100-88 systemd: Stopped This is sqsd daemon.
Jun 23 19:12:35 ip-10-1-100-88 systemd: Starting This is sqsd daemon...
Jun 23 19:12:36 ip-10-1-100-88 sqsd: Version 2 of the Ruby SDK will enter maintenance mode as of November 20, 2020. To continue receiving service updates and new features, please upgrade to Version 3. More information can be found here: https://aws.amazon.com/blogs/developer/deprecation-schedule-for-aws-sdk-for-ruby-v2/
Jun 23 19:12:38 ip-10-1-100-88 systemd: Reloading.
Jun 23 19:12:41 ip-10-1-100-88 sqsd: /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.6.0/gems/aws-sqsd-3.0.4/bin/aws-sqsd:58:in `initialize': No such file or directory @ rb_sysopen - /var/run/aws-sqsd/default.pid (Errno::ENOENT)
Jun 23 19:12:41 ip-10-1-100-88 sqsd: from /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.6.0/gems/aws-sqsd-3.0.4/bin/aws-sqsd:58:in `open'
Jun 23 19:12:41 ip-10-1-100-88 sqsd: from /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.6.0/gems/aws-sqsd-3.0.4/bin/aws-sqsd:58:in `start'
Jun 23 19:12:41 ip-10-1-100-88 sqsd: from /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.6.0/gems/aws-sqsd-3.0.4/bin/aws-sqsd:83:in `launch'
Jun 23 19:12:41 ip-10-1-100-88 sqsd: from /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.6.0/gems/aws-sqsd-3.0.4/bin/aws-sqsd:111:in `<top (required)>'
Jun 23 19:12:41 ip-10-1-100-88 sqsd: from /opt/elasticbeanstalk/lib/ruby/bin/aws-sqsd:23:in `load'
Jun 23 19:12:41 ip-10-1-100-88 sqsd: from /opt/elasticbeanstalk/lib/ruby/bin/aws-sqsd:23:in `<main>'
Jun 23 19:12:42 ip-10-1-100-88 systemd: sqsd.service: control process exited, code=exited status=1
Jun 23 19:12:42 ip-10-1-100-88 systemd: Failed to start This is sqsd daemon.
Jun 23 19:12:42 ip-10-1-100-88 systemd: Unit sqsd.service entered failed state.
Jun 23 19:12:42 ip-10-1-100-88 systemd: sqsd.service failed.
Jun 23 19:12:42 ip-10-1-100-88 systemd: sqsd.service holdoff time over, scheduling restart.
```
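A hedged workaround for this class of failure, assuming the root cause is what the stack trace suggests: `/var/run` is a tmpfs on Amazon Linux 2, so the `/var/run/aws-sqsd` directory that sqsd writes its pid file into can vanish across a reboot. The user/group names and file paths below are assumptions, not confirmed platform details:

```
# Recreate the pid directory sqsd expects, then let systemd retry the unit.
sudo mkdir -p /var/run/aws-sqsd
sudo chown sqsd:sqsd /var/run/aws-sqsd   # assumes the daemon runs as user "sqsd"
sudo systemctl restart sqsd

# To survive future reboots, a tmpfiles.d rule can recreate it at boot
# (file name and mode are assumptions):
echo 'd /run/aws-sqsd 0755 sqsd sqsd -' | sudo tee /etc/tmpfiles.d/aws-sqsd.conf
```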
0
answers
0
votes
10
views
asked 5 days ago

Elastic Beanstalk can't connect to ElastiCache Redis

I'm having issues connecting from Elastic Beanstalk to ElastiCache Redis. When I SSH into the Elastic Beanstalk instance and try to use redis-cli to connect, it times out. This is how I set up my environment: I have an existing VPC with two subnets. I created a security group specifically for this that has an inbound rule for IPv4, Custom TCP, port 6379, source 0.0.0.0/0. I created an ElastiCache Redis cluster with the following relevant parameters:

* Cluster mode: disabled
* Location: AWS Cloud, Multi-AZ enabled
* Cluster settings: number of replicas - 2
* Subnet group settings: existing subnet group with two associated subnets
* Availability Zone placements: no preference
* Security: encryption at rest enabled, default key
* Security: encryption in transit enabled, no access control
* Selected security groups: the one I described above

As for the Elastic Beanstalk environment, it has this configuration:

* Platform: managed, Node.js 16 on Amazon Linux 2 5.5.3
* Instance settings: Public IP address UNCHECKED, both instance subnets checked
* Everything else left default

After getting all of that set up, I SSH into the instance and follow the directions here to install redis-cli and try to connect: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/GettingStarted.ConnectToCacheNode.html I've tried using the Primary endpoint, the Reader endpoint, and all of the individual node endpoints, but I get a timeout error for all of them. Is there some configuration that I'm missing?
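Two hedged things worth checking, given the setup described above (the endpoint below is a placeholder, not a real cluster name): encryption in transit is enabled on the cluster, so a plain `redis-cli` connection hangs unless the client speaks TLS, and that hang presents as exactly this kind of timeout.

```
# redis-cli only supports --tls when built with TLS support (Redis 6+);
# older builds from yum repos typically lack it.
redis-cli --tls -h my-cluster.xxxxxx.use1.cache.amazonaws.com -p 6379 ping

# If redis-cli lacks TLS support, openssl can at least prove the security
# groups and routing are fine:
openssl s_client -connect my-cluster.xxxxxx.use1.cache.amazonaws.com:6379
```

Separately, a tighter security-group rule referencing the Beanstalk instances' security group (rather than 0.0.0.0/0) is the usual way to make the network path easy to reason about.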
1
answers
0
votes
44
views
asked 15 days ago

yum makecache fails in prebuild in Elastic Beanstalk

Hello, I think I found a bug in the most recent version of Elastic Beanstalk for Ruby 2.7 running on 64bit Amazon Linux 2/3.4.7. When I attempt to update to 3.4.7, the command `yum makecache` fails in prebuild because the download checksums don't match. When I revert to 3.4.6, everything works fine. Since my code works on 3.4.6, and because I didn't see any changes in the [release notes](https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2022-05-26-linux.html) for 3.4.7 that would make me think my config had broken, I thought this might be a problem with the release. I wasn't sure where to file a bug report, so I thought I'd post this here in case anybody else runs into this problem. `cfn-init.log`:

```
2022-06-10 00:55:50,430 [INFO] Running config prebuild_0_company_app_staging
2022-06-10 00:55:52,335 [ERROR] Yum makecache failed. Output: Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
Not using downloaded amzn2-core/repomd.xml because it is older than what we have:
  Current   : Mon May 23 17:22:40 2022
  Downloaded: Thu May  5 02:54:28 2022
https://amazonlinux-2-repos-us-west-2.s3.dualstack.us-west-2.amazonaws.com/2/core/2.0/x86_64/6b0225ccc542f3834c95733dcf321ab9f1e77e6ca6817469771a8af7c49efe6c/repodata/other.sqlite.gz?instance_id=i-00ca3876aeb3b0e00&region=us-west-2: [Errno -1] Metadata file does not match checksum
Trying other mirror.
https://amazonlinux-2-repos-us-west-2.s3.dualstack.us-west-2.amazonaws.com/2/core/2.0/x86_64/6b0225ccc542f3834c95733dcf321ab9f1e77e6ca6817469771a8af7c49efe6c/repodata/filelists.sqlite.gz?instance_id=i-00ca3876aeb3b0e00&region=us-west-2: [Errno -1] Metadata file does not match checksum
Trying other mirror.
https://amazonlinux-2-repos-us-west-2.s3.dualstack.us-west-2.amazonaws.com/2/core/2.0/x86_64/6b0225ccc542f3834c95733dcf321ab9f1e77e6ca6817469771a8af7c49efe6c/repodata/filelists.sqlite.gz?instance_id=i-00ca3876aeb3b0e00&region=us-west-2: [Errno -1] Metadata file does not match checksum
Trying other mirror.

 One of the configured repositories failed (Amazon Linux 2 core repository),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=amzn2-core ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable amzn2-core
        or
            subscription-manager repos --disable=amzn2-core

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=amzn2-core.skip_if_unavailable=true

failure: repodata/filelists.sqlite.gz from amzn2-core: [Errno 256] No more mirrors to try.
https://amazonlinux-2-repos-us-west-2.s3.dualstack.us-west-2.amazonaws.com/2/core/2.0/x86_64/6b0225ccc542f3834c95733dcf321ab9f1e77e6ca6817469771a8af7c49efe6c/repodata/filelists.sqlite.gz?instance_id=i-00ca3876aeb3b0e00&region=us-west-2: [Errno -1] Metadata file does not match checksum
2022-06-10 00:55:52,336 [ERROR] Error encountered during build of prebuild_0_company_app_staging: Could not create yum cache (return code 1)
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 576, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 241, in build
    self._auth_config)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/rpm_tools.py", line 53, in apply
    raise ToolError("Could not create yum cache", cache_result.returncode)
cfnbootstrap.construction_errors.ToolError: Could not create yum cache (return code 1)
2022-06-10 00:55:52,336 [ERROR] -----------------------BUILD FAILED!------------------------
2022-06-10 00:55:52,336 [ERROR] Unhandled exception during build: Could not create yum cache (return code 1)
Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 176, in <module>
    worklog.build(metadata, configSets)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 137, in build
    Contractor(metadata).build(configSets, self)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 564, in build
    self.run_config(config, worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 576, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 241, in build
    self._auth_config)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/rpm_tools.py", line 53, in apply
    raise ToolError("Could not create yum cache", cache_result.returncode)
cfnbootstrap.construction_errors.ToolError: Could not create yum cache (return code 1)
```
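A hedged manual workaround on an affected instance (via SSH), assuming the failure is stale cached repo metadata rather than a genuinely broken release:

```
sudo yum clean all          # drop cached metadata for all repos
sudo rm -rf /var/cache/yum  # remove any leftover cache files
sudo yum makecache          # re-fetch; should now match the published checksums
```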
1
answers
1
votes
53
views
asked 19 days ago

OpenTelemetry with Elastic Beanstalk

Hi all, I'd like to set up AWS Open Distro for OpenTelemetry for Java with Elastic Beanstalk (https://aws-otel.github.io/docs/getting-started/java-sdk/trace-auto-instr). I've added the Java agent (using `java -javaagent:path/to/aws-opentelemetry-agent.jar -jar myapp.jar`). Starting the application, it logs a line like this every 15 seconds:

```
[otel.javaagent 2022-06-08 19:21:03:940 +0000] [OkHttp http://localhost:4317/...] ERROR io.opentelemetry.exporter.internal.grpc.OkHttpGrpcExporter - Failed to export spans. The request could not be executed. Full error message: Failed to connect to localhost/127.0.0.1:4317
```

So I guess the ADOT Collector (https://aws-otel.github.io/docs/getting-started/collector) isn't running. When I log into my instances (platform: Corretto 11 running on 64bit Amazon Linux 2/3.2.15), I can see the CloudWatch Agent under `/opt/aws/amazon-cloudwatch-agent` (version: 1.247350.0). It includes the Collector under `/opt/aws/amazon-cloudwatch-agent/cwagent-otel-collector`:

```
$ pwd
/opt/aws/amazon-cloudwatch-agent
$ ls
LICENSE  NOTICE  RELEASE_NOTES  THIRD-PARTY-LICENSES  bin  cwagent-otel-collector  doc  etc  logs  var
$ ls cwagent-otel-collector/
etc  logs  var
$ ls bin
CWAGENT_VERSION  amazon-cloudwatch-agent-config-wizard  config-downloader  cwagent-otel-collector  amazon-cloudwatch-agent  amazon-cloudwatch-agent-ctl  config-translator  start-amazon-cloudwatch-agent
```

I try to start the OTel Collector using `sudo bin/amazon-cloudwatch-agent-ctl -a start -m ec2 -o default`. It doesn't start, though. Here's the process status:

```
$ sudo bin/amazon-cloudwatch-agent-ctl -a status
{
  "status": "running",
  "starttime": "2022-06-09T07:12:30+0000",
  "configstatus": "configured",
  "cwoc_status": "stopped",
  "cwoc_starttime": "",
  "cwoc_configstatus": "configured",
  "version": "1.247350.0"
}
```

And the log:

```
$ tail cwagent-otel-collector/logs/cwagent-otel-collector.log
2022/06/09 07:09:20 E! User aoc does not exist: no matching entries in passwd file
2022/06/09 07:09:20 E! Failed to ChangeUser: no matching entries in passwd file
2022/06/09 07:09:20 no matching entries in passwd file
```

Has anyone gotten the OTel Collector working with Elastic Beanstalk? Any tips for how to configure this? -martin
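A hedged sketch based on the daemon log above: the collector appears to fail because it tries to drop privileges to a user named `aoc` that doesn't exist on the Beanstalk AMI. Creating a system user of that name (the useradd flags are assumptions) may let it start:

```
# Create the system user the collector tries to switch to, then retry.
sudo useradd --system --shell /sbin/nologin aoc
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a start -m ec2 -o default
```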
1
answers
0
votes
18
views
asked 20 days ago

60 Second Timeout on Elastic Beanstalk

I have a single-instance (NO load balancer) Docker container (NO proxy server) that times out at exactly sixty seconds no matter what I do. Yes, I'm aware of the many seemingly "duplicate" questions. I've been trying to solve this problem for 40+ hours. I've seen them all. Every single answer to these questions informs the user that they must change the settings of NGINX or the load balancer. However, I have NEITHER NGINX NOR a load balancer for the environment, yet it still times out. I am mostly convinced that this is an AWS bug. I have an endpoint titled `time_test` for the mini server I created. When I make a POST request to the endpoint, I get a timeout at exactly 60 seconds (the request throws an exception on my end). Here's the Python code to make the request:

```
import requests

url = f"http://...us-east-1.elasticbeanstalk.com/"
time_to_sleep = 65
url += f"time_test?time_to_sleep={time_to_sleep}"
response = requests.post(url=url, timeout=10000)
```

This throws an `HTTPSException` error, indicating that the server terminated the response, always at exactly 60 seconds. However, the logs show a successful response. My logs (specifically, "eb-docker/containers/eb-current-app/eb-blahblah-stdouterr.log") show `[01/Jun/2022 22:05:49] "POST /time_test?time_to_sleep=65 HTTP/1.1" 200 -` Note the `200` successful status code. I'm going to continue looking for an answer to this problem, which seemingly has none, and will report back if I find one. Any help with how to change the environment to accept >60 second requests would be greatly appreciated. Please don't reply, "You should have shorter request times." Not helpful or applicable. (Platform = Docker running on 64bit Amazon Linux 2/3.4.10)
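One hedged possibility worth ruling out: on the Docker-on-Amazon-Linux-2 platform, Beanstalk runs an nginx reverse proxy on the host by default unless the environment's proxy setting is explicitly set to none, and nginx's default `proxy_read_timeout` is exactly 60 seconds, which matches the symptom. If that proxy is in fact present, a drop-in like this (file name and values are illustrative assumptions) would raise the limit:

```
# .platform/nginx/conf.d/timeouts.conf -- merged into the platform's nginx
# config at deploy time; 300s is an arbitrary example value.
proxy_read_timeout 300s;
proxy_send_timeout 300s;
```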
1
answers
0
votes
33
views
asked a month ago

Amazon Linux 2 on Beanstalk isn't installing SQSD and prevents cron.yml from working

We're on solution stack "64bit Amazon Linux 2 v3.3.13 running PHP 7.4". The worker server spins up and unpacks "platform-engine.zip", but when it comes to setting up SQSD:

```
May 23 12:45:01 ip-172-31-12-195 su: (to sqsd) root on none
May 23 12:45:10 ip-172-31-12-195 aws-sqsd-monitor: restarting aws-sqsd...
May 23 12:45:10 ip-172-31-12-195 systemd: Starting (null)...
May 23 12:45:10 ip-172-31-12-195 su: (to sqsd) root on none
May 23 12:45:10 ip-172-31-12-195 systemd: Created slice User Slice of sqsd.
May 23 12:45:10 ip-172-31-12-195 systemd: Started Session c2 of user sqsd.
May 23 12:45:10 ip-172-31-12-195 aws-sqsd: Version 2 of the Ruby SDK will enter maintenance mode as of November 20, 2020. To continue receiving service updates and new features, please upgrade to Version 3. More information can be found here: https://aws.amazon.com/blogs/developer/deprecation-schedule-for-aws-sdk-for-ruby-v2/
May 23 12:45:13 ip-172-31-12-195 aws-sqsd: Cannot load config file. No such file or directory: "/etc/aws-sqsd.d/default.yaml" - (AWS::EB::SQSD::FatalError)
May 23 12:45:13 ip-172-31-12-195 systemd: aws-sqsd.service: control process exited, code=exited status=1
May 23 12:45:13 ip-172-31-12-195 systemd: Failed to start (null).
May 23 12:45:13 ip-172-31-12-195 systemd: Unit aws-sqsd.service entered failed state.
May 23 12:45:13 ip-172-31-12-195 systemd: aws-sqsd.service failed.
May 23 12:45:13 ip-172-31-12-195 systemd: Removed slice User Slice of sqsd.
```

I can't find anything online about this, so some help would be greatly appreciated.
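A hedged diagnostic, assuming this is a Worker-tier environment: the platform renders `/etc/aws-sqsd.d/default.yaml` from the environment's worker configuration during deployment, so the error usually means that step never ran or failed. Checking for the file and searching the engine log helps distinguish a failed deploy from a broken unit:

```
ls -l /etc/aws-sqsd.d/                    # should contain default.yaml on a worker
sudo grep -i sqsd /var/log/eb-engine.log  # look for the step that writes it
```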
1
answers
0
votes
39
views
asked a month ago

Elastic Beanstalk database migrations no longer running

I am hosting a Django site via Elastic Beanstalk, and I have a 01_migrate.sh file in .platform/hooks/postdeploy in order to migrate model changes to a Postgres database on Amazon RDS:

```
#!/bin/sh
source /var/app/venv/staging-LQM1lest/bin/activate
python /var/app/current/manage.py migrate --noinput
python /var/app/current/manage.py createsu
python /var/app/current/manage.py collectstatic --noinput
```

This used to work well, but now when I check the hooks log, although it appears to find the file, there is no output to suggest that the migrate command has been run. Previously I would get the following even if there were no new migrations:

```
2022/03/29 05:12:56.530728 [INFO] Running command .platform/hooks/postdeploy/01_migrate.sh
2022/03/29 05:13:11.872676 [INFO] Operations to perform:
Apply all migrations: account, admin, auth, blog, contenttypes, home, se_balance, sessions, sites, socialaccount, taggit, users, wagtailadmin, wagtailcore, wagtaildocs, wagtailembeds, wagtailforms, wagtailimages, wagtailredirects, wagtailsearch, wagtailusers
Running migrations:
No migrations to apply.
Found another file with the destination path 'favicon.ico'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path.
```

Whereas now I just get:

```
2022/05/23 08:47:49.602719 [INFO] Running command .platform/hooks/postdeploy/01_migrate.sh
Found another file with the destination path 'favicon.ico'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path.
```

I don't know what has occurred to make this change. Of potential relevance: eb deploy stopped being able to find the 01_migrate.sh file, so I had to move the folder and its contents up to the parent directory, and then it became able to find it again.
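A hedged first check, since the hook file was moved between directories: platform hooks must carry the executable bit, and a hook that is found but not executable can produce exactly this "Running command" line with no script output. Restoring the bit (and persisting it in git, if that's the deploy source) costs nothing to try:

```
chmod +x .platform/hooks/postdeploy/01_migrate.sh
# If deploying from a git repo, persist the bit in the index too:
git update-index --chmod=+x .platform/hooks/postdeploy/01_migrate.sh
```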
0
answers
0
votes
13
views
asked a month ago

How to configure stickiness and autoscaling in an Elastic Beanstalk application

Hello, we have an application running on Elastic Beanstalk that listens for client requests and returns a stream segment. We have some requirements for the application:

1) Client sessions should be sticky (all requests for a given session should go to the same EC2 instance) for a specified time, without any changes on the client side (we can't add cookie sending via the client). As per my understanding, the Application Load Balancer supports this, so I enabled stickiness in the load balancer. As per my understanding, load-balancer-generated cookies are managed by the load balancer and we do not need to send a cookie from the client side.

2) Based on CPU utilization, we need to auto-scale instances: when CPU load > 80%, we need to scale instances +1.

Problems:

1) When I send requests from multiple clients from the same IP address, CPU load goes above 80% and a new instance is launched. But after some time I see CPU load going down. Does this mean that one of these clients is now connected to the new instance and the load is shared? That would mean stickiness is not working, though it is not clear how to test it properly. However, sometimes when I stop the new instance manually, no client gets any errors, while when I stop the first instance all clients get 404 errors for some time. How do I check whether stickiness is working properly?

2) If I get stickiness to work, as per my understanding, load will not be shared by the new instance, so average CPU usage will stay the same and autoscaling will keep launching new instances until the max limit. How do I set stickiness together with the autoscaling feature? I set the stickiness duration to 86400 seconds (24 hours) to be safe. Can someone please guide me on how to configure stickiness and autoscaling the proper way?
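A hedged sketch of how both settings are commonly expressed together in `.ebextensions` (namespaces from the Beanstalk options reference; the thresholds are illustrative):

```
option_settings:
  # Duration-based stickiness on the ALB's default process; the cookie is
  # generated and managed by the load balancer, nothing client-side needed.
  aws:elasticbeanstalk:environment:process:default:
    StickinessEnabled: true
    StickinessLBCookieDuration: 86400
  # Scale on average CPU across the group. Caveat: with long-lived sticky
  # sessions, existing sessions stay pinned to old instances, so only *new*
  # sessions land on the scaled-out instance.
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Unit: Percent
    UpperThreshold: 80
    LowerThreshold: 40
```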
3
answers
0
votes
34
views
asked a month ago

Does AWS have no equivalent of GCP Cloud Run, for huge scale-to-0 savings in staging/preproduction?

NOTE: I am looking to stay in AWS, but in comparing bills to my previous successful startup (using GCP), I am seeing massive differences in cost due to Cloud Run scaling to 0 instances while Amazon stays on. Duplicating production to preproduction and staging can be expensive; using Cloud Run at our previous company (15 microservices), our preproduction and staging environments would frequently just scale to 0 and we would pay ZERO over weekends, overnight, etc. I am looking at our current bill, and this company is identical in all the technologies and identical in usage (I mean it could not be more apples to apples, which is sort of crazy to me). I dug deeper and none of our systems are scaling back to 0 (much as AWS Lambda does). Is there no solution in AWS for this?

What I tried:

* Elastic Beanstalk defaulted to min=1 instance, max=5 instances, so I changed it to min=0. It successfully scaled back to 0, and then when I sent a web request in, I got a 503 :( instead of the request taking a while and spinning a server back up.
* AWS EKS: we tried setting it to 0, and in that case it went down to 0 and then a web request just timed out instead of waiting for an instance to spin up and serve the request.

The ideal situation:

* Server instances spin down to 0 when not in use.
* When a request comes in, an instance spins up and serves that request (the first request takes longer, yes) and then the 2nd request is lightning fast.

Does AWS have no Cloud Run type of system? I know they have Lambdas, but having APIs with 5-10 methods/lambdas is a much easier way to architect a system and leads to better reviews in our monorepo on those contracts between systems.
2
answers
0
votes
29
views
asked a month ago

How / Where do I upgrade my instance's PostgreSQL Version?

Hello! I am trying to deploy a Rails 7 app to Elastic Beanstalk, but my deploy keeps failing. In the logs I see: `An error occurred while installing pg (1.3.5), and Bundler cannot continue` (installing the postgres gem is failing). I SSHed onto my instance and ran bundle manually, and see `Your PostgreSQL is too old. Either install an older version of this gem or upgrade your database to at least PostgreSQL-9.3.` In my .elasticbeanstalk.packages.config file I have:

```
packages:
  yum:
    postgresql14-devel: []
```

But this seems to have no effect on the version of Postgres on my instance. Creating an RDS instance associated with my Beanstalk environment with any version of PostgreSQL does not seem to solve the problem. `postgres -V` is not a command on my Beanstalk instance. If I SSH onto my instance and cat `/usr/bin/pg_config`, I think it may be set to version 9.2, but this file doesn't look to me like something I should be editing via SSH, and I don't see any references to manipulating how it gets generated. Any assistance would be greatly appreciated!

Update 1: .elasticbeanstalk.packages.config should be within `.ebextensions` instead. I made this change and still had this error. Found another thread on Stack Overflow that described the packages.config file to look like:

```
packages:
  yum:
    amazon-linux-extras: []
commands:
  01_postgres_activate:
    command: sudo amazon-linux-extras enable postgresql10
  02_postgres_install:
    command: sudo yum install -y postgresql-devel
```

I updated the file and still had no luck. I terminated and rebuilt my app to try again, and manually ran `sudo amazon-linux-extras enable postgresql10` and `sudo yum install -y postgresql-devel` via SSH, which finally let me successfully bundle install. Still working on making this work via EB deploy instead of manually messing with the boxes.

Update 2: After making the above changes and creating a new application + environment, I am able to consistently get past the above issue.
1
answers
1
votes
11
views
asked 2 months ago

Elastic Beanstalk drops connections to load balancer during deployment after Amazon Linux 2 upgrade

Hi. I just upgraded my Elastic Beanstalk environment from "PHP 7.3 running on 64bit Amazon Linux/2.9.28" to "PHP 7.3 running on 64bit Amazon Linux 2/3.3.12" (using Apache) and found that my application load balancer returns 502 errors for a few seconds every single time I deploy a new application version. I checked the ALB logs and found the 502 responses have a target_processing_time of -1 which the [documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html) says means that "the load balancer can't dispatch the request to a target. This can happen if the target closes the connection before the idle timeout or if the client sends a malformed request." I ran a test where I sent one request per second to both the ALB and directly to the EC2 instance during deployment. I found that during those 2-3 seconds where the application version is switching over, the requests to the ALB return 502 while the requests directly to the instance just have a brief pause but respond correctly anyway. Everything I've been able to find on this topic says it's related to KeepAlive timeouts and needing different timeouts on the server and ALB but I've tried every combination of timeouts I can imagine and no luck. I've tried a few different environment platforms and it happens 100% of the time on the Amazon Linux 2 platforms and 0% of the time on Amazon Linux 1 platforms. Thanks!
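For reference, a hedged sketch of the mitigation most often suggested for this symptom on the Apache-based AL2 platform (acknowledging the poster says timeout combinations were already tried): keep the instance's keep-alive timeout strictly longer than the ALB's idle timeout (60 s by default), so the target never closes an idle connection the ALB still considers usable. The file name is an assumption:

```
# .platform/httpd/conf.d/keepalive.conf
KeepAlive On
KeepAliveTimeout 65
MaxKeepAliveRequests 1000
```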
0
answers
0
votes
6
views
asked 2 months ago

Linux OS networking bug in Elastic Beanstalk AMI with Tomcat & Corretto

We use AWS Elastic Beanstalk with an Amazon AMI with Tomcat & Corretto running on Amazon Linux 2 (`aws-elasticbeanstalk-amzn-2.0.20220316.64bit-eb_tomcat85corretto8_amazon_linux_2-hvm-2022-03-29T20-48`) and are running into an [OS networking bug](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1924298) when Tomcat is under load. The result of this bug is that TCP connections from clients connect but time out while the server is under load. The networking bug is due to a race condition in the TCP stack, which is fixed in Linux 5.10 kernels. A description and diff of the bug can be found in [this commit](https://github.com/torvalds/linux/commit/01770a166165738a6e05c3d911fb4609cc4eb416). From the description of this bug, it looks like the race condition affects all TCP networking and is not specific to Tomcat, but manifests more often under load. Currently, as far as I can tell, all the latest Amazon AMIs for Elastic Beanstalk for Tomcat or Corretto are using a 4.14 kernel. The AMI which we are using has a kernel of `4.14.268-205.500.amzn2.x86_64`. I have been able to reproduce the bug on this AMI using the sample server code in the Ubuntu bug report, which is independent of Tomcat. I have also tried reproducing the bug on newer versions of Amazon Linux 2 (AMI `amzn2-ami-kernel-5.10-hvm-2.0.20220419.0-x86_64-gp2`), which use a `5.10.109-104.500.amzn2.x86_64` kernel, but have not been able to reproduce it there. We would prefer not to have to create our own AMI for using Elastic Beanstalk, but were wondering if and when there will be an update to the Amazon Elastic Beanstalk AMIs which incorporates this OS bug fix, since this is affecting the reliability of networking under load.
0
answers
2
votes
11
views
asked 2 months ago

Elastic Beanstalk error after migrating from Python 3.7 to Python 3.8

I am using the EB platform Python 3.7 AL2 version 3.3.11, and all is working fine. But the trouble comes when I try to upgrade both to a newer AL2 version (3.3.12) and to a newer Python version (3.8). The error happens on "[app-deploy] - [StageApplication]". eb-engine:

```
2022/04/20 11:51:08.170326 [INFO] Executing instruction: StageApplication
2022/04/20 11:51:08.171215 [INFO] extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/
2022/04/20 11:51:08.171232 [INFO] Running command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/
2022/04/20 11:51:08.208452 [INFO] finished extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/ successfully
2022/04/20 11:51:08.212657 [ERROR] An error occurred during execution of command [app-deploy] - [StageApplication]. Stop running the command. Error: chown /var/app/staging/venv/dev/lib/python3.8/collections: no such file or directory
```

The error specifically says: **[ERROR] An error occurred during execution of command [app-deploy] - [StageApplication]. Stop running the command. Error: chown /var/app/staging/venv/dev/lib/python3.8/collections: no such file or directory**

I don't know why it goes there instead of /var/app/venv/staging-LQM1lest/lib/python3.8, which is where I think it should go and where the virtual environment is. I can assure you that the local /venv folder is ignored when deploying (following this related question: https://stackoverflow.com/questions/61805345/aws-elastic-beanstalk-chown-pythonpath-error). That said, after connecting to the instance by SSH, I can see a venv folder in /var/app/staging/ containing a python3.8 subfolder. I am curious to know why this happens and what I might be doing wrong in the process. Thanks in advance!
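A hedged check, given that a stray `venv/` shows up in `/var/app/staging/` after deploy: `eb deploy` honors `.ebignore` when present (and only falls back to `.gitignore` otherwise), so an entry like the one below keeps the local virtualenv out of the source bundle entirely, and with it the failing `chown` over `venv/dev/lib/python3.8`:

```
# .ebignore (at the project root) -- exclude the local virtualenv from the bundle
venv/
```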
1
answers
0
votes
139
views
asked 2 months ago

Beanstalk deployment process fails to update Proxy configuration

I've seen this several times where it appears that Beanstalk will not deploy a bundle because artifacts contained in a previous deployment had syntax errors. I don't understand why Beanstalk would validate old artifacts rather than deploy the artifact as contained in the bundle currently being deployed. In order to resolve the issue, the syntactically bad files have to be manually replaced (or deleted?) before Beanstalk will deploy successfully. The current problem is highlighted below.

eb-engine log:

```
2022/04/07 22:12:51.580831 [INFO] Executing instruction: configure proxy server
2022/04/07 22:12:51.584473 [INFO] Running command /bin/sh -c cp -rp /var/app/staging/.platform/httpd/. /var/proxy/staging/httpd
```

This is done correctly – /var/proxy/staging/httpd contains the version of the file from the app bundle I’m deploying.

```
2022/04/07 22:13:31.619080 [INFO] Executing instruction: start proxy with new configuration
2022/04/07 22:13:31.619124 [INFO] Running command /bin/sh -c /usr/sbin/apachectl -t -f /var/proxy/staging/httpd/conf/httpd.conf
2022/04/07 22:13:31.687560 [INFO] [Thu Apr 07 22:13:31.670790 2022] [core:warn] [pid 21978:tid 139968169546304] AH00111: Config variable ${njpf_host} is not defined
AH00526: Syntax error on line 1 of /etc/httpd/conf.d/apache_overides.conf: Cannot parse condition clause: syntax error, unexpected T_OP_BINARY
```

Why is it looking at /etc/httpd/conf.d/apache_overides.conf when the latest is in /var/proxy/staging/httpd? (Or why wasn’t the latest apache_overides.conf copied from /var/proxy/staging/httpd to /etc/httpd/conf.d?)

```
2022/04/07 22:13:31.687593 [ERROR] An error occurred during execution of command [app-deploy] - [start proxy with new configuration]. Stop running the command. Error: copy proxy conf from staging failed with error validate httpd configuration failed with error Command /bin/sh -c /usr/sbin/apachectl -t -f /var/proxy/staging/httpd/conf/httpd.conf failed with error exit status 1. Stderr:[Thu Apr 07 22:13:31.670790 2022] [core:warn] [pid 21978:tid 139968169546304] AH00111: Config variable ${njpf_host} is not defined
AH00526: Syntax error on line 1 of /etc/httpd/conf.d/apache_overides.conf: Cannot parse condition clause: syntax error, unexpected T_OP_BINARY
```

This seems to confirm the copy from /var/proxy/staging/httpd failed, but the reason it failed is that the last version (prior to the current deploy) of /etc/httpd/conf.d/apache_overides.conf had a syntax error. Well, no kidding – that’s why I’m deploying the latest version of apache_overides.conf (in /var/proxy/staging/httpd/conf.d), which does not have the reported syntax error. Again, I'm not sure why Beanstalk is concerned about a syntax error that doesn't exist in the current deployment. Help! Thanks.
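A hedged explanation that fits the log above: `apachectl -t -f /var/proxy/staging/httpd/conf/httpd.conf` validates the staging config, but a stock `httpd.conf` contains an `IncludeOptional conf.d/*.conf` resolved against `ServerRoot /etc/httpd`, so the *live* `/etc/httpd/conf.d/apache_overides.conf` gets parsed during validation even though the corrected copy sits in staging. If that's what is happening, clearing the stale live file before redeploying works around it:

```
# Remove (or hand-fix) the syntactically bad live copy, then redeploy the
# bundle that contains the corrected apache_overides.conf:
sudo rm /etc/httpd/conf.d/apache_overides.conf
```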
0
answers
0
votes
3
views
asked 3 months ago

Setting JVM options on Beanstalk Tomcat

Trying to set options as follows: `-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=5004,suspend=n`

Using both "Container Options > JVM options" and an Environment Property as follows:

Name: `_JAVA_OPTIONS`
Value: `-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=5004,suspend=n`

Neither approach seemed to work. Tomcat starts as follows:

```
12-Apr-2022 02:24:46.838 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name: Apache Tomcat/8.5.75
12-Apr-2022 02:24:46.844 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Feb 28 2022 18:26:53 UTC
12-Apr-2022 02:24:46.848 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 8.5.75.0
12-Apr-2022 02:24:46.850 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
12-Apr-2022 02:24:46.851 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 4.14.268-205.500.amzn2.x86_64
12-Apr-2022 02:24:46.851 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
12-Apr-2022 02:24:46.852 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/lib/jvm/java-1.8.0-amazon-corretto.x86_64/jre
12-Apr-2022 02:24:46.853 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 1.8.0_322-b06
12-Apr-2022 02:24:46.854 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Amazon.com Inc.
12-Apr-2022 02:24:46.854 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /usr/share/tomcat
12-Apr-2022 02:24:46.855 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /usr/share/tomcat
12-Apr-2022 02:24:46.858 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent The Apache Tomcat Native library which allows using OpenSSL was not found on the java.library.path: [/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib]
12-Apr-2022 02:24:47.016 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
12-Apr-2022 02:24:47.045 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
12-Apr-2022 02:24:47.074 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 1048 ms
12-Apr-2022 02:24:47.165 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
12-Apr-2022 02:24:47.173 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/8.5.75]
12-Apr-2022 02:24:47.242 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/var/lib/tomcat/webapps/jcs.war]
12-Apr-2022 02:24:56.187 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/var/lib/tomcat/webapps/jcs.war] has finished in [8,943] ms
12-Apr-2022 02:24:56.192 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/var/lib/tomcat/webapps/work.war]
12-Apr-2022 02:25:35.849 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/var/lib/tomcat/webapps/work.war] has finished in [39,657] ms
12-Apr-2022 02:25:35.863 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
12-Apr-2022 02:25:35.875 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 48794 ms
```

How are JVM options supposed to be set in this environment? Thanks much.
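A hedged sketch of the `.ebextensions` route, using the Tomcat platform's JVM-options namespace (quoting matters, since the value contains spaces):

```
option_settings:
  aws:elasticbeanstalk:container:tomcat:jvmoptions:
    JVM Options: "-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=5004,suspend=n"
```

If the options still don't show up, `ps aux | grep tomcat` on the instance reveals the exact command line the platform launched, which confirms whether the setting was applied at all.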
2
answers
0
votes
52
views
asked 3 months ago

CFM stack is stuck after failure deleting Beanstalk app

Hi, I have a CloudFormation stack which includes a Beanstalk app along with an environment. I tried to remove this whole Beanstalk app within the CFM stack by simply removing the resources within the template. However, because termination protection was enabled on the nested stack that was created for the Beanstalk app, the deletion failed. The following error appeared within the Beanstalk console:

```
ERROR Service:AmazonCloudFormation, Message:Stack [awseb-e-xxxxxxxxxx-stack] cannot be deleted while TerminationProtection is enabled
```

I fixed it by disabling termination protection on the nested stack, but the problem is that the main/root/parent stack is stuck in the UPDATE_COMPLETE_CLEANUP_IN_PROGRESS state. At no point has CFM/Beanstalk tried to delete the nested stack again since the initial failure. Essentially the stack update has been stuck ever since and has not recovered into a useful state, effectively taking our whole stack hostage. So basically: Beanstalk failed to delete its nested stack due to a misconfiguration, causing the root stack to be indefinitely stuck. Apparently there is no retry logic happening, since CFM/Beanstalk only attempted the operation once (I fixed the underlying problem almost immediately). It seems like Beanstalk doesn't properly communicate to CFM that the operation failed, causing the stack update to be left hanging. How do I get out of this state? It's been so long now that I doubt CFM is able to handle it automatically. Is it safe to manually delete the nested Beanstalk stack? I just don't want to mess anything up if I do things outside of CFM's control. Thanks!
1
answers
0
votes
7
views
asked 3 months ago

AWS Elastic Beanstalk - Ruby 3.0 running on 64bit Amazon Linux 2/3.4.4 - 100.0 % of the requests are failing with HTTP 5xx

Hello, Another amazing day today for all of us :) I am trying to use the AWS EB "Ruby 3.0 running on 64bit Amazon Linux 2/3.4.4" platform with a Ruby on Rails v6.0.4.4 app, but so far I have not managed to make it work. In the environment status I get:

```
100.0 % of the requests are failing with HTTP 5xx
```

also in /var/log/nginx/error.log:

```
[error] 2459#2459: *596 connect() to unix:///var/run/puma/my_app.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 172.31.34.113, server: _, request: "POST / HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/", host: "52.29.66.93"
```

and in /var/log/puma/puma.log:

```
[4898] ! Unable to start worker
[4898] /opt/rubies/ruby-3.0.3/lib/ruby/site_ruby/3.0.0/bundler/runtime.rb:309:in `check_for_activated_spec!'
[4898] Early termination of worker
```

`pumactl` looks OK:

```
[ec2-user@ip-172-31-xx-xx ~]$ pumactl -V
5.6.2
```

If I check the processes:

```
ps aux | grep puma
healthd   25925  0.0  3.6 828800 36624 ?   Ssl  09:39  0:15 puma 5.3.2 (tcp://127.0.0.1:22221) [healthd]
webapp    26497  0.2  2.2 255768 22912 ?   Ss   09:40  1:07 puma 5.6.2 (unix:///var/run/puma/my_app.sock) [current]
webapp    28653 64.0  2.1 327180 21668 ?   Rl   16:08  0:00 puma: cluster worker 0: 26497 [current]
ec2-user  28656  0.0  0.0 119420   924 pts/0 S+ 16:08  0:00 grep --color=auto puma
```

So puma is running... correct? There is also another puma, v5.3.2 — maybe this other puma version is used for another reason (the health service)? In the Rails app I have the following:

.ebextensions/02_yarn.config

```
commands:
  01_node_get:
    cwd: /tmp
    command: 'curl --silent --location https://rpm.nodesource.com/setup_14.x | sudo bash -'
  02_node_install:
    cwd: /tmp
    command: 'yum -y install nodejs'
  03_yarn_get:
    cwd: /tmp
    # don't run the command if yarn is already installed (file /usr/bin/yarn exists)
    test: '[ ! -f /usr/bin/yarn ] && echo "yarn not installed"'
    command: 'sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo'
  04_yarn_install:
    cwd: /tmp
    test: '[ ! -f /usr/bin/yarn ] && echo "yarn not installed"'
    command: 'sudo yum -y install yarn'
  05_mkdir_webapp_dir:
    command: "mkdir /home/webapp"
    ignoreErrors: true
  06_chown_webapp_dir:
    command: "chown webapp:webapp /home/webapp"
    ignoreErrors: true
  07_chmod_webapp_dir:
    command: "chmod 0744 /home/webapp"
    ignoreErrors: true
  08_chmod_logs:
    command: "chown webapp:webapp -R /var/app/current/log/"
    ignoreErrors: true
  09_create_log_file:
    command: "touch /var/app/current/log/production.log"
    ignoreErrors: true
  10_chown_log_production:
    command: "chown webapp:webapp /var/app/current/log/production.log"
    ignoreErrors: true
  11_chmod_log_dir:
    command: "chmod 0664 -R /var/app/current/log/"
    ignoreErrors: true
  12_update_bundler:
    command: "gem update bundler"
    ignoreErrors: true
  13_chown_current:
    command: "chown webapp:webapp -R /var/app/current/"
    ignoreErrors: true
  14_chmod_current:
    command: "chmod 0755 -R /var/app/current/"
    ignoreErrors: true
  15_chown_current:
    command: "chown webapp:webapp -R /var/app/ondeck/"
    ignoreErrors: true
  16_chown_current:
    command: "chmod 0644 -R /var/app/ondeck/"
    ignoreErrors: true
container_commands:
  17_install_webpack:
    command: "npm install --save-dev webpack"
  18_precompile:
    command: "bundle exec rake assets:precompile"
```

.ebextensions/03_nginx.config

```
files:
  "/etc/nginx/conf.d/02_app_server.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # The content of this file is based on the content of /etc/nginx/conf.d/webapp_healthd.conf
      # Change the name of the upstream because it can't have the same name
      # as the one defined by default in /etc/nginx/conf.d/webapp_healthd.conf
      upstream new_upstream_name {
        server unix:///var/run/puma/my_app.sock;
      }
      # Change the name of the log_format because it can't have the same name
      # as the one defined by default in /etc/nginx/conf.d/webapp_healthd.conf
      log_format new_log_name_healthd '$msec"$uri"'
                  '$status"$request_time"$upstream_response_time"'
                  '$http_x_forwarded_for';
      server {
        listen 80;
        server_name _ localhost; # need to listen to localhost for worker tier
        if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
          set $year $1;
          set $month $2;
          set $day $3;
          set $hour $4;
        }
        access_log /var/log/nginx/access.log main;
        # Match the name of log_format directive which is defined above
        access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour new_log_name_healthd;
        location / {
          # Match the name of upstream directive which is defined above
          proxy_pass http://new_upstream_name;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /assets {
          alias /var/app/current/public/assets;
          gzip_static on;
          gzip on;
          expires max;
          add_header Cache-Control public;
        }
        location /public {
          alias /var/app/current/public;
          gzip_static on;
          gzip on;
          expires max;
          add_header Cache-Control public;
        }
        location /packs {
          alias /var/app/current/public/packs;
          gzip_static on;
          gzip on;
          expires max;
          add_header Cache-Control public;
        }
      }
container_commands:
  01_restart_nginx:
    command: "sudo service nginx reload"
```

Any ideas why it is not working? Thank you very much in advance for your kind help and support and for your valuable time. I wish all of us all the best and an amazing continuation in our lives...
1
answers
0
votes
31
views
asked 3 months ago

Elastic Beanstalk npm install fails without error message

I have now been trying for way too long to deploy a Node.js v16, npm v8 NestJS API to AWS Elastic Beanstalk, with no success. It always stops at the point where npm install is called, which fails without further explanation. The EC2 instance used is a `t4g.small` with Amazon Linux. This is the only information I get from the log files:

```
2022/04/06 12:17:27.564667 [INFO] Executing instruction: Use NPM to install dependencies
2022/04/06 12:17:27.564707 [INFO] use npm to install dependencies
2022/04/06 12:17:27.564755 [INFO] Running command /bin/sh -c npm config set jobs 1
2022/04/06 12:17:27.918363 [INFO] Running command /bin/sh -c npm --production install
2022/04/06 12:17:41.632070 [ERROR] An error occurred during execution of command [app-deploy] - [Use NPM to install dependencies]. Stop running the command. Error: Command /bin/sh -c npm --production install failed with error signal: killed
2022/04/06 12:17:41.632467 [INFO] Executing cleanup logic
2022/04/06 12:17:41.643564 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment: 'npm' failed to install dependencies that you defined in 'package.json'. For details, see 'eb-engine.log'. The deployment failed.","timestamp":1649247461630,"severity":"ERROR"},{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1649247461633,"severity":"ERROR"}]}]}
2022/04/06 12:17:41.650817 [INFO] Platform Engine finished execution on command: app-deploy
```

The interesting part is that when I run everything in AWS CodeBuild it works flawlessly, but with node_modules included the artifact is too big to upload to Elastic Beanstalk, as the maximum file size seems to be 500 MB. Does anyone know what the problem could be?
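A hedged reading of `failed with error signal: killed`: that is the classic out-of-memory kill signature, and a `t4g.small` has 2 GiB of RAM, which npm v8 can exhaust on a large dependency tree. One common stop-gap is adding swap in a prebuild platform hook (path and size are assumptions):

```
#!/bin/bash
# .platform/hooks/prebuild/00_swap.sh -- create a 2 GiB swapfile if absent.
set -e
if ! swapon --show | grep -q '/swapfile'; then
  dd if=/dev/zero of=/swapfile bs=1M count=2048
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
fi
```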
0
answers
0
votes
20
views
asked 3 months ago

Elastic Beanstalk enhanced health not generating healthd/application.log files

I have enhanced health reporting turned on for my Elastic Beanstalk environment. The environment is:

1. A multicontainer Docker setup running on "Amazon Linux 2"
2. It has an nginx proxy (Configuration > Software shows: Log streaming: disabled / Proxy server: nginx / Rotate logs: disabled / X-Ray daemon: disabled)
3. Enhanced monitoring is on (Configuration > Monitoring shows: CloudWatch Custom Metrics-Environment: / CloudWatch Custom Metrics-Instance: / Health event log streaming: disabled / Ignore HTTP 4xx: enabled / Ignore load balancer 4xx: disabled / System: Enhanced)

However, on the Health page, none of the requests, response, or latency fields are populating, while load & CPU utilization are populating. It is my understanding that this data is populated from a log file that is written to `/var/log/nginx/healthd/`, but that directory is empty. It seems like this is a bug or some sort of misconfiguration. Does anyone know why this might be happening? I included some relevant info from the machine below.

---

The healthd config file (I commented out the `group_id`, which is a uuid in the actual file):

```
$ cat /etc/healthd/config.yaml
group_id: XXXX
log_to_file: true
endpoint: https://elasticbeanstalk-health.us-east-2.amazonaws.com
appstat_log_path: /var/log/nginx/healthd/application.log
appstat_unit: sec
appstat_timestamp_on: completion
```

The output of the healthd daemon log, showing warnings for not finding previous application.log.YYYY-MM-DD-HH files:

```
$ head /var/log/healthd/daemon.log
# Logfile created on 2022-04-02 21:02:22 +0000 by logger.rb/66358
A, [2022-04-02T21:02:24.123304 #4122]   ANY -- : healthd daemon 1.0.6 initialized
W, [2022-04-02T21:02:24.266469 #4122]  WARN -- : log file "/var/log/nginx/healthd/application.log.2022-04-02-21" does not exist
W, [2022-04-02T21:02:29.266806 #4122]  WARN -- : log file "/var/log/nginx/healthd/application.log.2022-04-02-21" does not exist
W, [2022-04-02T21:02:34.404332 #4122]  WARN -- : log file "/var/log/nginx/healthd/application.log.2022-04-02-21" does not exist
W, [2022-04-02T21:02:39.406846 #4122]  WARN -- : log file "/var/log/nginx/healthd/application.log.2022-04-02-21" does not exist
W, [2022-04-02T21:02:44.410108 #4122]  WARN -- : log file "/var/log/nginx/healthd/application.log.2022-04-02-21" does not exist
W, [2022-04-02T21:02:49.410342 #4122]  WARN -- : log file "/var/log/nginx/healthd/application.log.2022-04-02-21" does not exist
W, [2022-04-02T21:02:54.410611 #4122]  WARN -- : log file "/var/log/nginx/healthd/application.log.2022-04-02-21" does not exist
W, [2022-04-02T21:02:59.410860 #4122]  WARN -- : log file "/var/log/nginx/healthd/application.log.2022-04-02-21" does not exist
```

The /var/log/nginx/ directory with perms and ownership. Is `nginx` supposed to own healthd?

```
$ ls -l /var/log/nginx/
total 12
-rw-r--r-- 1 root  root  11493 Apr  4 21:15 access.log
drwxr-xr-x 2 nginx nginx     6 Apr  2 21:01 healthd
drwxr-xr-x 2 root  root      6 Apr  2 21:02 rotated
```

The empty /var/log/nginx/healthd/ directory:

```
$ ls /var/log/nginx/healthd/
# this directory is empty
```
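For comparison, a hedged sketch of the access-log wiring the platform's stock nginx config normally carries (directive names reconstructed from the healthd `config.yaml` shown above, so treat them as assumptions). If a customized proxy config dropped this block, healthd has nothing to tail and the request/latency panels stay empty:

```
log_format healthd '$msec"$uri"$status"$request_time"$upstream_response_time"$http_x_forwarded_for';
server {
    if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
        set $year $1; set $month $2; set $day $3; set $hour $4;
    }
    access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
    # ... rest of the platform-generated server block ...
}
```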
1
answers
3
votes
57
views
asked 3 months ago

Can I freely configure the AWSEBSecurityGroup created by Elastic Beanstalk in .ebextensions?

The following "01-security-group.config" was create under the .ebxtensions directory. I then ran eb create using [PHP sample application (php.zip)](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/tutorials.html). The VPC is a custom VPC, not a default VPC. EC2 and ELB are located on public subnets. KeyPair also sets. ``` Resources: AWSEBSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: EC2 SecurityGroup for ElasticBeanstalk environment. SecurityGroupIngress: - ToPort: 80 FromPort: 80 IpProtocol: tcp SourceSecurityGroupId: { "Fn::GetAtt" : [ "AWSEBLoadBalancerSecurityGroup", "GroupId" ]} - ToPort: 22 FromPort: 22 IpProtocol: tcp CidrIp: xx.xx.xx.xx/32 ``` The expectation is that the AWSEBSecurityGroup description field and inbound rules will be as specified. However, the results are as follows, with a different description and an unnecessary rule (SSH, 0.0.0.0/0). ID:sg-058b4d99a88ea5c75 Description: VPC Security Group Inbound Rule | Type | Protocol | Port | Source | | --- | --- | --- | --- | | SSH | TCP | 22 | 0.0.0.0/0 | | HTTP | TCP | 80 | awseb-e-kbmrvrb9qk-stack-AWSEBLoadBalancerSecurityGroup-DXLN25QVL0F9 | | SSH | TCP | 22 | xx.xx.xx.xx/32 | Next, eb deploy was run with the following changes. ``` Resources: AWSEBSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: EC2 SecurityGroup for ElasticBeanstalk environment. SecurityGroupIngress: - ToPort: 80 FromPort: 80 IpProtocol: tcp SourceSecurityGroupId: { "Fn::GetAtt" : [ "AWSEBLoadBalancerSecurityGroup", "GroupId" ]} option_settings: aws:autoscaling:launchconfiguration: SSHSourceRestriction: tcp, 22, 22, xx.xx.xx.xx/32 ``` There are no more unnecessary rules in the security group as shown below. ID: sg-058b4d99a88ea5c75 Description: VPC Security Group Inbound Rule | Type | Protocol | Port | Source | | --- | --- | --- | --- | | HTTP | TCP | 80 | awseb-e-kbmrvrb9qk-stack-AWSEBLoadBalancerSecurityGroup-DXLN25QVL0F9 | | SSH | TCP | 22 | xx.xx.xx.xx/32 | Based on the above, I have two questions. 1. I would like to complete the configuration with just Resources instead of separating it with Resouces and option_seggings, is there a way to do this? 2. Is it possible to change the description field? for your information, AWSEBLoadBalancerSecurityGroup reflects the description field (security group is replaced). Thanks.
1
answers
0
votes
12
views
asked 3 months ago

Elastic Beanstalk | .NET with Docker containing custom nginx.conf

Current setup: Elastic Beanstalk running Docker on 64bit Amazon Linux 2/3.4.11. I was trying to follow the AWS guidelines for overwriting the nginx.conf file located at /etc/nginx/nginx.conf, without any success. I have a .NET 5 project containing .platform/nginx/nginx.conf (I also tried .ebextensions). When I build my Dockerfile, deploy it to ECR, and add a Dockerrun.aws.json to pull the latest image, it's not picking up my custom nginx.conf. The nginx.conf file:

```
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
worker_rlimit_nofile 8192;

events {
  worker_connections 4096;
}

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;
  access_log /var/log/nginx/access.log;
  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
  include conf.d/*.conf;
  map $http_upgrade $connection_upgrade {
    default "upgrade";
  }
  server {
    listen 80 default_server;
    gzip on;
    gzip_comp_level 4;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    access_log /var/log/nginx/access.log main;
    location / {
      proxy_pass http://docker;
      proxy_http_version 1.1;
      proxy_set_header Connection $connection_upgrade;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    # Include the Elastic Beanstalk generated locations
    include conf.d/elasticbeanstalk/*.conf;
  }
}
```

I would like to know how I can fix this and replace the default nginx file. Thanks!
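A hedged note on placement: on the "Docker running on 64bit Amazon Linux 2" platform, the nginx proxy runs on the *host*, outside the container, so `.platform/nginx/nginx.conf` must sit at the root of the source bundle uploaded to Beanstalk (next to `Dockerrun.aws.json`), not inside the image pushed to ECR. A bundle layout like this is what the platform expects (sketched, names illustrative):

```
bundle.zip
├── Dockerrun.aws.json
└── .platform/
    └── nginx/
        └── nginx.conf
```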
1
answers
0
votes
75
views
asked 3 months ago

AWS Elastic Beanstalk Python application .ebextensions pip install command failed

Hi, I am deploying an EB Python web application to a private VPC without internet access, so I will install the Python dependency packages offline. I have all the packages ready together with the application files:

```
application.py
packages/*.whl
packages/requirements.txt   (its content is the names of those whl packages)
.ebextensions/python_packages.config
... OTHER_VENV_RELATED_FOLDERS
```

The `.ebextensions/python_packages.config` content:

```
commands:
  use_python_venv:
    command: source /opt/elasticbeanstalk/deployment/env
  install_python_requirements:
    command: pip install --no-index --find-links /var/app/current/packages -r /var/app/current/packages/requirements.txt
```

The application tested fine on my local computer, and I zipped `application.py`, `.ebextensions`, and the `packages` folder into one zip file, then uploaded and deployed it to a single instance with an EIP. I get the failure message below in `/var/log/cfn-init-cmd.log` on the EC2 instance (python3.8 runtime):

```
2022-03-25 06:59:11,289 P3318 [INFO] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2022-03-25 06:59:11,289 P3318 [INFO] Config prebuild_0_test_xl
2022-03-25 06:59:11,289 P3318 [INFO] ============================================================
2022-03-25 06:59:11,289 P3318 [INFO] Command install_python_requirements
2022-03-25 06:59:12,675 P3318 [INFO] -----------------------Command Output-----------------------
2022-03-25 06:59:12,675 P3318 [INFO] ERROR: Could not open requirements file: [Errno 2] No such file or directory: '/var/app/current/packages/requirements.txt'
2022-03-25 06:59:12,675 P3318 [INFO] ------------------------------------------------------------
2022-03-25 06:59:12,675 P3318 [ERROR] Exited with error code 1
```

I checked the path of the `requirements.txt` file and it is not there; here are the directories and files under `/var/app` at that point:

```
staging/Pipfile
venv
```

It means the `/var/app/current` directory has not been created yet, so my command failed. Where is my package file during the staging process? How can I install offline Python dependency packages on an Elastic Beanstalk EC2 instance without internet access? Edit the `Pipfile` instead? Thank you.
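A hedged rework of the config, with two assumptions called out: plain `commands:` run *before* the application is staged (so `/var/app/current` doesn't exist yet), and each command runs in its own shell, so the `source` in a separate command has no effect on the next one. `container_commands:` run after the bundle is extracted, from the staging directory:

```
container_commands:
  01_install_offline_packages:
    # Runs from /var/app/staging, where packages/ from the zip now lives;
    # sourcing the deployment env in the same shell as pip keeps it effective.
    command: |
      source /opt/elasticbeanstalk/deployment/env
      pip install --no-index --find-links packages -r packages/requirements.txt
```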
1
answers
0
votes
2
views
asked 3 months ago

AWS Elastic Beanstalk Running in Private VPC without internet access

My objective is to deploy a web application in a VPC **without internet access**, using Elastic Beanstalk as the platform. A single-AZ deployment will be sufficient, and the load balancer will be **internal**-facing; we will access it from a Windows client in the same subnet. I have created a private subnet in a VPC without an internet gateway, and added a bunch of VPC interface endpoints such as `S3, SSM, ElasticBeanstalk, ElasticBeanstalk-health, sqs, cloudformation, logs` etc. I used the default security group for each endpoint. I have created an EC2 instance profile with the two managed policies [`AWSElasticBeanstalkWebTier` and `AmazonSSMManagedInstanceCore`], which also allows sts:AssumeRole by the "EC2" service. This instance profile will be used for the EB environment's EC2 instance launch. I have created an Elastic Beanstalk service role with the two managed policies [`AWSElasticBeanstalkEnhancedHealth` and `AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy`], which also allows sts:AssumeRole by the elasticbeanstalk service if sts:ExternalId StringEquals elasticbeanstalk. I used a simple Nodejs.zip example file from the AWS website to test. I created an environment where I have put the ELB and EC2 in the same subnet without any public IP address assigned. I used a load-balanced environment with min and max number of instances set to "1" (auto-scaling not needed), ELB set to "internal", and health reporting set to "Enhanced". When the environment gets created, it reports errors saying "Instance has not sent any data since launch" and "None of the instances are sending data". I searched online, and some answers indicate that NTP UDP port 123 should be allowed in the security group so that the EC2 instance will have a valid time sync and the health reporting will become valid. However, my VPC has no internet access; does that mean I have to set up my own NTP server in the VPC and write a bootstrap script on the EC2 instance to change the NTP server from the internet NTP to the intranet NTP? That sounds like a lot of work. Is NTP access the real cause of my deployment failure in the private VPC? Thank you.
1
answers
0
votes
246
views
asked 3 months ago

Run (custom) Keycloak 17 Docker Image on AWS Beanstalk

I've been trying to get a Keycloak Docker image to run in a Beanstalk environment for the last week without success. My Dockerfile looks like this:

```
FROM quay.io/keycloak/keycloak:17.0.0 as builder
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build

FROM quay.io/keycloak/keycloak:17.0.0
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
WORKDIR /opt/keycloak
ENV KC_HTTP_ENABLED=true
ENV KC_HOSTNAME_STRICT=false
ENV KC_DB_POOL_INITIAL_SIZE=1
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start-dev"]
```

The Docker image runs fine on localhost (`docker run --rm -p 8080:8080 --env-file env.txt my/keycloak`); http://localhost:8080/ shows a start page. The chosen platform is "Docker running on 64bit Amazon Linux 2/3.4.12". I upload the image to Amazon ECR and load it in a Beanstalk instance with the following Dockerrun.aws.json:

```
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "0815.eu-central-1.amazonaws.com/my/keycloak:latest"
  },
  "Ports": [
    { "ContainerPort": "8080" }
  ]
}
```

I have saved the necessary environment variables: KC_DB, KC_DB_PASSWORD, KC_DB_POOL_INITIAL_SIZE, KC_DB_SCHEMA, KC_DB_URL, KC_DB_USERNAME, KC_HOSTNAME_STRICT, KC_HTTP_ENABLED, KEYCLOAK_ADMIN, KEYCLOAK_ADMIN_PASSWORD. As the load balancer, I set up the Classic Load Balancer with a listener from 8080/HTTP to 8080/HTTP. Now when I try to call the Beanstalk URL (http://Keycloak0815.eu-central-1.elasticbeanstalk.com:8080) I get a 503 error status. A look at the logs shows no abnormalities; Keycloak has started successfully within the container. What am I doing wrong? What else do I need to configure to get access to the Docker image? I'm grateful for any further information.
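A hedged thing to verify first: on the single-container Docker platform, the host's nginx listens on port 80 and proxies to the `ContainerPort` from `Dockerrun.aws.json`, so a Classic Load Balancer listener forwarding to instance port 8080 may be pointing past nginx at a port nothing on the host listens on, which would surface as a 503 (no healthy instances). A sketch of listener settings targeting instance port 80 instead (values are assumptions):

```
option_settings:
  # Keep the external 8080 listener, but forward to the host nginx on 80:
  aws:elb:listener:8080:
    ListenerProtocol: HTTP
    InstancePort: 80
    InstanceProtocol: HTTP
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: "HTTP:80/"
```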
0
answers
0
votes
4
views
asked 3 months ago