Your Reddit post says the region where you're seeing these problems is us-east-1. Does this happen when you stand up an Ubuntu EC2 instance in each of the pre-created subnets of the default VPC in that region? And if you create your own VPC in us-east-1, with a subnet and an internet gateway, do you see the same behaviour?
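If it helps, a minimal throwaway VPC for that test can be stood up with the AWS CLI along these lines (a hedged sketch, not the poster's exact setup; the CIDR blocks are arbitrary and you need credentials with EC2 permissions):

```shell
# Create a fresh VPC in us-east-1 (CIDR is illustrative)
VPC_ID=$(aws ec2 create-vpc --region us-east-1 --cidr-block 10.99.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# One subnet inside it
SUBNET_ID=$(aws ec2 create-subnet --region us-east-1 --vpc-id "$VPC_ID" \
  --cidr-block 10.99.1.0/24 --query 'Subnet.SubnetId' --output text)

# Internet gateway, attached to the VPC
IGW_ID=$(aws ec2 create-internet-gateway --region us-east-1 \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --region us-east-1 \
  --vpc-id "$VPC_ID" --internet-gateway-id "$IGW_ID"

# Default route to the IGW in the VPC's main route table
RTB_ID=$(aws ec2 describe-route-tables --region us-east-1 \
  --filters Name=vpc-id,Values="$VPC_ID" \
  --query 'RouteTables[0].RouteTableId' --output text)
aws ec2 create-route --region us-east-1 --route-table-id "$RTB_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
```

Launching a test instance into that subnet takes everything inherited from the default VPC out of the picture.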
You mention that you don't get the problem in us-east-2, so it's worth double-checking the AMI being used. Obviously it will be a different AMI ID from the one in us-east-1 (AMIs are region-specific), but is it, to all intents and purposes, the same Ubuntu image in both regions? Could it be a private, customised AMI causing the problems in us-east-1, while us-east-2 is running the latest public AMI direct from Ubuntu (which is fine)?
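One way to compare: Canonical publishes the current official Ubuntu AMI IDs as public SSM parameters, so you can check what each region resolves to and what your instance actually booted from. A sketch, assuming Ubuntu 22.04 on amd64 (adjust the parameter path to your version; the instance ID is a placeholder):

```shell
# Resolve the latest official Canonical Ubuntu 22.04 AMI in each region
for region in us-east-1 us-east-2; do
  aws ssm get-parameter --region "$region" \
    --name /aws/service/canonical/ubuntu/server/22.04/stable/current/amd64/hvm/ebs-gp2/ami-id \
    --query 'Parameter.Value' --output text
done

# Compare against the AMI the problem instance actually booted from
aws ec2 describe-instances --region us-east-1 \
  --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].ImageId' --output text
```

If the us-east-1 instance's ImageId doesn't match the public parameter, you're not on the stock image.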
Do you have anything set up in us-east-1 but not in us-east-2 that might be making changes on the fly without your knowledge? I'm thinking of tools like AWS Config or GuardDuty that might have been configured years ago in us-east-1 and lain dormant until your new instance, its security groups, etc. came into scope of their remediation activities.
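A quick way to check for that kind of dormant tooling, region by region (hedged sketch; the rule name in the remediation query is a placeholder you'd take from the first command's output):

```shell
# Any AWS Config rules defined in us-east-1?
aws configservice describe-config-rules --region us-east-1 \
  --query 'ConfigRules[].ConfigRuleName'

# Does a given rule have auto-remediation attached? (substitute a real rule name)
aws configservice describe-remediation-configurations --region us-east-1 \
  --config-rule-names my-rule-name

# Is GuardDuty active in the region? (empty list means no detector)
aws guardduty list-detectors --region us-east-1
```

Run the same commands against us-east-2 and diff the results; anything present in one region but not the other is a suspect.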
If it's not that, could the instance be doing some kind of automatic update after it boots? Say that takes about seven minutes, then it reboots (so you lose the connection) and comes back with updates applied that have done something to, say, the host-based firewall. Even as I type this it sounds daft, but then so does what you're describing (I don't mean you're describing it wrongly; rather, what you're describing is very unusual).
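That theory is easy to test from inside the instance during the window while SSH still works. A sketch of the on-host checks for a stock Ubuntu image (log paths may vary slightly by release):

```shell
# Any unexpected reboots around the 7-minute mark?
last reboot | head

# Is unattended-upgrades running, and what has it done recently?
sudo systemctl status unattended-upgrades
sudo tail -n 50 /var/log/unattended-upgrades/unattended-upgrades.log

# Has the host firewall changed underneath you?
sudo ufw status verbose
sudo iptables -L -n | head   # raw rules, in case ufw is inactive
```

If `last reboot` shows nothing and the firewall rules are unchanged after the drop, the cause is almost certainly outside the guest OS.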
Lastly, do you get the same behaviour with a different distribution such as SUSE, and what about other Linux flavours such as Amazon Linux or RHEL/CentOS?
If you're still no further forward after all of this, consider installing the CloudWatch agent on the host (if you get the chance) and see whether it points to resource exhaustion anywhere.
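On Ubuntu the agent install is quick enough to fit in the window before the drop. A sketch based on the standard download location and control script paths (verify them against the current CloudWatch agent docs for your architecture):

```shell
# Download and install the agent package for Ubuntu on amd64
wget https://amazoncloudwatch-agent.s3.amazonaws.com/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
sudo dpkg -i amazon-cloudwatch-agent.deb

# Generate a config interactively, then start the agent with it
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
```

Memory metrics in particular are invisible to the default EC2 metrics, so the agent is the only way to spot memory exhaustion from the console.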
In such cases, you should open a support ticket, as most likely it is something on the AWS side that needs to be checked/fixed.
Currently, the duplicate that most reminds me of this issue is this one: https://repost.aws/questions/QUTwS7cqANQva66REgiaxENA/ec2-instance-rejecting-connections-after-7-minutes#ANcg4r98PFRaOf1aWNdH51Fw
I literally don't know how to get the attention of Amazon's customer service. I'm not willing to pay money just so the basic service works at all for me... that's unreasonable.
Doing all the same stuff on us-east-2 works fine. I'm more and more convinced this is an issue on Amazon's end.
I was correct that this was an issue on the AWS backend. Like others (such as the link in my comment above), I reached out to AWS Account and Billing support, and after a couple of days of communication they restored connectivity to my instances in us-east-1. If you encounter this in the future and don't want to pay $30, I recommend linking this thread and others in a support ticket through AWS Account and Billing, which is available to free-tier users. They were kind enough to be flexible and reach out to the technical support team on my behalf.
Please note that I spent many hours checking all the alternatives before messaging billing. Don't request help before checking the basics: that your instance has a public IP, that a brand-new VPC shows the same behaviour, that the VPC's route table has a route to an internet gateway, and that the security group allows inbound connections on port 22.
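For anyone landing here later, the basics above can each be checked in one CLI call. A hedged sketch (the `$ID`, `$VPC`, and `$SG` variables are placeholders for your own instance, VPC, and security group IDs):

```shell
# 1. Does the instance have a public IP?
aws ec2 describe-instances --region us-east-1 --instance-ids "$ID" \
  --query 'Reservations[0].Instances[0].PublicIpAddress'

# 2. Does the VPC's route table have a default route to an internet gateway?
aws ec2 describe-route-tables --region us-east-1 \
  --filters Name=vpc-id,Values="$VPC" \
  --query 'RouteTables[].Routes[?DestinationCidrBlock==`0.0.0.0/0`]'

# 3. Does the security group allow inbound SSH (port 22)?
aws ec2 describe-security-groups --region us-east-1 --group-ids "$SG" \
  --query 'SecurityGroups[0].IpPermissions[?ToPort==`22`]'
```

Only once all three come back sane is it worth pointing billing/support at a backend problem.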