1 Answer
I think the problem is that when you resolve a publicly accessible RDS instance's DNS name from outside its VPC, you get the public IP address. That is why your traffic, instead of staying on the peering connection, goes out to the internet via the IGW, and the packets appear to originate from the public IP of the EC2 instance or NAT gateway rather than from a private IP in the VPC CIDR. The simple solution would be to make the RDS instance private; I think the endpoint would then always resolve to the private IP. If that is not possible, it gets more complex. It might be possible (I haven't tested this myself) to use an alias record in a private hosted zone, see https://repost.aws/knowledge-center/vpc-peering-troubleshoot-dns-resolution
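You can check which address the endpoint resolves to from each side. A quick sketch (the endpoint name below is hypothetical; substitute your actual RDS endpoint):

```shell
# Hypothetical RDS endpoint for illustration.
# From an instance in the peered VPC (without peering DNS resolution
# enabled), a publicly accessible RDS endpoint resolves to its PUBLIC IP,
# so traffic leaves via the IGW/NAT instead of the peering connection:
dig +short mydb.abc123xyz.us-east-1.rds.amazonaws.com

# From an instance inside the RDS instance's own VPC, the same query
# returns the PRIVATE IP, and traffic stays inside AWS.
```

If the address returned on the EC2 side is public, that matches the symptom of the connection appearing to come from the NAT gateway's IP.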
Ah... At this time I can't make it private, as I have an external application that is not hosted on AWS that queries the RDS. I could look at migrating this application to AWS, though that is going to be a fairly large lift. Maybe I can just spin up a new private RDS instance within the same VPC and at least test whether it works over the peering connection in that configuration.
After reading the post you linked, I found out I had DNS resolution turned off on one side of the peering connection. I enabled it, and now the endpoint fully resolves and I am able to connect to the RDS in us-east from the EC2 in eu-west. Thanks!
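For anyone hitting the same issue, the setting mentioned above can be enabled on both sides of the peering connection with the AWS CLI. A minimal sketch, assuming a hypothetical peering connection ID and that each command is run in the account/region that owns that side:

```shell
# pcx-0123456789abcdef0 is a hypothetical peering connection ID.
# Requester side: allow DNS names to resolve to private IPs
# for queries coming from the accepter VPC.
aws ec2 modify-vpc-peering-connection-options \
  --vpc-peering-connection-id pcx-0123456789abcdef0 \
  --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true

# Accepter side: same option in the other direction.
aws ec2 modify-vpc-peering-connection-options \
  --vpc-peering-connection-id pcx-0123456789abcdef0 \
  --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true
```

Both VPCs also need DNS hostnames and DNS support enabled for this to take effect.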