Questions tagged with Networking & Content Delivery
Is It Possible to Make an EC2 Instance Part of a VPN Protected by GlobalProtect?
What am I running?

* An EC2 instance running Ubuntu 22.04 with a static Elastic IP address.
* The instance has only one network interface, whose details say it is an Elastic network interface. (I believed every instance has a primary network interface, but I do not see one listed as such.)

What do I want to do? My company has an on-prem virtual machine running MSSQL Server at 192.168.181.75:1433, but it sits behind the GlobalProtect VPN from Palo Alto Networks. Even to make a call to that database from my laptop, I have to connect to GlobalProtect manually. So my question is: is there any special step I need to take to make the EC2 instance part of the GlobalProtect network? I talked to my company's network administrator, who wants the public IP address of the EC2 instance (which I use for SSH) and its MAC address. I got the MAC address by entering

```
$ ip addr
```

in the terminal, under the *ens3* interface. But can I assume these two will remain fixed across stopping and restarting the instance? Also, do the inbound/outbound rules have to be altered? Some reading led me to believe I have to create a separate ENI, as primary network interfaces do not support this, but when I checked the instance details, it seems the only interface present is already an ENI.
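One piece of the puzzle can be shown concretely: the database address in the question is an RFC 1918 private address, which is never routable over the public internet, so the EC2 instance cannot reach it without some tunnel (the GlobalProtect VPN, a site-to-site VPN, or Direct Connect). A minimal check with Python's standard `ipaddress` module:

```python
import ipaddress

# The on-prem SQL Server address from the question.
db_ip = ipaddress.ip_address("192.168.181.75")

# RFC 1918 private addresses are not routable over the public internet,
# which is why some tunnel is required between the EC2 instance and the host.
print(db_ip.is_private)  # True
```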
How to pass connection information between two dependent Windows instances during CloudFormation provisioning?
I am doing a lift-and-shift of software from an on-premises architecture. There are two servers (main and auxiliary) that have to talk to one another over the network. I have tested and confirmed that I can manually add their hostnames and private IP addresses to the hosts file (`C:\Windows\System32\drivers\etc\hosts`) and the software works fine. For those who don't know, this file is used by Windows to map a network hostname like `EC2AM-1A2B3C` to an IP address. So if I add the hostname and IP address of the main server to the hosts file of the auxiliary server, the auxiliary server can route to the main server (i.e. `PS> ping EC2AM-1A2B3C` then works). How could I pass the required information to both servers? They both have to know the other server's private IP address and hostname. If this is not possible at server spin-up time, how might the servers connect and pass this information? I would really like to automate this if possible.
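The hosts-file piece of the question is mechanical: each server needs one tab-separated line per peer. A minimal sketch (the IP address here is illustrative; the hostname is the question's example — on AWS the exchange itself is often automated by writing each instance's details to something like SSM Parameter Store from user data, then reading the peer's entry):

```python
def hosts_entry(ip: str, hostname: str) -> str:
    r"""Format one line for C:\Windows\System32\drivers\etc\hosts."""
    return f"{ip}\t{hostname}"

# e.g. run on the auxiliary server with the main server's details,
# gathered at spin-up (IP is a made-up example):
line = hosts_entry("10.0.1.25", "EC2AM-1A2B3C")
print(line)  # 10.0.1.25	EC2AM-1A2B3C
```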
AWS Lightsail Firewall
Hello, I am using AWS Lightsail to host my website, with Cloudflare DNS + WAF for protection. I am trying to whitelist the Cloudflare IPs on the AWS infrastructure, but after defining the ACL the site becomes unreachable. When I remove the ACL, the site is back online. I am creating firewall rules for HTTP and HTTPS. Am I missing anything? The ranges I am allowing are Cloudflare's published list (https://www.cloudflare.com/en-gb/ips/): 173.245.48.0/20 103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 141.101.64.0/18 108.162.192.0/18 190.93.240.0/20 188.114.96.0/20 197.234.240.0/22 198.41.128.0/17 162.158.0.0/15 104.16.0.0/13 104.24.0.0/14 172.64.0.0/13 131.0.72.0/22 2400:cb00::/32 2606:4700::/32 2803:f800::/32 2405:b500::/32 2405:8100::/32 2a06:98c0::/29 2c0f:f248::/32
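One way to sanity-check a whitelist like this before putting it in front of live traffic is to test sample client addresses against the CIDRs with Python's `ipaddress` module. A sketch using two of Cloudflare's published ranges (the full list from the linked page would go in the real rules):

```python
import ipaddress

# Two of Cloudflare's published ranges (see cloudflare.com/ips);
# the complete list belongs in the actual firewall rules.
allowed = [ipaddress.ip_network(c) for c in ("173.245.48.0/20", "2400:cb00::/32")]

def is_allowed(ip: str) -> bool:
    """True if the client address falls inside any allowed CIDR."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in allowed)

print(is_allowed("173.245.48.10"))  # True: a Cloudflare edge address
print(is_allowed("203.0.113.7"))    # False: would be blocked
```

If a legitimate path to the site does not pass this kind of check (for example, health checks or direct hits on the Lightsail IP), an allow-only ACL will make the site unreachable, which matches the symptom described.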
How does EC2 hop to a publicly accessible RDS endpoint?
Hey team, say I have an RDS endpoint that's publicly accessible. I then access this endpoint from an EC2 instance in the same VPC. What happens at the network layer? Does the request go out over the public internet? Ideally, the system would know that we're inside the same VPC and route directly. How could I confirm this?
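One way to confirm it yourself: resolve the RDS endpoint's DNS name from the EC2 instance and check whether the answer is a private address; a private answer means the traffic stays on the VPC network rather than hairpinning out. A sketch (the endpoint name in the comment is hypothetical; the classification check itself runs anywhere):

```python
import ipaddress
import socket

def resolves_to_private(hostname: str) -> bool:
    """Resolve a hostname and report whether the answer is a private address."""
    ip = socket.gethostbyname(hostname)
    return ipaddress.ip_address(ip).is_private

# Hypothetical endpoint name -- run this from the EC2 instance itself:
#   resolves_to_private("mydb.abc123xyz.us-east-1.rds.amazonaws.com")

# The classification step, on sample answers:
print(ipaddress.ip_address("10.0.2.15").is_private)   # True:  traffic stays inside the VPC
print(ipaddress.ip_address("54.23.11.8").is_private)  # False: goes via the public address
```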
How can I use Cloudfront with a root domain name?
I set up a Cloudfront distribution. I use a non-AWS domain registrar and DNS. I want my distribution to respond to "https://mydomain.com", but there is a problem. Cloudfront provides a domain name and asks you to create a CNAME record in DNS, but you can't create a CNAME record that points to the root domain or "@", like you can with a regular A record. To get around the problem, I set up "www.mydomain.com" as the CNAME record. If I type "https://www.mydomain.com" into my browser it works, but of course "mydomain.com" without "www" does not work.

The next thing I did was create a permanent redirect in DNS that should redirect mydomain.com to www.mydomain.com. Now I can type "http://mydomain.com" and it redirects to www.mydomain.com and it works. But if I type "https://mydomain.com" (with HTTPS instead of HTTP) it does not work. I presume that this is because whatever server is implementing the redirect (I use GoDaddy) doesn't have my SSL certificate so the connection can't be made.

I'm not sure how to resolve this problem. What I need, I think, is some web server that is on a fixed IP address and also has my SSL certificate, and can simply respond to all requests with the permanent redirect response. The only way I can think of to do this in AWS would be to set up an entire EC2 instance with my own web server, which is a lot of work and cost. Is there a better solution? My company doesn't want to move our DNS or domain registration to AWS, so using something like Route 53 is probably not an option.

Thanks, Frank
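The "web server that responds to all requests with the permanent redirect" described above boils down to returning an HTTP 301 with a `Location` header pointing at the www host; the same response shape is what an S3 website redirect or an edge function would emit, so no full EC2 instance is inherently required. A minimal sketch of that response (domain is the question's placeholder):

```python
def apex_redirect(host: str, path: str) -> tuple:
    """Build a permanent-redirect response from the apex domain to the www host."""
    return 301, {"Location": f"https://www.{host}{path}"}

status, headers = apex_redirect("mydomain.com", "/about")
print(status, headers["Location"])  # 301 https://www.mydomain.com/about
```

The catch the question already identifies remains: whatever serves this response for "https://mydomain.com" must terminate TLS with a certificate valid for the apex name, or browsers will refuse the connection before the 301 is ever sent.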
Route53 redirecting ServiceNow Instance directly to ServiceNow.com
I have a hosted zone set up in AWS Route 53 (labs.mycompany.com) and a simple CNAME record inside that hosted zone (servicenow.labs.mycompany.com) configured to point to our ServiceNow instance (dev12345.service-now.com), but rather than going to the instance it redirects to servicenow.com directly. I did a dig and the DNS record appears to be accurate and fine. I tried a curl, and I'm guessing the redirect is from HTTP to HTTPS, which I believe is standard; I get SSL errors when trying HTTP, which is to be expected. I can only assume at this point that ServiceNow must be doing some kind of domain filtering at the load balancer, and the website I'm being redirected to is just the default target when no patterns match. How do I work around this so our URL goes directly to our ServiceNow instance? Thanks
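If the domain-filtering theory is right, the mechanics are worth spelling out: a CNAME only changes how the name resolves; the client still presents the name it was given (the alias) as the TLS SNI and `Host` header, and ServiceNow's front end routes on that presented name, falling through to the default site when it doesn't recognize it. A sketch of the mismatch, with a commented-out diagnostic request (names are from the question; the request itself needs network access):

```python
# DNS (the CNAME) decides which servers the connection reaches, but the
# client still presents the name it was asked for as TLS SNI / Host header.
typed_name = "servicenow.labs.mycompany.com"  # the alias users type
expected_name = "dev12345.service-now.com"    # the name ServiceNow routes on

# Diagnostic idea: connect via the alias but force the expected Host header.
# (Needs network access, so shown commented out.)
#   import http.client
#   conn = http.client.HTTPSConnection(typed_name)
#   conn.request("GET", "/", headers={"Host": expected_name})

# The mismatch that would trigger the default-site fallback:
print(typed_name == expected_name)  # False
```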
DataSync was working perfectly fine and now it doesn't work: the network connectivity test fails with "SSL Test failed: no certificate issuer found"
Hi guys, we were transferring terabytes with DataSync and everything was working just fine, but one day I wanted to start a new task and the agent was offline. This is the first time the agent has been in that state. We logged into the agent's console and ran the network connectivity test, and everything went wrong: all the tests failed with the same answer, "SSL Test failed". I am including a picture of that. I would be grateful if you could help me. ![Enter image description here](/media/postImages/original/IM-uMrXHvuRNWCQCrvkXgB6w)
Aggregating Transfer Quota across Lightsail Servers
When it is all said and done, I will have many Lightsail servers, each with a minimum of 5 TB of transfer quota. Currently I have 5, 6, and 7 TB plans: 18 TB total. The current hosting plans were chosen for compute power rather than transfer needs, but transfer rates are expected to swell. Underpaying for data transfer on some servers and hugely overpaying on others seems unsportsmanlike! Can't all these transfer quotas be aggregated/combined so I don't get clobbered with overages on the public-facing server?
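The arithmetic behind the worry can be made concrete. If quotas are counted per server (rather than pooled), uneven usage produces overages even when the combined quota would comfortably cover the combined usage. A sketch with the question's 5/6/7 TB plans and hypothetical usage numbers:

```python
# Per-server transfer quotas (TB) from the question, and hypothetical usage
# where the public-facing server swells while the others sit mostly idle.
quotas = {"web": 5, "app": 6, "db": 7}
usage  = {"web": 9, "app": 1, "db": 2}

# If quotas are per instance, overage is computed server by server ...
per_server_overage = sum(max(usage[s] - quotas[s], 0) for s in quotas)

# ... even though the pooled total (18 TB) would cover the pooled usage (12 TB).
pooled_overage = max(sum(usage.values()) - sum(quotas.values()), 0)

print(per_server_overage, pooled_overage)  # 4 0
```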
IP Address Assigned to EC2 Instance
Two questions: 1) When an EC2 instance is started, what actually provides the IP addresses that are assigned to it? 2) How can I see the public IP address assigned to an EC2 instance from within it (i.e. after connecting)? When an EC2 instance is started, I can see the public IP address assigned to it simply by looking in the Management Console. I can also see the private IP assigned to that same instance once I connect to it and issue the `ip a s` command (I'm running a RHEL OS). Is there a command that I can run/execute within Linux that will display the public IP address associated with that EC2 instance?
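From inside the instance, the public IP is available from the instance metadata service at the well-known link-local address 169.254.169.254 (the OS never sees the public IP on its interfaces, which is why `ip a s` only shows the private one). A sketch of an IMDSv2 lookup; the actual requests only succeed when run on the instance itself, so they are shown commented out:

```python
import urllib.request

IMDS = "http://169.254.169.254/latest"
PUBLIC_IPV4 = f"{IMDS}/meta-data/public-ipv4"

def imds_token_request() -> urllib.request.Request:
    """Build the IMDSv2 session-token request (a PUT with a TTL header)."""
    return urllib.request.Request(
        f"{IMDS}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )

# On the instance itself, one would then do:
#   token = urllib.request.urlopen(imds_token_request()).read().decode()
#   req = urllib.request.Request(PUBLIC_IPV4,
#                                headers={"X-aws-ec2-metadata-token": token})
#   print(urllib.request.urlopen(req).read().decode())  # the public IPv4

print(PUBLIC_IPV4)
```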
SSH connection from my local terminal to an EC2 Ubuntu instance times out.
Hello, I created an EC2 Ubuntu instance and connected to it through SSH from my local command prompt, and the connection was successful. After some time I closed my terminal; when I reopened it and tried to connect to the same instance, I got a timed-out error. If I create a new instance and connect to it, it connects successfully. In other words, each instance can be connected through SSH only once: if I close the connection and try to reconnect, it times out. I set the security rule to permit SSH connections from anywhere. Can you please help me with this? Thank you.
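A common cause of exactly this symptom is that stopping and starting an instance releases its auto-assigned public IP, so the address in the original `ssh` command no longer answers (an Elastic IP avoids this). One way to check is to look up the instance's current public IP after each start. A sketch that extracts it from a `DescribeInstances`-shaped response; with boto3 the same dict would come from `ec2.describe_instances(...)`, but a sample dict is used here so the sketch runs without credentials:

```python
def public_ip_from_response(resp: dict) -> "str | None":
    """Pull the first instance's public IP out of a DescribeInstances response."""
    for reservation in resp.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            ip = inst.get("PublicIpAddress")
            if ip:
                return ip
    return None  # stopped instances have no public IP at all

# Sample response shape (the address is illustrative):
sample = {"Reservations": [{"Instances": [{"PublicIpAddress": "54.210.7.91"}]}]}
print(public_ip_from_response(sample))  # 54.210.7.91
```

If the address printed after a stop/start differs from the one in your saved `ssh` command, that is the whole problem.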