AWS Transfer Family VPC endpoint access restriction


Hi All,

I have deployed a Transfer Family server with an internal VPC endpoint in a specific account, let's say Account A. We have multiple accounts in AWS, multiple subscriptions in Azure, and multiple projects in GCP from which this file transfer server endpoint should be accessible. We do not want the endpoint to be reachable from the internet or from anywhere other than the above CSP accounts.

Currently, we allow all traffic to the Transfer server endpoint at the network level, but block sources that are not listed in our WAF rule. We added custom authentication to the Transfer server using API Gateway, and that API Gateway sits behind the WAF rule. In WAF we manage the IP addresses of the systems from the multiple CSPs. I feel this architecture is not the best approach. Our challenge is that we have thousands of systems from which the file transfer server should be accessible; that's why we used a WAF rule, where we can configure up to 10,000 IP addresses to control access.

What should be the approach for the above use case, if not WAF rules?

We are also considering using a Network Load Balancer together with NACLs, if that would help.

Thank You

2 Answers

Using IP sets in WAF is a valid, no-code way to restrict access by IP to the entire Transfer server. As you mentioned, a single IP set allows for up to 10,000 CIDRs. Furthermore, since a simple IP set lookup only incurs a cost of 1 WCU (WAF capacity unit), the 1,500 WCUs included in the base price of a WAF web ACL could accommodate up to 15 million CIDRs, optionally arranged in a large number of IP sets for ease of maintenance.
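To illustrate, here's a minimal boto3 sketch of creating and maintaining such an IP set; the names, region, and CIDRs are placeholders, and note that update_ip_set replaces the entire address list, which is how a scheduled sync job would typically work:

```python
import boto3

# IP sets protecting a regional resource such as an API Gateway stage
# use scope "REGIONAL"; names, region, and CIDRs are illustrative.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

created = wafv2.create_ip_set(
    Name="transfer-allowed-sources",
    Scope="REGIONAL",
    IPAddressVersion="IPV4",
    Addresses=[
        "203.0.113.0/24",   # e.g. an Azure subscription's egress range
        "198.51.100.0/24",  # e.g. a GCP project's egress range
    ],
    Description="Source CIDRs allowed to reach the Transfer custom IdP",
)

# Updates use optimistic locking: fetch the current LockToken first.
# update_ip_set replaces the whole list rather than appending to it.
current = wafv2.get_ip_set(
    Name="transfer-allowed-sources",
    Scope="REGIONAL",
    Id=created["Summary"]["Id"],
)
wafv2.update_ip_set(
    Name="transfer-allowed-sources",
    Scope="REGIONAL",
    Id=created["Summary"]["Id"],
    Addresses=current["IPSet"]["Addresses"] + ["192.0.2.0/24"],
    LockToken=current["LockToken"],
)
```

The IP set's ARN would then be referenced from an IPSetReferenceStatement in one of the web ACL's rules.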

From a security and data confidentiality point of view, however, the approach is limited. Once WAF has allowed a source IP to make a logon attempt, there's no coupling between the client IP address and the specific user who's logging on. That means that to defeat your IP restrictions, exfiltrated credentials for any Transfer user could be used from any of the thousands of IP addresses/ranges permitted by WAF. Also, for practical day-to-day purposes, if a given application had separate credentials for production and non-production transfers, they could easily get mixed up: both sets of credentials could be used equally easily from the production and non-production source IPs, rather than each being tied to the appropriate environment's source IPs.

For far superior security, you could consider storing the permitted IP addresses for each user in their user data. If you're using AWS Secrets Manager to store the user data, you could simply add a custom property for the IPs in the user's JSON object. In the Lambda function called either via API Gateway or directly by the Transfer server, you'd check if the sourceIp parameter supplied by the Transfer server matches one of the CIDRs permitted for that user. This feature is also included in the reference solution provided by AWS, which stores the permitted IPs for each user in the ipv4_allow_list attribute: https://docs.aws.amazon.com/transfer/latest/userguide/custom-idp-toolkit.html
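As a rough sketch of that idea (not the toolkit's actual code), the Lambda could look like the following. It assumes each user's secret is a JSON object with Role, HomeDirectory, Password, and ipv4_allow_list fields, and it uses a hypothetical transfer/{username} secret naming scheme:

```python
import ipaddress
import json

import boto3

secrets = boto3.client("secretsmanager")

def lambda_handler(event, context):
    """Custom identity provider sketch for AWS Transfer Family.

    The Transfer server supplies username, password, protocol, serverId,
    and sourceIp in the event. Returning an empty dict denies the logon.
    """
    username = event["username"]
    source_ip = ipaddress.ip_address(event["sourceIp"])

    try:
        # Hypothetical secret naming scheme; adjust to your own layout.
        secret = secrets.get_secret_value(SecretId=f"transfer/{username}")
        user = json.loads(secret["SecretString"])
    except secrets.exceptions.ResourceNotFoundException:
        return {}  # unknown user -> deny

    # Deny unless the caller's IP falls inside one of the user's CIDRs.
    allowed = user.get("ipv4_allow_list", [])
    if not any(source_ip in ipaddress.ip_network(cidr) for cidr in allowed):
        return {}

    # Password handling is simplified here; the real toolkit also supports
    # SSH public keys and per-user session policies.
    if user.get("Password") != event.get("password"):
        return {}

    return {
        "Role": user["Role"],
        "HomeDirectory": user.get("HomeDirectory", f"/my-bucket/{username}"),
    }
```

With this in place, stolen credentials are useless from any IP address outside that specific user's allow list.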

In the Transfer server's security group and WAF web ACL, you'd continue to allow connections and logons either from the full internet or broadly from the necessary source CIDRs. However, since the per-user IP allow-list validation happens after the WAF check, a bad actor would not only have to obtain credentials for one of your Transfer users to gain access to any data, they'd also have to connect from one of the IP addresses allowed for that specific user. For legitimate users, too, if you allow each application's production credentials only from its production system's IPs and its non-production credentials only from non-production IPs, they couldn't accidentally connect to production data from non-production IPs, or upload production data from the production source IPs into the non-production Transfer user's home folder.

EXPERT
answered a month ago
  • Hi @Leo, thanks for the detailed response.

    Currently, we have a similar solution running in production. We check the source IP the request is made from and match it against our list of IP addresses in the WAF rule. There is overhead in maintaining the WAF rule's IP set, which we have automated as a nightly job that adds any new IP addresses. But currently WAF sits in front of the API Gateway used for custom authorization within the Transfer server, so users are only restricted when they try to log in; we want to restrict the file transfer endpoint even before the user is prompted for a username/password. I am trying the approach below: DNS entry --> (WAF rule in front of the load balancer) Application Load Balancer --> Transfer Family server (within VPC private endpoint)

    Does this approach make sense?

    Thank You again, Kiran

  • Hi @Kiran. Unfortunately, that alternative won't work. ALBs are reverse proxies for HTTP and HTTPS only, so they can't handle SSH (for SFTP), FTPS, or FTP traffic. NLBs can relay TCP, UDP, and TLS-over-TCP traffic, but they don't work with WAF, which is mainly meant for filtering HTTP(S) requests. The only sufficiently scalable option in AWS's service portfolio for controlling access to AWS Transfer endpoints for thousands of source CIDRs would be AWS Network Firewall (see the sketch after these comments): https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html

  • In my own view, one of the major security advantages of AWS Transfer is that it's effectively a fully AWS-managed protocol adapter, rather than a server that stores data or static credentials. This design makes it highly unlikely to be compromised, but even if it were, there'd be no data, static access keys, SFTP users' passwords, or the like to be found there. Credentials would be in Secrets Manager or DynamoDB, data in S3 or EFS, and only some temporary files stored locally. I'd feel quite confident keeping the Transfer server open to the internet, with per-user IP restrictions.
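To make the Network Firewall option mentioned in the comments above concrete, here's a minimal boto3 sketch of a stateful rule group that passes SFTP traffic to the Transfer endpoint's subnet only from the CIDRs in a rule variable; all names, CIDRs, and the capacity value are illustrative:

```python
import boto3

nfw = boto3.client("network-firewall")

# Stateful rule group: pass TCP/22 to the Transfer endpoint subnet only
# from the CIDRs listed in the ALLOWED_SOURCES variable. Names, CIDRs,
# and capacity are placeholders.
nfw.create_rule_group(
    RuleGroupName="transfer-allowed-sources",
    Type="STATEFUL",
    Capacity=10000,
    RuleGroup={
        "RuleVariables": {
            "IPSets": {
                "ALLOWED_SOURCES": {
                    "Definition": ["203.0.113.0/24", "198.51.100.0/24"],
                },
            },
        },
        "RulesSource": {
            "StatefulRules": [
                {
                    "Action": "PASS",
                    "Header": {
                        "Protocol": "TCP",
                        "Source": "$ALLOWED_SOURCES",
                        "SourcePort": "ANY",
                        "Direction": "FORWARD",
                        "Destination": "10.0.1.0/24",  # Transfer endpoint subnet
                        "DestinationPort": "22",
                    },
                    "RuleOptions": [{"Keyword": "sid", "Settings": ["1"]}],
                },
            ],
        },
    },
)
```

The firewall policy would then pair this rule group with default actions that drop traffic the rules don't pass, and the VPC's route tables would send traffic destined for the Transfer endpoint through the firewall's endpoints.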


Hi,

Why don't you simply go with prefix lists in the security group of your file transfer access point(s)?

See https://docs.aws.amazon.com/vpc/latest/userguide/managed-prefix-lists.html
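For illustration, a minimal boto3 sketch of creating a customer-managed prefix list and referencing it from a security group rule (the group ID and CIDRs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Customer-managed prefix list; names and CIDRs are illustrative.
pl = ec2.create_managed_prefix_list(
    PrefixListName="transfer-allowed-sources",
    AddressFamily="IPv4",
    MaxEntries=50,  # each SG rule referencing the list consumes this many rules
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "Azure egress"},
        {"Cidr": "198.51.100.0/24", "Description": "GCP egress"},
    ],
)

# Allow SFTP from the prefix list in the Transfer endpoint's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "PrefixListIds": [{
            "PrefixListId": pl["PrefixList"]["PrefixListId"],
            "Description": "Allowed file transfer sources",
        }],
    }],
)
```

Note the capacity caveat in the comment below, though: each rule referencing the list consumes MaxEntries rules' worth of the security group's quota.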

Also useful: lots of large networks (including the AWS internal network for employees that I use) manage access authorization like yours with such prefix lists.

Best,

Didier

AWS
EXPERT
answered a month ago
  • Prefix lists don't work for thousands of CIDRs. When you reference a prefix list in a security group rule, the rule capacity consumed is multiplied by the maximum size of the prefix list. For example, if the prefix list size is set to 10 CIDRs, a single security group rule referencing the list consumes 10 rules' worth of capacity in the security group. The hard maximum for rules in one direction is 1,000, which isn't even recommended for performance reasons, so multiple thousands of entries aren't technically possible.

  • Thanks @Didier for the recommendation.
