I wanted to address your specific question about how traffic is distributed across targets. From the documentation (linked HERE), we see the following:
...For TCP traffic, the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, destination port, and TCP sequence number. The TCP connections from a client have different source ports and sequence numbers, and can be routed to different targets. Each individual TCP connection is routed to a single target for the life of the connection.
For UDP traffic, the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, and destination port. A UDP flow has the same source and destination, so it is consistently routed to a single target throughout its lifetime. Different UDP flows have different source IP addresses and ports, so they can be routed to different targets...
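To make the flow-hash behavior concrete, here is a minimal Python sketch. This is not AWS's actual algorithm (the real hash is internal to the NLB); it just illustrates how hashing the 5-tuple (plus the TCP sequence number) pins each connection to one target for its lifetime, while new connections can land elsewhere:

```python
import hashlib

TARGETS = ["target-a", "target-b", "target-c"]  # hypothetical target group

def pick_target(proto, src_ip, src_port, dst_ip, dst_port, seq=0):
    """Toy stand-in for the NLB flow hash: the same 5-tuple (plus TCP
    sequence number) always maps to the same target."""
    key = f"{proto}|{src_ip}|{src_port}|{dst_ip}|{dst_port}|{seq}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return TARGETS[digest % len(TARGETS)]

# A long-lived connection stays pinned to a single target...
print(pick_target("tcp", "10.0.0.5", 50123, "192.0.2.10", 443, seq=1111))
# ...while a new connection from the same client (different source port
# and sequence number) may hash to a different target.
print(pick_target("tcp", "10.0.0.5", 50999, "192.0.2.10", 443, seq=2222))
```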
Depending on the timeframe in which you are examining your traffic, you could very well see an imbalanced distribution, based on the number of connections and how they map through the flow hash algorithm described above. Have you tried testing with a larger group of source IPs against your NLB? In my experience, as more unique flows are established, this "imbalance" is rectified.
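One rough way to see this effect is to simulate flow counts with the same toy hash as above (purely illustrative, with made-up IP ranges): a handful of flows can be visibly skewed, but as the number of unique flows grows, the per-target counts converge toward even.

```python
import hashlib
import random
from collections import Counter

TARGETS = ["target-a", "target-b", "target-c"]  # hypothetical target group

def pick_target(src_ip, src_port):
    key = f"tcp|{src_ip}|{src_port}|192.0.2.10|443".encode()
    return TARGETS[int(hashlib.sha256(key).hexdigest(), 16) % len(TARGETS)]

def distribution(n_flows):
    """Count how n_flows random client flows land across the targets."""
    return dict(Counter(
        pick_target(f"10.0.{random.randrange(256)}.{random.randrange(256)}",
                    random.randrange(1024, 65535))
        for _ in range(n_flows)
    ))

print(distribution(5))      # few flows: often noticeably skewed
print(distribution(50000))  # many flows: close to even thirds
```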
Do you have stickiness enabled on the TG?
Using sticky sessions can lead to an uneven distribution of connections and flows, which might impact the availability of your targets. For example, all clients behind the same NAT device have the same source IP address. Therefore, all traffic from these clients is routed to the same target.
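If you want to verify the setting from code rather than the console, a quick boto3 check of the target group's attributes would look roughly like this (the ARN below is a placeholder; substitute your own target group):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN; substitute your own target group.
resp = elbv2.describe_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "targetgroup/my-nlb-tg/abcdef1234567890"
)

# NLB source-IP stickiness is controlled by the stickiness.enabled attribute.
for attr in resp["Attributes"]:
    if attr["Key"] == "stickiness.enabled":
        print("stickiness.enabled =", attr["Value"])
```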
No, it isn't. I ended up restarting the entire service, which broke the long-lived connections; the distribution is more even now.
Yes, I think the key was to restart the services to give the clients an opportunity to establish new connections.