
Questions tagged with Application Load Balancer



Launch Announcement - New ALB enhancements provide options to specify how to process Host header and X-Forwarded-For header

We are happy to announce two enhancements that define how the Application Load Balancer (ALB) processes the *Host* header and the *X-Forwarded-For* header. These options provide additional flexibility in handling HTTP/HTTPS requests and make it easier for customers to migrate their workloads to ALB.

*Background:* AWS customers had asked for flexibility in specifying how ALB handles the Host and X-Forwarded-For headers in HTTP/HTTPS requests. The enhancements are as follows:

*Host Header Enhancement:*

* Currently, ALB modifies the Host header in the incoming HTTP/HTTPS request, appending the listener port before sending it to targets. For example, the `Host: www.amazon.com` header in the HTTP request is modified to `Host: www.amazon.com:8443` before ALB sends it to targets. This remains the default behavior for backward compatibility.
* With this enhancement, when enabled using a new attribute, ALB sends the Host header to the target without any modification. For example, the `Host: www.amazon.com` header in the HTTP request is forwarded to the target as is.

*X-Forwarded-For Header Enhancement:*

* Currently, ALB appends the IP address of the previous hop to the X-Forwarded-For header before forwarding it to targets. This remains the default behavior for backward compatibility.
* With this enhancement, customers can now specify whether ALB should preserve or remove the X-Forwarded-For header before sending requests to the targets.

*Launch Details:*

* Neither enhancement changes the default behavior, and existing ALBs are not affected.
* The enhancements are available using the API and the AWS Console.
* The enhancements are available in all commercial, GovCloud, and China regions. They will be deployed in ADC regions at a later date based on demand.

*Launch Materials:*

* Documentation for the Host header enhancement - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#host-header-preservation
* Documentation for the X-Forwarded-For header enhancement - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/x-forwarded-headers.html#x-forwarded-for

Please give these a try and let customers know. Thank you.
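For reference, a minimal sketch of how these options can be enabled from the AWS CLI, using the documented ALB attribute keys (the load balancer ARN below is a placeholder):

```shell
# Preserve the client's Host header (no listener port appended); off by default.
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef \
  --attributes Key=routing.http.preserve_host_header.enabled,Value=true

# Control X-Forwarded-For processing: append (default), preserve, or remove.
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef \
  --attributes Key=routing.http.xff_header_processing.mode,Value=preserve
```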
0
answers
1
votes
30
views
asked 4 hours ago

How to configure stickiness and autoscaling in an Elastic Beanstalk application

Hello, we have an application running on Elastic Beanstalk that listens for client requests and returns a stream segment. We have two requirements:

1) Client sessions should be sticky (all requests for a given session should go to the same EC2 instance) for a specified time, without any changes on the client side (we can't add cookie handling to the client). As I understand it, the Application Load Balancer supports this, so I enabled stickiness on the load balancer. My understanding is that load-balancer-generated cookies are managed by the load balancer, so we do not need to send the cookie from the client side.

2) Based on CPU utilization, we need to auto scale instances: when CPU load > 80%, scale out by one instance.

Problems:

1) When I send requests from multiple clients on the same IP address, CPU load goes above 80% and a new instance is launched. But after some time I see the CPU load going down. Does this mean that some of these clients are now connected to the new instance and the load is shared? That would mean stickiness is not working, though it is not clear how to test it properly. Sometimes when I stopped the new instance manually, no client got any errors; when I stopped the first instance, all clients got 404 errors for a while. How can I check whether stickiness is working properly?

2) If I get stickiness to work, my understanding is that load will not be shared by the new instance, so average CPU usage will stay the same and autoscaling will keep launching new instances until the max limit. How do I combine stickiness with the autoscaling feature? I set the stickiness duration to 86400 seconds (24 hours) to be on the safe side.

Can someone please guide me on how to configure stickiness and autoscaling the proper way?
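For reference, stickiness and a CPU-based scaling trigger can be set together in an `.ebextensions` config file using the documented Elastic Beanstalk option namespaces; this is a sketch assuming an ALB-backed environment, with illustrative threshold and size values:

```yaml
option_settings:
  # Duration-based stickiness on the default process (ALB target group).
  aws:elasticbeanstalk:environment:process:default:
    StickinessEnabled: "true"
    StickinessLBCookieDuration: "86400"
  # Scale on average CPU: add an instance above 80%, remove one below 30%.
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Statistic: Average
    Unit: Percent
    UpperThreshold: "80"
    LowerThreshold: "30"
  aws:autoscaling:asg:
    MinSize: "1"
    MaxSize: "4"
```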
3
answers
0
votes
34
views
asked a month ago

ECS EC2 instance is not registered to target group

I created an ECS service using EC2 instances, then created an Application Load Balancer and a target group. The task definition for my Docker image uses the following configuration:

```json
{ "ipcMode": null, "executionRoleArn": null, "containerDefinitions": [ { "dnsSearchDomains": null, "environmentFiles": null, "logConfiguration": { "logDriver": "awslogs", "secretOptions": null, "options": { "awslogs-group": "/ecs/onestapp-task-prod", "awslogs-region": "us-east-2", "awslogs-stream-prefix": "ecs" } }, "entryPoint": null, "portMappings": [ { "hostPort": 0, "protocol": "tcp", "containerPort": 80 } ], "cpu": 0, "resourceRequirements": null, "ulimits": null, "dnsServers": null, "mountPoints": [], "workingDirectory": null, "secrets": null, "dockerSecurityOptions": null, "memory": null, "memoryReservation": 512, "volumesFrom": [], "stopTimeout": null, "image": "637960118793.dkr.ecr.us-east-2.amazonaws.com/onestapp-repository-prod:5ea9baa2a6165a91c97aee3c037b593f708b33e7", "startTimeout": null, "firelensConfiguration": null, "dependsOn": null, "disableNetworking": null, "interactive": null, "healthCheck": null, "essential": true, "links": null, "hostname": null, "extraHosts": null, "pseudoTerminal": null, "user": null, "readonlyRootFilesystem": false, "dockerLabels": null, "systemControls": null, "privileged": null, "name": "onestapp-container-prod" } ], "placementConstraints": [], "memory": "1024", "taskRoleArn": null, "compatibilities": [ "EXTERNAL", "EC2" ], "taskDefinitionArn": "arn:aws:ecs:us-east-2:637960118793:task-definition/onestapp-task-prod:25", "networkMode": null, "runtimePlatform": null, "cpu": "1024", "revision": 25, "status": "ACTIVE", "inferenceAccelerators": null, "proxyConfiguration": null, "volumes": [] }
```

The service uses the ALB and the same target group as the ALB. My task is running, and I can access it using the instance's public IP, but the target group has not registered my tasks.
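When tasks use a dynamic host port (`"hostPort": 0`, as in the task definition above), ECS registers targets only when the service itself is created with a load balancer mapping; targets cannot be registered into the target group manually. A sketch of that wiring with the AWS CLI, where the cluster, service, and ARN values are placeholders (the container name matches the task definition in the question):

```shell
# Create the service with the target group attached. ECS then registers each
# task's dynamically assigned host port as a target; with the EC2 launch type
# and dynamic ports, the target group's target type must be "instance".
aws ecs create-service \
  --cluster my-cluster \
  --service-name onestapp-service-prod \
  --task-definition onestapp-task-prod:25 \
  --desired-count 2 \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-tg/abc123,containerName=onestapp-container-prod,containerPort=80"
```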
0
answers
0
votes
3
views
asked 2 months ago

Ingress annotations only for a specific path

Hi, I have this ingress configuration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "oidc-ingress"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=300
    external-dns.alpha.kubernetes.io/hostname: example.com
    # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    alb.ingress.kubernetes.io/auth-type: oidc
    alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate
    alb.ingress.kubernetes.io/auth-idp-oidc: '{"issuer":"https://login.microsoftonline.com/some-id/v2.0","authorizationEndpoint":"https://login.microsoftonline.com/some-id/oauth2/v2.0/authorize","tokenEndpoint":"https://login.microsoftonline.com/some-id/oauth2/v2.0/token","userInfoEndpoint":"https://graph.microsoft.com/oidc/userinfo","secretName":"aws-alb-secret"}'
    # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - pathType: Prefix
            path: /jenkins
            backend:
              service:
                name: jenkins
                port:
                  number: 8080
          - pathType: Prefix
            path: /
            backend:
              service:
                name: apache
                port:
                  number: 80
```

If I `kubectl apply` this `Ingress` config, it applies the `annotations` to all routing rules, which means:

```
/*
/jenkins
/jenkins/*
```

I would like to apply the `OIDC annotations` only to the `Jenkins rules`, meaning:

1. If I open `https://example.com`, it is available to everyone.
2. If I open `https://example.com/jenkins`, it redirects me to the `OIDC auth` page.

I can do this manually through the `AWS console` by removing the `authenticate rule` from `/*` and leaving it only on `/jenkins/*`. However, I would like to achieve this through `Ingress annotations` so I can automate the process. How can I do this? Thanks for your help.
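One commonly documented pattern with the AWS Load Balancer Controller (v2.x) is to split the paths across two `Ingress` resources that share an `alb.ingress.kubernetes.io/group.name`, so they merge into one ALB while the auth annotations live only on the Ingress holding the Jenkins path. A sketch under that assumption, reusing the service names from the question and omitting the scheme/listener/OIDC-IdP annotations for brevity:

```yaml
# Ingress 1: public paths, no auth annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: oidc-group   # merged into one ALB
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: apache
                port:
                  number: 80
---
# Ingress 2: only /jenkins, carrying the OIDC auth annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: oidc-group
    alb.ingress.kubernetes.io/auth-type: oidc
    alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate
    # ...plus the auth-idp-oidc annotation from the original config
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /jenkins
            backend:
              service:
                name: jenkins
                port:
                  number: 8080
```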
2
answers
0
votes
182
views
asked 2 months ago

High-Traffic, Load-Balanced Wordpress Site - Optimal DevOps setup for deployment?

TLDR: I inherited a WordPress site that I now manage. It had a DevOps deployment pipeline that worked when the site was low-to-medium traffic, but the site now consistently gets high traffic and I'm trying to improve the pipeline.

The site uses Lightsail instances and a Lightsail load balancer in conjunction with one RDS database instance and an S3 bucket for hosted media. When I inherited the site, the deployment pipeline from the old developer was: *Scale the site down to one instance, make changes to that one instance, and once changes are complete, clone that updated instance as many times as you need.*

This worked fine when the site mostly ran on only one instance except during peak traffic times. However, we now have 3-5 instances at all times, as even our "off-peak" traffic is high enough to require multiple instances. I'd like to improve the deployment pipeline to allow deploying during peak-traffic times without issues.

I'm worried about updating multiple instances behind the load balancer one by one sequentially, because we have Session Persistence disabled to allow for more evenly distributed load balancing, and I'm worried that a user hopping between instances with different functions.php files will cause issues. Should I just enable session persistence when I want to make updates and sequentially update instances behind the load balancer one by one? Or is there a better-suited solution? Should I move to a container setup?

I'm admittedly a novice with AWS, so any help is greatly appreciated. I'm really just looking for general advice and am confident I can figure out how to implement a suggested best-practice solution. Thanks!
1
answers
0
votes
18
views
asked 2 months ago

LoadBalancer health check fails but instance is not terminating

Hello, I have a load balancer that performs the health checks for the web app/website. I have deployed nothing on my instance (no app/site), so when anyone visits the load balancer URL they see a 502 Bad Gateway error, which is expected. The target group also shows that an instance has failed the health check, but the auto scaling group is not terminating the failed instance and replacing it. Below is the CloudFormation code:

```yaml
AutoScailingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    VPCZoneIdentifier:
      - Fn::ImportValue: !Sub ${EnvironmentName}-PR1
      - Fn::ImportValue: !Sub ${EnvironmentName}-PR2
    LaunchConfigurationName: !Ref AppLaunchConfiguration
    MinSize: 1
    MaxSize: 4
    TargetGroupARNs:
      - Ref: WebAppTargetGroup
AppLoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    SecurityGroups:
      - Ref: ApplicationLoadBalancerSecurityGroup
    Subnets:
      - Fn::ImportValue: !Sub ${EnvironmentName}-PU1
      - Fn::ImportValue: !Sub ${EnvironmentName}-PU2
    Tags:
      - Key: Name
        Value: !Ref EnvironmentName
Listener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref WebAppTargetGroup
    LoadBalancerArn: !Ref AppLoadBalancer
    Port: "80"
    Protocol: HTTP
LoadBalancerListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    Actions:
      - Type: forward
        TargetGroupArn: !Ref WebAppTargetGroup
    Conditions:
      - Field: path-pattern
        Values: [/]
    ListenerArn: !Ref Listener
    Priority: 1
WebAppTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    HealthCheckIntervalSeconds: 10
    HealthCheckPath: /
    HealthCheckProtocol: HTTP
    HealthCheckTimeoutSeconds: 8
    HealthyThresholdCount: 2
    Port: 80
    Protocol: HTTP
    UnhealthyThresholdCount: 5
    VpcId:
      Fn::ImportValue:
        Fn::Sub: "${EnvironmentName}-VPCID"
```
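By default, an Auto Scaling group replaces instances based on EC2 status checks only; for it to act on the target group's health checks, the ASG needs `HealthCheckType: ELB`. A sketch of the relevant fragment (the grace period value is illustrative):

```yaml
AutoScailingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    # Replace instances that fail the ELB/target-group health check,
    # not only EC2 status checks (the default behavior).
    HealthCheckType: ELB
    HealthCheckGracePeriod: 300   # seconds before health checking a new instance
    # ...existing properties (VPCZoneIdentifier, TargetGroupARNs, ...)
```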
1
answers
0
votes
85
views
asked 3 months ago

AWS Load Balancer not reaching LightSail instance

I would like to protect my Lightsail instances with AWS WAF. For that, I need an EC2 load balancer instead of the Lightsail one. I've implemented the following steps (all as the root user):

1. Enable VPC peering in Lightsail, in the corresponding zone, let's say 'Ireland'.
2. The AWS VPC is the default one, in Ireland.
3. Create a target group of type IP Address in the previous default VPC, with network 'Other Private IP Address' and the private address of the Lightsail instance (the instance has an Apache listening on port 80). Checked that the targets are 'Healthy' in the target group.
4. Create a load balancer in the default VPC, with the previously created target group, and with zones 'a' and 'b' of Ireland. Zone 'a' is the zone of the Lightsail instance.
5. In Route 53, create a public hosted zone with the name of my domain (registered directly in Route 53).
6. Create a DNS A record of type 'Alias', linked to 'Alias Application Load Balancer' in region Ireland and pointing to the previously created load balancer (shown for selection with the name of the LB, but with 'dualstack.' appended to it).
6.1. Also tried resolving the LB DNS and creating the DNS A record to point directly to the IP instead of the 'Alias'.

After all these steps, when trying to browse to my domain, I get an "ERR_CONNECTION_TIMED_OUT". Ping to the domain resolves to the same IP as the load balancer DNS; security groups in AWS allow all traffic; there is a route in AWS to the internal network of Lightsail (created automatically when peering VPCs in step 1); ACLs and firewalls allow all traffic; on Lightsail all traffic is allowed as well.

What could I be missing? At this point, with all the steps reviewed, I can't figure out where the issue is.
2
answers
0
votes
38
views
asked 3 months ago

WordPress/Nginx failing under load with ALB, returning 502

Hello, I am testing a WordPress instance (PHP 8) on Elastic Beanstalk, set up with a load balancer and running through Nginx. When I visit the website normally, it works fine and I am able to browse the WordPress instance. I wanted to perform a very simple load test, so I am using artillery (https://www.artillery.io/) as a load/smoke tester. As soon as the load on the server rises, the server starts returning 502 responses. I've taken a look at the logs and I am seeing this in `/var/log/nginx/access.log`:

```
10.0.0.20 - - [08/Feb/2022:20:45:56 +0000] "GET / HTTP/1.1" 499 0 "-" "Artillery (https://artillery.io)" "IP-ADDRESS"
10.0.0.20 - - [08/Feb/2022:20:45:57 +0000] "GET / HTTP/1.1" 499 0 "-" "Artillery (https://artillery.io)" "IP-ADDRESS"
```

The error log (`/var/log/nginx/error.log`) says:

```
2022/02/08 20:45:44 [error] 2882#2882: *1656 connect() to unix:/run/php-fpm/www.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 10.0.0.20, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock:", host: "HOSTNAME.COM"
```

I've been doing some research, and some people suggest it's a PHP issue, others an Nginx issue or a load balancer issue. I'm not sure where to go with this; I've been trying to debug the issue and have tried a few different things. Has anyone run into this? Does anyone have any debugging tips? Any help/guidance would be appreciated.
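The `Resource temporarily unavailable` error on the PHP-FPM socket typically means the FPM pool has no free workers to accept the connection, so Nginx gives up and returns 502. One common mitigation is raising the pool limits in the PHP-FPM pool config; the values below are illustrative and should be sized to the instance's memory, and the file path assumes an Amazon Linux-style layout:

```ini
; /etc/php-fpm.d/www.conf (path may differ per platform)
[www]
pm = dynamic
pm.max_children = 30      ; hard cap on concurrent PHP worker processes
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500     ; recycle workers periodically to limit memory growth
```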
3
answers
0
votes
63
views
asked 5 months ago