2 Answers
1
Hello, a couple of things:
- For the AutoScalingGroup variable, it would be simpler to pull it from IMDS like the other two variables, by checking the aws:autoscaling:groupName tag value.
- This looks like a userdata script, meaning it's run every time a new instance is launched in the ASG (and thus tries to re-create the scaling policy each time). You should be creating the scaling policy in .ebextensions instead.
- I recommend using Target Tracking instead of simple scaling. It's much easier to set up since you don't have to manually define alarms, and you only have to define a single policy that handles both scale-in and scale-out.
- Unless a unit is defined when the metric is pushed to CloudWatch, you shouldn't include a unit on the alarm; otherwise the alarm will be looking for a non-existent metric (a metric is a unique combination of Namespace, MetricName, [optional] Dimension(s), and [optional] Unit).
- The dimensions on the alarm aren't set up correctly. You want to configure the agent to push a single aggregate metric with just AutoScalingGroupName; you're then scaling off the aggregate values of every instance in the ASG.
As a side note, if you're at the point where you're doing this much customization, it might be time to look into migrating off Elastic Beanstalk and defining everything natively in CloudFormation. You'll have a lot more control, and the more customization you add to a Beanstalk environment, the harder it becomes to keep track of.
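A minimal sketch of the Target Tracking suggestion as an .ebextensions resource, assuming the CloudWatch agent is already publishing the aggregate mem_used_percent metric under CWAgent; the resource name `MemoryTargetTracking` and the target value of 60 are illustrative, not from the original post:

```yaml
Resources:
  # Hypothetical resource name; TargetValue of 60 is an illustrative choice.
  # A single target-tracking policy replaces both the high and low alarms.
  MemoryTargetTracking:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName:
        Ref: AWSEBAutoScalingGroup
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        CustomizedMetricSpecification:
          Namespace: CWAgent
          MetricName: mem_used_percent
          Dimensions:
            - Name: AutoScalingGroupName
              Value:
                Ref: AWSEBAutoScalingGroup
          Statistic: Average
        TargetValue: 60.0
```

Auto Scaling then creates and manages the CloudWatch alarms for both directions itself.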
answered 2 years ago
0
What I've reached so far: in .ebextensions I created two config files. (The problem I have is that the ScalingOnMemoryAlarmHigh and ScalingOnMemoryAlarmLow alarms are stuck in the "Insufficient data" state and I couldn't make them work.)
1- 01_cloudwatch.config to configure the CloudWatch agent:
```yaml
files:
  "/opt/aws/amazon-cloudwatch-agent/bin/config.json":
    mode: "000600"
    owner: root
    group: root
    content: |
      {
        "agent": {
          "metrics_collection_interval": 60,
          "run_as_user": "root"
        },
        "metrics": {
          "append_dimensions": {
            "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
            "InstanceId": "${aws:InstanceId}"
          },
          "aggregation_dimensions": [["AutoScalingGroupName"]],
          "metrics_collected": {
            "mem": {
              "measurement": [
                "mem_used_percent"
              ],
              "metrics_collection_interval": 60
            },
            "disk": {
              "measurement": [
                "used_percent"
              ],
              "metrics_collection_interval": 60,
              "resources": [
                "*"
              ]
            }
          }
        }
      }
```
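One common cause of "Insufficient data" is that writing config.json by itself doesn't (re)start the agent with the new configuration. A hedged sketch, assuming Amazon Linux 2 with the CloudWatch agent preinstalled (the command key name `01_restart_cloudwatch_agent` is an illustrative choice):

```yaml
container_commands:
  # Assumption: the agent binary ships with the platform; this reloads the
  # file written above and restarts the agent on every deployment.
  01_restart_cloudwatch_agent:
    command: >
      /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl
      -a fetch-config -m ec2
      -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
```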
2- 02_scaling.config to create the CloudWatch alarms that trigger the scaling policies:
```yaml
Resources:
  AWSEBCloudwatchAlarmHigh:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmActions: []
  AWSEBCloudwatchAlarmLow:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmActions: []
  ScalingOnMemoryAlarmHigh:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: "Scale up when memory used percent > 80"
      Namespace: "CWAgent"
      MetricName: mem_used_percent
      Dimensions:
        - Name: AutoScalingGroupName
          Value:
            Ref: AWSEBAutoScalingGroup
      Statistic: Average
      Period: 300
      EvaluationPeriods: 1
      Threshold: 80
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - Ref: AWSEBAutoScalingScaleUpPolicy
  ScalingOnMemoryAlarmLow:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: "Scale down when memory used percent < 20"
      Namespace: "CWAgent"
      MetricName: mem_used_percent
      Dimensions:
        - Name: AutoScalingGroupName
          Value:
            Ref: AWSEBAutoScalingGroup
      Statistic: Average
      Period: 300
      EvaluationPeriods: 1
      Threshold: 20
      ComparisonOperator: LessThanThreshold
      AlarmActions:
        - Ref: AWSEBAutoScalingScaleDownPolicy
```
answered 2 years ago
Thanks for your reply. I figured out how to create it in .ebextensions, but I couldn't make the CloudWatch alarm collect the data. Could you check the answer below? I'll post the code snippets I used.
There are two possible issues: either the metric isn't being pushed, or the alarm configuration doesn't exactly match the metric values. Go into your CloudWatch console and check whether the CWAgent namespace is showing up, to confirm whether the metric is being pushed. If it's not, you'll need to troubleshoot the CloudWatch agent. If the metric is showing in the console (i.e., getting pushed), describe the metric via the CLI and check every setting to see which one doesn't exactly match between the metric and the alarm.
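The CLI check mentioned above might look something like this (a sketch assuming the AWS CLI is configured with CloudWatch read permissions; it needs live credentials to run):

```shell
# List the metrics the agent is publishing under the CWAgent namespace;
# compare each dimension name/value against the alarm's Dimensions block.
aws cloudwatch list-metrics --namespace CWAgent

# Narrow down to the aggregate memory metric the alarms expect.
aws cloudwatch list-metrics --namespace CWAgent \
  --metric-name mem_used_percent \
  --dimensions Name=AutoScalingGroupName
```

If the second command returns nothing while the first shows per-instance metrics, the aggregate dimension isn't being published and the alarms will stay in "Insufficient data".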