Questions tagged with Amazon CloudWatch




Recently we have been seeing that, when opening AWS CloudWatch Logs Insights for the first time or in a new tab/window, the Discovered Fields panel opens on its own. At first we figured it would eventually go away, but after two weeks it is still happening. I've gone through the settings but can't find anything that controls the behavior of the panel. ![Insights](/media/postImages/original/IMAZwsQsbzT0OTQcUCFYg7vw)
1
answers
0
votes
3
views
asked 2 days ago
Hi, we are looking into the best way to monitor our Beanstalk environment as dynamically as possible, meaning that whenever we have a new instance, we would like the metric alarms we are interested in to be set for it automatically. Since our environment was built a long time ago and does not have much automation around it, I have decided to leverage some CDK code and build CloudWatch code that we can run on a schedule to detect differences and add/remove monitoring for new instances accordingly. Another option was to use EventBridge to trigger Lambdas and, based on some SDK code, get the proper monitoring in place. The question is: given the situation above, does .ebextensions or any other Beanstalk feature allow us to set metric alarms from within Beanstalk during deployment (not using the console), or are there other ways, using other parts of the Beanstalk infrastructure, to set those alarms when deploying new application versions? **Update:** After doing some digging into .ebextensions, it looks like we can use custom resources, but is there a way, when setting the alarm, to use instance IDs for the dimensions, so the alarm is set per instance in the environment? Example:

```
MetricName: "RootFilesystemUtil"
Namespace: "AWS/ElasticBeanstalk"
Dimensions:
  - Name: InstanceId
    Value: <ref_all_instances_in_the_env>
```
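The scheduled-job approach described in the question can be sketched in Python with boto3. The helper below only builds the `put_metric_alarm` parameters for a single instance; the alarm name, threshold, and periods are illustrative assumptions, not anything from the question:

```python
def alarm_params(instance_id, threshold=80.0):
    """Build kwargs for cloudwatch.put_metric_alarm() for one instance.

    Assumption: RootFilesystemUtil is reported as a percentage, so the
    80% default threshold is purely a placeholder.
    """
    return {
        "AlarmName": f"RootFilesystemUtil-{instance_id}",
        "Namespace": "AWS/ElasticBeanstalk",
        "MetricName": "RootFilesystemUtil",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

# A scheduled Lambda (or the CDK-generated job mentioned above) would
# enumerate the environment's instances, e.g. via
# elasticbeanstalk.describe_environment_resources(), then call
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params(iid))
# for each instance id, and delete alarms whose instances are gone.
```

This keeps one alarm per instance id, which is the per-instance dimension behavior the update asks about; .ebextensions alone cannot enumerate instance ids at deploy time, so a scheduled reconciler like this is one way to close that gap.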
1
answers
0
votes
14
views
Ziad
asked 2 days ago
I have a task where I'm required to make sure all my GuardDuty logs from multiple accounts are logged to one account using a centralized logging solution. At the moment I'm trying to find a way, via the console or the CLI (or both), to confirm that my GuardDuty logs are centralized in the account I am in. Is there an easy way to confirm this?
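One way to check this from code (a sketch, assuming boto3's GuardDuty client and that a delegated administrator is configured) is to ask a member account who its administrator is:

```python
def centralizing_account(gd_client, detector_id):
    """Return the administrator account id if this account's GuardDuty
    findings flow to a central (administrator) account, else None.

    gd_client is assumed to be boto3.client("guardduty").
    """
    resp = gd_client.get_administrator_account(DetectorId=detector_id)
    admin = resp.get("Administrator", {})
    if admin.get("RelationshipStatus") == "Enabled":
        return admin.get("AccountId")
    return None
```

The CLI equivalent is `aws guardduty get-administrator-account --detector-id <id>` run in each member account; conversely, in the central account, `aws guardduty list-members --detector-id <id>` should list the member accounts.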
1
answers
0
votes
18
views
asked 4 days ago
Afternoon all. I received an email saying I had connections with a few S3 buckets, so I know the 3 buckets involved, but I can't understand how to get more data. All 3 buckets have logging on, so I have an S3 bucket with the logs. The two ways suggested are:

1. Logs Insights. For this it appears I need the actual logs to go to CloudWatch Logs, as I don't see a way of selecting the S3 bucket that holds the logs.
2. CloudTrail Lake. This looks even easier; I thought the doc at https://aws.amazon.com/blogs/mt/using-aws-cloudtrail-lake-to-identify-older-tls-connections-to-aws-service-endpoints/ was the answer, but I am stuck, and it may just be the data store part. There is just that one line, "create a data store", and I did create one; I believe the events should be CloudTrail events and not configuration items. Then for data events I have tried S3 and S3 Access Points (as I am sure it's one of those), and when I copy the sample query for TLS calls I get "invalid query". I even tried other sample queries and all do the same thing: an immediate red X.

The sample query is:

```
SELECT eventSource, COUNT(*) AS numOutdatedTlsCalls
FROM $EDS_ID
WHERE tlsDetails.tlsVersion IN ('TLSv1', 'TLSv1.1')
AND eventTime > '2023-01-17 00:00:00'
GROUP BY eventSource
ORDER BY numOutdatedTlsCalls DESC
```

So any help on the best way to get that info is appreciated.
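One possible cause of the immediate "invalid query" (an assumption, not a confirmed diagnosis) is leaving the `$EDS_ID` placeholder in the statement instead of substituting the event data store's id. A small sketch of building the statement:

```python
def tls_query(event_data_store_id, since="2023-01-17 00:00:00"):
    """Return the sample TLS query with the $EDS_ID placeholder
    replaced by a real event data store id."""
    return (
        "SELECT eventSource, COUNT(*) AS numOutdatedTlsCalls "
        f"FROM {event_data_store_id} "
        "WHERE tlsDetails.tlsVersion IN ('TLSv1', 'TLSv1.1') "
        f"AND eventTime > '{since}' "
        "GROUP BY eventSource ORDER BY numOutdatedTlsCalls DESC"
    )

# The statement can then be submitted outside the console, e.g. with
# boto3.client("cloudtrail").start_query(QueryStatement=tls_query(eds_id))
# and polled with get_query_results().
```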
1
answers
0
votes
18
views
asked 4 days ago
Currently, we run several Lambda functions in AWS. We have recently deployed a new function that is web-facing. This function needs to serve a high volume of end users, thus requiring the use of provisioned concurrency. I have configured provisioned concurrency but cannot see the following metrics:

- ProvisionedConcurrentExecutions
- ProvisionedConcurrencyUtilization
- UnreservedConcurrentExecutions
- ProvisionedConcurrencyInvocations
- ProvisionedConcurrencySpilloverInvocations

I can confirm provisioned concurrency is enabled, and I can see concurrent invocations via the `ConcurrentExecutions` metric.
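One way to see which of those metric names actually exist for the function is to diff a `list_metrics` result against the expected set. A sketch (note that the provisioned-concurrency metrics may be published against the qualified alias or version rather than the bare function name, which is worth checking in the returned dimensions):

```python
# The four per-function provisioned-concurrency metrics from the question;
# UnreservedConcurrentExecutions is account-level, so it is left out here.
EXPECTED_PC_METRICS = {
    "ProvisionedConcurrentExecutions",
    "ProvisionedConcurrencyUtilization",
    "ProvisionedConcurrencyInvocations",
    "ProvisionedConcurrencySpilloverInvocations",
}

def missing_pc_metrics(metrics):
    """Given the combined 'Metrics' lists from
    cloudwatch.list_metrics(Namespace="AWS/Lambda") pages, return the
    expected provisioned-concurrency metric names that are absent."""
    return EXPECTED_PC_METRICS - {m["MetricName"] for m in metrics}
```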
1
answers
0
votes
23
views
asked 5 days ago
I am trying to create an Amazon CloudWatch event that will trigger an email whenever an S3 bucket is created or modified to allow public access. I have created the CloudTrail trail and log stream and am tracking all the S3 event logs. When I try to create a custom event with a pattern to detect S3 buckets with public access, I am not able to fetch any response, and the event doesn't get triggered even when I create a bucket with public access. Can you help me with the custom pattern for this? I have tried GetPublicAccessBlock, PutPublicAccessBlock, etc. as the event type, but no luck. Please suggest accordingly.
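For reference, an EventBridge pattern for this is matched against the CloudTrail record's `eventName`. The sketch below builds such a pattern as a Python dict; the exact event names for bucket-level public-access changes are an assumption here and should be verified against the account's CloudTrail event history before relying on the rule:

```python
import json

def s3_public_access_pattern():
    """EventBridge event pattern matching S3 API calls (recorded by
    CloudTrail) that can open a bucket to public access.

    Assumption: the eventName values below match what CloudTrail logs
    in your account; check your event history to confirm.
    """
    return {
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": [
                "PutBucketPublicAccessBlock",
                "DeleteBucketPublicAccessBlock",
                "PutBucketAcl",
                "PutBucketPolicy",
            ],
        },
    }

# Hypothetical rule creation:
# boto3.client("events").put_rule(
#     Name="s3-public-access",
#     EventPattern=json.dumps(s3_public_access_pattern()))
```

The rule's target (an SNS topic for the email) is configured separately with `put_targets`.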
1
answers
0
votes
14
views
asked 5 days ago
I have deployed an AutoGluon model and invoke the SageMaker endpoint from a Lambda function. I keep receiving this error in CloudWatch. Any help will really be appreciated... ![cloudwatch](/media/postImages/original/IMm2JGPoxcSuGm5Lew5ZaQTA)
0
answers
0
votes
10
views
asked 8 days ago
Hi team, I am planning to migrate one of my customers to AWS Fargate, so we want to set up logging for it as well and store all the logs in CloudWatch. I can see we have two options in Fargate: either use the default awslogs log driver, or use AWS FireLens to gather logs. I have read all the documentation but unfortunately am still not able to figure out which option to use and when. Can someone also advise on the cost side: how do the costs compare between using the awslogs driver and using AWS FireLens to send logs to CloudWatch in the same account? (I am looking for an easy, efficient, and cost-effective option.) Can someone please advise?
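For comparison, the simpler of the two options, the awslogs driver, is just a `logConfiguration` block on the container definition. A sketch of that fragment as a Python dict (the group, region, and prefix are placeholders, not from the question):

```python
def awslogs_log_configuration(group, region, stream_prefix):
    """Container-level logConfiguration fragment for the awslogs
    driver in an ECS/Fargate task definition."""
    return {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": group,
            "awslogs-region": region,
            "awslogs-stream-prefix": stream_prefix,
        },
    }
```

FireLens instead adds a Fluent Bit (or Fluentd) sidecar container, which buys routing and filtering flexibility at the cost of the sidecar's share of the task's vCPU and memory; the CloudWatch Logs ingestion charge itself is per GB regardless of which path delivers the logs.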
2
answers
0
votes
28
views
asked 8 days ago
**I used the CloudWatch API to get the size of a bucket, but the result is different from the bucket's AWS metrics dashboard.** **Example:** I uploaded some amount of data to the bucket manually and checked the bucket size after a few days, and the API call and the metrics dashboard show different numbers. I'm getting a 99 GB bucket size on the metrics dashboard, but the API call shows 120 GB, which is weird; I don't understand how this happened. Also, how will AWS charge us: based on the data size shown on the metrics dashboard, or based on the result fetched by the CloudWatch API call? CloudWatch CLI command:

```
$ aws cloudwatch get-metric-statistics --namespace "AWS/S3" --start-time 2023-01-14 --end-time 2023-01-15 --metric-name BucketSizeBytes --period 3600 --statistics Average --unit Bytes --dimensions Name=BucketName,Value=MyBucketName Name=StorageType,Value=StandardStorage
```
1
answers
0
votes
31
views
asked 8 days ago
I wrote a Python Shell job on AWS Glue and it is throwing an "Out of Memory" error. I added print() calls to view, in the CloudWatch logs, the output of the lines that execute successfully, but I cannot see the output in CloudWatch, neither in the error logs nor in the output logs. I am also not able to see which line's execution is causing the "Out of Memory" error, even in the error logs. The only logs I can see are those for the installation of Python modules. I have tried running the Glue job multiple times but have never been able to see the things mentioned above. Can someone help me out here?
1
answers
0
votes
11
views
akc_adi
asked 9 days ago
Hello mates, I am working on observability. I have a Windows Server 2016, I installed a web application that produces logs. To do the observability, I turned to CloudWatch, to visualize logs and metrics. I installed a CloudWatch agent on the Windows server. Here is the configuration file: ``` { "agent": { "metrics_collection_interval": 5, "logfile": "C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\Logs\\amazon-cloudwatch-agent.log", "region": "eu-central-1", "debug": true }, "logs": { "logs_collected": { "files": { "collect_list": [ { "file_path": "C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\Logs\\amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent-group-log.log", "log_stream_name": "amazon-cloudwatch-agent-stream-log.log", "timezone": "UTC" }, { "file_path": "C:\\Users\\michael.ranivo\\Docuements\\Monitoring\\Middleware\\questions.txt", "log_group_name": "test-middleware-group-logs", "log_stream_name": "test-middleware-stream-logs", "timezone":"Local" } ] } }, "force_flush_interval" : 5 }, "metrics": { "metrics_collected": { "namespace": "test-middleware-metrics", "statsd": {}, "Processor": { "measurement": [ {"name": "% Idle Time", "rename": "CPU_IDLE", "unit": "Percent"}, "% Interrupt Time", "% User Time", "% Processor Time" ], "resources": [ "*" ], "append_dimensions": { "d1": "win_foo", "d2": "win_bar" } }, "LogicalDisk": { "measurement": [ {"name": "% Idle Time", "unit": "Percent"}, {"name": "% Disk Read Time", "rename": "DISK_READ"}, "% Disk Write Time" ], "resources": [ "*" ] }, "Memory": { "metrics_collection_interval": 5, "measurement": [ "Available Bytes", "Cache Faults/sec", "Page Faults/sec", "Pages/sec" ], "append_dimensions": { "d3": "win_bo" } }, "Network Interface": { "metrics_collection_interval": 5, "measurement": [ "Bytes Received/sec", "Bytes Sent/sec", "Packets Received/sec", "Packets Sent/sec" ], "resources": [ "*" ], "append_dimensions": { "d3": "win_bo" } }, "System": { "measurement": [ "Context Switches/sec", 
"System Calls/sec", "Processor Queue Length" ], "append_dimensions": { "d1": "win_foo", "d2": "win_bar" } } }, "append_dimensions": { "ImageId": "${aws:ImageId}", "InstanceId": "${aws:InstanceId}", "InstanceType": "${aws:InstanceType}", "AutoScalingGroupName": "${aws:AutoScalingGroupName}" }, "aggregation_dimensions" : [["ImageId"], ["InstanceId", "InstanceType"], ["d1"],[]] } } ``` When I launch the agent, with this command: ``` & "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m onPremise -s -c file:"C:\\ProgramData\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent.json" ``` The agent launches well but when I look at the agent’s logs I have this: ``` 2023/01/19 16:56:29 I! Config has been translated into TOML C:\ProgramData\Amazon\AmazonCloudWatchAgent\\amazon-cloudwatch-agent.toml 2023/01/19 16:56:29 D! toml config [agent] collection_jitter = "0s" debug = true flush_interval = "1s" flush_jitter = "0s" hostname = "" interval = "5s" logfile = "C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\Logs\\amazon-cloudwatch-agent.log" logtarget = "lumberjack" metric_batch_size = 1000 metric_buffer_limit = 10000 omit_hostname = false precision = "" quiet = false round_interval = false [inputs] [[inputs.logfile]] destination = "cloudwatchlogs" file_state_folder = "C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\Logs\\state" [[inputs.logfile.file_config]] file_path = "C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\Logs\\amazon-cloudwatch-agent.log" from_beginning = true log_group_name = "server-perso-amazon-cloudwatch-agent-group-log.log" log_stream_name = "server-perso-amazon-cloudwatch-agent-stream-log.log" pipe = false retention_in_days = -1 timezone = "UTC" [[inputs.logfile.file_config]] file_path = "C:\\Users\\leka\\Documents\\tests.txt" from_beginning = true log_group_name = "server-perso-test-middleware-group-logs" log_stream_name = "server-perso-test-middleware-stream-logs" pipe = false retention_in_days = -1 
timezone = "LOCAL" [inputs.logfile.tags] metricPath = "logs" [[inputs.statsd]] interval = "10s" parse_data_dog_tags = true service_address = ":8125" [inputs.statsd.tags] "aws:AggregationInterval" = "60s" metricPath = "metrics" [[inputs.win_perf_counters]] DisableReplacer = true [[inputs.win_perf_counters.object]] Counters = ["% Idle Time", "% Disk Read Time", "% Disk Write Time"] Instances = ["*"] Measurement = "LogicalDisk" ObjectName = "LogicalDisk" WarnOnMissing = true [inputs.win_perf_counters.tags] "aws:StorageResolution" = "true" metricPath = "metrics" [[inputs.win_perf_counters]] DisableReplacer = true interval = "5s" [[inputs.win_perf_counters.object]] Counters = ["Available Bytes", "Cache Faults/sec", "Page Faults/sec", "Pages/sec"] Instances = ["------"] Measurement = "Memory" ObjectName = "Memory" WarnOnMissing = true [[inputs.win_perf_counters.object]] Counters = ["Bytes Received/sec", "Bytes Sent/sec", "Packets Received/sec", "Packets Sent/sec"] Instances = ["*"] Measurement = "Network Interface" ObjectName = "Network Interface" WarnOnMissing = true [inputs.win_perf_counters.tags] "aws:StorageResolution" = "true" d3 = "win_bo" metricPath = "metrics" [[inputs.win_perf_counters]] DisableReplacer = true [[inputs.win_perf_counters.object]] Counters = ["% Idle Time", "% Interrupt Time", "% User Time", "% Processor Time"] Instances = ["*"] Measurement = "Processor" ObjectName = "Processor" WarnOnMissing = true [[inputs.win_perf_counters.object]] Counters = ["Context Switches/sec", "System Calls/sec", "Processor Queue Length"] Instances = ["------"] Measurement = "System" ObjectName = "System" WarnOnMissing = true [inputs.win_perf_counters.tags] "aws:StorageResolution" = "true" d1 = "win_foo" d2 = "win_bar" metricPath = "metrics" [outputs] [[outputs.cloudwatch]] force_flush_interval = "60s" namespace = "server-perso-test-middleware-metrics" profile = "default" region = "eu-central-1" rollup_dimensions = [["ImageId"], ["InstanceId", "InstanceType"], ["d1"], 
[]] shared_credential_file = "C:\\Users\\leka\\.aws\\credentials" tagexclude = ["host", "metricPath"] [[outputs.cloudwatch.metric_decoration]] category = "LogicalDisk" name = "% Idle Time" unit = "Percent" [[outputs.cloudwatch.metric_decoration]] category = "LogicalDisk" name = "% Disk Read Time" rename = "DISK_READ" [[outputs.cloudwatch.metric_decoration]] category = "Processor" name = "% Idle Time" rename = "CPU_IDLE" unit = "Percent" [outputs.cloudwatch.tagpass] metricPath = ["metrics"] [[outputs.cloudwatchlogs]] force_flush_interval = "5s" log_stream_name = "wind" profile = "default" region = "eu-central-1" shared_credential_file = "C:\\Users\\leka\\.aws\\credentials" tagexclude = ["metricPath"] [outputs.cloudwatchlogs.tagpass] metricPath = ["logs"] [processors] [[processors.ec2tagger]] ec2_instance_tag_keys = ["aws:autoscaling:groupName"] ec2_metadata_tags = ["ImageId", "InstanceId", "InstanceType"] profile = "default" refresh_interval_seconds = "0s" shared_credential_file = "C:\\Users\\leka\\.aws\\credentials" [processors.ec2tagger.tagpass] metricPath = ["metrics"] 2023-01-19T15:56:29Z I! Starting AmazonCloudWatchAgent 1.247357.0 2023-01-19T15:56:29Z I! AWS SDK log level not set 2023-01-19T15:56:29Z I! Loaded inputs: logfile statsd win_perf_counters (3x) 2023-01-19T15:56:29Z I! Loaded aggregators: 2023-01-19T15:56:29Z I! Loaded processors: ec2tagger 2023-01-19T15:56:29Z I! Loaded outputs: cloudwatch cloudwatchlogs 2023-01-19T15:56:29Z I! Tags enabled: host=wind 2023-01-19T15:56:29Z I! [agent] Config: Interval:5s, Quiet:false, Hostname:"wind", Flush Interval:1s 2023-01-19T15:56:29Z D! [agent] Initializing plugins 2023-01-19T15:56:29Z I! [processors.ec2tagger] ec2tagger: Check EC2 Metadata. 2023-01-19T15:56:29Z D! Successfully created credential sessions 2023-01-19T15:56:29Z I! [logagent] starting 2023-01-19T15:56:29Z I! [logagent] found plugin cloudwatchlogs is a log backend 2023-01-19T15:56:29Z I! 
[logagent] found plugin logfile is a log collection 2023-01-19T15:56:30Z D! [logagent] open file count, 0 2023-01-19T15:56:31Z D! [logagent] open file count, 0 2023-01-19T15:56:32Z D! [logagent] open file count, 0 2023-01-19T15:56:33Z D! [logagent] open file count, 0 2023-01-19T15:56:34Z D! [logagent] open file count, 0 2023-01-19T15:56:35Z D! [logagent] open file count, 0 2023-01-19T15:56:36Z D! [logagent] open file count, 0 2023-01-19T15:56:37Z D! [logagent] open file count, 0 2023-01-19T15:56:38Z D! [logagent] open file count, 0 2023-01-19T15:56:39Z D! [logagent] open file count, 0 2023-01-19T15:56:40Z D! [logagent] open file count, 0 2023-01-19T15:56:41Z D! [logagent] open file count, 0 2023-01-19T15:56:42Z D! [logagent] open file count, 0 2023-01-19T15:56:43Z D! [logagent] open file count, 0 2023-01-19T15:56:44Z D! [logagent] open file count, 0 2023-01-19T15:56:45Z D! [logagent] open file count, 0 2023-01-19T15:56:46Z D! [logagent] open file count, 0 2023-01-19T15:56:47Z D! [logagent] open file count, 0 2023-01-19T15:56:48Z D! [logagent] open file count, 0 2023-01-19T15:56:49Z D! [logagent] open file count, 0 2023-01-19T15:56:50Z D! [logagent] open file count, 0 2023-01-19T15:56:51Z D! [logagent] open file count, 0 2023-01-19T15:56:52Z D! [logagent] open file count, 0 2023-01-19T15:56:54Z D! [logagent] open file count, 0 2023-01-19T15:56:54Z D! [logagent] open file count, 0 2023-01-19T15:56:55Z D! [logagent] open file count, 0 2023-01-19T15:56:56Z D! [logagent] open file count, 0 2023-01-19T15:56:57Z D! [logagent] open file count, 0 2023-01-19T15:56:58Z D! [logagent] open file count, 0 2023-01-19T15:56:59Z D! [logagent] open file count, 0 2023-01-19T15:56:59Z I! CWAGENT_LOG_LEVEL is set to "DEBUG" 2023-01-19T15:57:00Z D! [logagent] open file count, 0 2023-01-19T15:57:01Z D! [logagent] open file count, 0 2023-01-19T15:57:02Z D! [logagent] open file count, 0 2023-01-19T15:57:03Z D! [logagent] open file count, 0 2023-01-19T15:57:04Z D! 
[logagent] open file count, 0 2023-01-19T15:57:05Z D! [logagent] open file count, 0 2023-01-19T15:57:06Z D! [logagent] open file count, 0 2023-01-19T15:57:07Z D! [logagent] open file count, 0 2023-01-19T15:59:18Z E! Failed to get credential from session: NoCredentialProviders: no valid providers in chain caused by: EnvAccessKeyNotFound: failed to find credentials in the environment. SharedCredsLoad: failed to load profile, . EC2RoleRequestError: no EC2 instance role found caused by: RequestError: send request failed caused by: Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: connectex: Une tentative de connexion a échoué car le parti connecté n’a pas répondu convenablement au-delà d’une certaine durée ou une connexion établie a échoué car l’hôte de connexion n’a pas répondu. 2023-01-19T15:59:18Z D! [logagent] open file count, 0 2023-01-19T15:59:19Z D! [logagent] open file count, 0 2023-01-19T15:59:20Z D! [logagent] open file count, 0 2023-01-19T15:59:21Z D! [logagent] open file count, 0 2023-01-19T15:59:22Z D! [logagent] open file count, 0 2023-01-19T15:59:23Z D! [logagent] open file count, 0 2023-01-19T15:59:24Z D! [logagent] open file count, 0 2023-01-19T15:59:25Z E! [processors.ec2tagger] ec2tagger: Unable to retrieve EC2 Metadata. This plugin must only be used on an EC2 instance. 2023-01-19T15:59:25Z E! [telegraf] Error running agent: could not initialize processor processors.ec2tagger: EC2MetadataRequestError: failed to get EC2 instance identity document caused by: RequestError: send request failed caused by: Get "http://169.254.169.254/latest/dynamic/instance-identity/document": context deadline exceeded (Client.Timeout exceeded while awaiting headers) 2023/01/19 16:59:25 E! Error when starting Agent, Error is exit status 1 ``` I removed the metric part from the configuration file and this worked. I have no idea how my agent cloudwatch can send the metrics to cloudwatch. Do you have any idea? 
Thank you in advance.
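The failure in the log is consistent with the metrics-level `append_dimensions` block (`${aws:InstanceId}` etc.): those values generate the ec2tagger processor, which queries EC2 instance metadata at 169.254.169.254 and therefore can only work on an EC2 host, not on premises. A sketch of stripping that key from a loaded config (an assumption about the fix, not a confirmed one):

```python
def strip_ec2_metadata_dimensions(cfg):
    """Remove the metrics-level append_dimensions that rely on
    ${aws:...} EC2 metadata; on an on-premise server the generated
    ec2tagger processor cannot reach the metadata endpoint."""
    cfg.get("metrics", {}).pop("append_dimensions", None)
    return cfg
```

After editing the JSON this way, re-run the fetch-config command from the question. The per-plugin append_dimensions (d1/d2/d3) do not involve EC2 metadata and can stay.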
1
answers
0
votes
28
views
asked 9 days ago
I followed the instructions at https://aws.amazon.com/blogs/containers/introducing-amazon-cloudwatch-container-insights-for-amazon-eks-fargate-using-aws-distro-for-opentelemetry/ to deploy Container Insights on EKS Fargate, but there is nothing in the CloudWatch -> Container Insights dashboard. Is it supported on EKS Fargate? I also tried to deploy the CloudWatch agent for Prometheus on EKS Fargate by following https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-Setup.html. I still could not see anything in the CloudWatch -> Container Insights dashboard; it says "You have not enabled insights on your containers".
1
answers
0
votes
23
views
Julie
asked 9 days ago