Questions tagged with Cost Optimization
Content language: English
Sort by most recent
As part of increasing the AWS security score, I wish to set up a log metric filter and alarm for the following actions:

1. Changes to network gateways
2. Route table changes
3. Changes to Network Access Control Lists (NACLs)
4. Security group changes
5. VPC changes
6. Unauthorized API calls
7. Management Console sign-in without MFA
8. AWS Management Console authentication failures
9. CloudTrail configuration changes
10. IAM policy changes
11. S3 bucket policy changes
12. Disabling or scheduled deletion of customer-created CMKs

There is just one root user. I wish to estimate the cost of this operation. [PFA screenshot of failed controls](/media/postImages/original/IMLIP77JscTuCyktECxGF3sg)
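Each of the twelve controls above maps to one CloudWatch Logs metric filter plus one alarm on the resulting custom metric, so the monthly cost is roughly twelve custom metrics plus twelve standard alarms (commonly cited list prices are about $0.30 per custom metric and $0.10 per standard alarm per month, i.e. on the order of $5/month total, but verify on the current CloudWatch pricing page). As a minimal sketch, assuming a hypothetical CloudTrail log group name and metric namespace, the `put_metric_filter` parameters for one control could be built like this:

```python
# Hypothetical names: log group "CloudTrail/DefaultLogGroup", namespace "CISBenchmark".
def metric_filter_params(name, pattern, log_group="CloudTrail/DefaultLogGroup",
                         namespace="CISBenchmark"):
    """Build kwargs for logs.put_metric_filter() for one control."""
    return {
        "logGroupName": log_group,
        "filterName": name,
        "filterPattern": pattern,
        "metricTransformations": [{
            "metricName": name,
            "metricNamespace": namespace,
            "metricValue": "1",  # emit 1 per matching CloudTrail event
        }],
    }

# Example: unauthorized API calls (item 6 in the list above).
unauthorized = metric_filter_params(
    "UnauthorizedAPICalls",
    '{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
)
# logs_client.put_metric_filter(**unauthorized)
# then cloudwatch.put_metric_alarm(MetricName="UnauthorizedAPICalls", ...) per control
```

The same helper can be reused for the other eleven controls by swapping in the appropriate filter pattern.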
Do filters in direct query mode mean that the raw underlying SQL query gains a WHERE clause? Or does it scan everything and then filter after the underlying data has been read?

I have multiple QuickSight reports and analyses, all using datasets cached in SPICE. From what I understand, this means the queries run against a cached, disconnected copy of the underlying data that is refreshed periodically. This works and it's awesome.

I also have a Timestream table that is large, in the region of terabytes of data. That dataset is obviously too large to import into SPICE, which means I must query it directly. I've imported the dataset and set everything up, but since viewing the data, my Timestream costs over the last few days have shot up immensely. I normally sit around 2-3 USD a day, and on the day I released my report it jumped to 90 USD. The cost breakdown attributes this to the amount of "Scanned bytes" in Timestream.

All of my reports use filters to segment and break down the data, but this obviously isn't working the way it should. If my data source is essentially "select * from table" and I then add a filter on the dataset, is the query sent to the data source "select * from table where column = filter", or does it load all the rows and do some other filtering afterwards? Based on the speed of the reports I assume it's the former, but if so, I need to figure out how to constrain the filters even more to load less data. I have disabled the report for now and my Timestream costs have once again dropped to normal levels. At this point I'm too scared to re-enable it, but people are clamoring for their data :/
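For context on the two behaviors being asked about: as I understand it, direct query mode generates SQL against the source per visual, so dataset and visual filters can end up in the generated WHERE clause, and with Timestream, "Scanned bytes" drops mainly when the pushed-down predicates constrain the time column so partitions can be pruned. The snippet below only illustrates the difference between predicate pushdown and post-scan filtering; it is not QuickSight's actual query generator, and `my_ts_db.my_table` is a hypothetical table:

```python
# Illustration only: a filter "pushed down" into the query lets the source
# engine prune data before scanning, instead of scanning everything first.
def pushed_down(base_query: str, predicates: list[str]) -> str:
    """Compose a base query with filter predicates in the WHERE clause."""
    if not predicates:
        return base_query
    return f"SELECT * FROM ({base_query}) q WHERE " + " AND ".join(predicates)

q = pushed_down(
    "SELECT * FROM my_ts_db.my_table",           # hypothetical Timestream table
    ["time > ago(7d)", "measure_name = 'cpu'"],  # time predicate enables pruning
)
```

If the filters in the report don't include a tight predicate on the time column, the source may still have to scan the full time range even though the rows are filtered afterwards.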
![t2.micro](/media/postImages/original/IMEQAisvnPRha7WAm-gHo6Qw) [The official docs only say t2.micro network performance is "Low to Moderate"](https://aws.amazon.com/ec2/instance-types/), which I find very confusing. I want to know the specific amount: 1 Mbps, 1 Gbps, or something else? I ask because I want to do TCP optimization and tune my Debian server, following [this tutorial](https://cloud.google.com/architecture/tcp-optimization-for-network-performance-in-gcp-and-hybrid) and [this tutorial](https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-linux-ec2).
Hi! Our EFS storage consists mostly of small files, many of which are below the [128 KB minimum requirement](https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html#metadata) to be transitioned to Infrequent Access storage. As a consequence, we're partially excluded from the cost savings that Intelligent-Tiering brings to many other users. AWS regularly improves EFS, though, with for instance the [1-Day Lifecycle Management Policy](https://aws.amazon.com/about-aws/whats-new/2022/11/elastic-file-system-1-day-lifecycle-management-policy-reduce-costs-cold-data-sets/) released a few months ago. Similarly, in 2021 AWS [removed the 128 KB requirement for S3's Intelligent-Tiering](https://www.infoq.com/news/2021/09/s3-efs-intelligent-tiering/). We are wondering if AWS perhaps has plans to remove that requirement for EFS as well at some point. Has anybody heard anything about that? If not, is there a recommended way for us to suggest that improvement to them?
Hello, I am getting into using Amazon Pinpoint to send emails via campaigns, and I have noticed that my biggest cost is the addition of endpoints (proportionally, the cost of sending is close to zero). ![Enter image description here](/media/postImages/original/IMejRSDn-mSVaIIcQHQI3eqQ) I currently load the endpoints via an API call (updateUserEndpoint) every time I send a campaign. Is it possible to reduce and curb these costs by combining Amazon Pinpoint with another service such as Amazon S3 (or others)? Can endpoints be stored on some Amazon service to reduce MTA costs?
I see a lot of pricing details for VPC endpoints, NAT Gateways, and VPC peering. How can I make a good decision in terms of cost optimization? How can I know which one fits my environment in terms of cost?
Hi, I run EMR Serverless jobs at the top of every hour. All the jobs are submitted to the same application with no pre-initialized capacity. Is there any benefit to stopping the application between runs? There are about 30 minutes between runs. Any downsides in cost?
How is TDS (Tax Deducted at Source) deducted at the time of bill payment in AWS, and is there any reimbursement?
I'm using the Pinpoint sendMessage API to send push notifications. Per the Pinpoint documentation, the first 1M notifications are free and each additional 1M notifications costs 1 USD. Sometimes when I send notifications I get different statuses: SUCCESSFUL | THROTTLED | TEMPORARY_FAILURE | PERMANENT_FAILURE | UNKNOWN_FAILURE | OPT_OUT | DUPLICATE. My question is: does Pinpoint count all notifications (requests) regardless of delivery status, or are only some statuses counted?
Hello, I have deployed the Cost Optimization Data Collection Module. The Transit Gateway Lambda is failing with the error below:

[ERROR] Runtime.UserCodeSyntaxError: Syntax error in module 'index': invalid syntax (index.py, line 125)
Traceback (most recent call last):
File "/var/task/index.py", line 125: aws_access_key_id=credentials['AccessKeyId'],
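A syntax error reported on a keyword-argument line like this usually originates just above it: an unclosed parenthesis or a missing comma on the preceding line makes the parser fail when it reaches line 125. As a hedged sketch (the function and placeholder values are hypothetical, not the module's actual code), a well-formed version of that call site builds the session arguments like this:

```python
def session_kwargs(credentials: dict) -> dict:
    """Build boto3.Session keyword arguments from STS assume-role credentials.
    Every argument line ends with a comma and the call closes its parenthesis;
    a missing one on a prior line surfaces as 'invalid syntax' on the line
    after it, which matches the traceback above."""
    return dict(
        aws_access_key_id=credentials["AccessKeyId"],
        aws_secret_access_key=credentials["SecretAccessKey"],
        aws_session_token=credentials["SessionToken"],
    )

kw = session_kwargs({
    "AccessKeyId": "AKIA...",   # placeholder values, not real credentials
    "SecretAccessKey": "secret",
    "SessionToken": "token",
})
```

It may be worth diffing the deployed `index.py` around lines 120-125 against the version in the module's repository to spot a truncated or hand-edited line.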
Are there any sample Cost and Usage Reports from really complex environments available for download?
The idea is to find a way to identify old **alarms** that reference **non-existent resources** and possibly delete them.
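One way to approach this is to list alarms with `cloudwatch.describe_alarms()`, collect the IDs of resources that still exist (e.g. instance IDs from `ec2.describe_instances()`), and flag alarms whose dimensions point at missing resources. A minimal sketch of the cross-referencing step, using plain dicts shaped like the `MetricAlarms` response so it runs without AWS calls (the alarm names and IDs are illustrative):

```python
def stale_alarms(alarms: list[dict], existing_instance_ids: set[str]) -> list[str]:
    """Return names of alarms whose InstanceId dimension no longer exists.
    `alarms` mirrors the shape of cloudwatch.describe_alarms()['MetricAlarms']."""
    stale = []
    for alarm in alarms:
        for dim in alarm.get("Dimensions", []):
            if dim["Name"] == "InstanceId" and dim["Value"] not in existing_instance_ids:
                stale.append(alarm["AlarmName"])
    return stale

alarms = [
    {"AlarmName": "cpu-old",  "Dimensions": [{"Name": "InstanceId", "Value": "i-dead"}]},
    {"AlarmName": "cpu-live", "Dimensions": [{"Name": "InstanceId", "Value": "i-live"}]},
]
print(stale_alarms(alarms, {"i-live"}))  # ['cpu-old']
```

The stale names could then be passed to `cloudwatch.delete_alarms(AlarmNames=...)`; extending the dimension check to other resource types (volumes, load balancers, etc.) follows the same pattern.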