Questions tagged with High Performance Compute

ElastiCache vertical scale-up strange behaviour

Hi community! In my application, ElastiCache (Cluster mode disabled) is used in two daily scenarios:

1. Intense usage, for about 3 hours, during which we need improved network performance and run on cache.m6g.2xlarge.
2. Light usage, for the rest of the day, for which a cache.m6g.large would be more than enough.

We currently run the 2xlarge 24/7, but it would be nice to scale up vertically for the intensive hours and scale back down afterwards. However, when we scale up (large ⇾ 2xlarge) right before the heavy process, the instance does not behave the same as when we don't scale (i.e. keep the 2xlarge for the whole day). For comparison, the first graph shows Network Bytes In when there is a scale-up right before the process, and the second shows the same metric when there isn't:

![With scale up, from cache.m6g.large to cache.m6g.2xlarge, reaching a max of 13Gb per minute](/media/postImages/original/IMox_zRCrAQyqlKEXGQo_Ixg)

With scale-up, from cache.m6g.large to cache.m6g.2xlarge, reaching a max of 13Gb per minute

![No scale up, instance cache.m6g.2xlarge reaches a max of 24Gb per minute](/media/postImages/original/IMkONubqtMS2SW_dQK_FAOAQ)

No scale-up, reaches a max of 24Gb per minute

Note that the cache process only starts after the cluster Status is set to available. This drop in the Network Bytes In rate shouldn't be happening, and it makes the scaling option impracticable for us. What is the point of offering an online scaling feature if it does not perform as it should after the scaling? Has anyone experienced something similar, and do you know of any alternatives to accomplish our goal of providing the performance only during the cache hours while keeping our costs reasonable? Thanks.
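For reference, here is a minimal boto3 sketch of the scale-up → wait-for-available → scale-down cycle described above. The replication group ID is a placeholder, and this only illustrates the online scaling workflow being discussed; it does not address the reduced Network Bytes In the asker observed.

```python
import time

import boto3

elasticache = boto3.client("elasticache")

# Placeholder ID -- substitute your own replication group.
REPLICATION_GROUP_ID = "my-redis-group"


def scale_to(node_type: str) -> None:
    """Request an online vertical scale and wait until the group is available again."""
    elasticache.modify_replication_group(
        ReplicationGroupId=REPLICATION_GROUP_ID,
        CacheNodeType=node_type,
        ApplyImmediately=True,
    )
    # Poll the replication group status; start the heavy workload only once
    # it reports "available" again, as the question describes.
    while True:
        group = elasticache.describe_replication_groups(
            ReplicationGroupId=REPLICATION_GROUP_ID
        )["ReplicationGroups"][0]
        if group["Status"] == "available":
            return
        time.sleep(30)


scale_to("cache.m6g.2xlarge")   # right before the ~3 hour intense window
# ... run the cache-heavy process ...
scale_to("cache.m6g.large")     # back down for the rest of the day
```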
1 answer · 1 vote · 36 views · asked 19 days ago

AWS Batch requesting more vCPUs than tasks require

Hi,

We have an AWS Batch compute environment set up to use EC2 Spot Instances, with no limits on instance type, and with the `SPOT_CAPACITY_OPTIMIZED` allocation strategy. We submitted a task requiring 32 vCPUs and 58000 MB of memory (2 GB below the minimum amount of memory on the smallest 32-vCPU instance size, c3.8xlarge, just to leave a bit of headroom), which is reflected in the job status page. We expected to receive an instance with 32 vCPUs and >64 GB of memory, but received an `r4.16xlarge` with 64 vCPUs and 488 GB of memory.

An `r4.16xlarge` is rather oversized for the single task in the queue, and our task can't take advantage of the extra cores, because we pin the processes to the specified number of cores so that multiple tasks scheduled on the same host don't contend over CPU. We had no other tasks in the queue, no currently-running compute instances, and no desired/minimum capacity set on the compute environment before this task was submitted.

The Auto Scaling history shows:

`a user request update of AutoScalingGroup constraints to min: 0, max: 36, desired: 36 changing the desired capacity from 0 to provide the desired capacity of 36`

Where did this 36 come from? Surely it should be 32 to match our task? I'm aware that the docs say:

`However, AWS Batch might need to exceed maxvCpus to meet your capacity requirements. In this event, AWS Batch never exceeds maxvCpus by more than a single instance.`

But we're concerned that once we start scaling up, each task will be erroneously requested with 4 extra vCPUs. My guess is that what happened in this case is down to the `SPOT_CAPACITY_OPTIMIZED` allocation strategy:

* Batch probably queried for the best available host to meet our 32-vCPU requirement and got the answer c4.8xlarge, which has 36 cores.
* Batch then told the Auto Scaling group to scale to 36 cores, expecting to get a c4.8xlarge from the Spot Instance request.
* The Spot allocation strategy is currently set to `SPOT_CAPACITY_OPTIMIZED`, which prefers instances that are less likely to be killed (rather than preferring the cheapest or best-fitting ones).
* The Spot request looked at the availability of c4.8xlarge, decided it was too likely to be killed under the `SPOT_CAPACITY_OPTIMIZED` allocation strategy, and substituted the most-available host that matched the 36-core requirement set by Batch, which turned out to be an oversized 64-vCPU r5 instead of the better-fitting-for-the-task 32- or 48-vCPU r5.

But the above implies that Batch itself doesn't follow the same logic as `SPOT_CAPACITY_OPTIMIZED`, and instead requests the specs of the "best fit" host even if that host will not be provided by the Spot request, resulting in potentially significantly oversized hosts. Alternatively, the 64-vCPU r5 happened to have better availability than the 48- or 32-vCPU r5, but I don't see how that would be possible, since the 64-vCPU r5 is just twice the 32-vCPU one, and these are virtualised hosts, so you would expect the availability of the 64-vCPU size to be half that of the 32-vCPU one.

Can anyone confirm whether either of my guesses is correct, whether I'm thinking about this the wrong way, or whether we missed a configuration setting? Thanks!
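For illustration only, here is a hedged sketch of one possible mitigation (not a confirmed explanation of the 36-vCPU request): constraining the compute environment to instance types that actually fit the 32-vCPU jobs, and/or using the `BEST_FIT_PROGRESSIVE` allocation strategy instead of `SPOT_CAPACITY_OPTIMIZED`. All names, subnets and role ARNs below are placeholders.

```python
import boto3

batch = boto3.client("batch")

# Every identifier below is a placeholder, not a real resource.
batch.create_compute_environment(
    computeEnvironmentName="spot-32vcpu-ce",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "SPOT",
        # BEST_FIT_PROGRESSIVE favours the lowest-priced types that fit the job,
        # instead of optimizing purely for Spot capacity.
        "allocationStrategy": "BEST_FIT_PROGRESSIVE",
        "minvCpus": 0,
        "maxvCpus": 256,
        # Limiting the list to 32-vCPU sizes stops Batch/Spot from substituting
        # a much larger host for a 32-vCPU, 58000 MB job.
        "instanceTypes": ["m5.8xlarge", "r5.8xlarge"],
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)
```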
0 answers · 0 votes · 21 views · asked a month ago