Force an Auto Scaling group to scale in by terminating Kubernetes pods ungracefully
When I perform a scale-in task, it is common for some Kubernetes pods to get stuck in the Terminating state forever. As a result, the instance also gets stuck in the Terminating:Wait lifecycle state. Is there a way to force the Auto Scaling group to scale in by ungracefully shutting down all pods on the instance after a given timeout?
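For context, what I have in mind is roughly the sketch below, run from whatever handles the termination lifecycle hook: try a graceful drain with a deadline, then force-delete whatever is still stuck, then let the ASG proceed. The node name, hook name, ASG name, and timeout are placeholders I made up, not anything I actually have working.

```shell
#!/usr/bin/env bash
set -euo pipefail

NODE="$1"          # name of the node being terminated (passed in by the hook handler)
INSTANCE_ID="$2"   # EC2 instance ID behind the lifecycle hook
TIMEOUT=300        # seconds to allow for a graceful drain before forcing

# Cordon the node and attempt a graceful drain, but give up after $TIMEOUT.
kubectl cordon "$NODE"
timeout "$TIMEOUT" kubectl drain "$NODE" \
  --ignore-daemonsets --delete-emptydir-data || true

# Force-delete any pods still stuck on the node (skips graceful termination).
kubectl get pods --all-namespaces \
  --field-selector "spec.nodeName=$NODE" \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
while read -r ns pod; do
  kubectl delete pod -n "$ns" "$pod" --grace-period=0 --force
done

# Tell the Auto Scaling group it may now terminate the instance.
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name my-termination-hook \
  --auto-scaling-group-name my-asg \
  --lifecycle-action-result CONTINUE \
  --instance-id "$INSTANCE_ID"
```

Is there a built-in way to get this timeout-then-force behavior, or does it have to be scripted like this?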