Hello,
Hadoop 3.3.3 introduced a change in YARN (YARN-9608) that keeps nodes where containers ran in a decommissioning state until the application completes. This ensures that local data such as shuffle data isn't lost and you don't need to re-run the job. However, it can also lead to underutilization of resources on clusters, whether or not managed scaling is enabled.
With Amazon EMR releases 6.11.0 and higher, as well as 6.8.1, 6.9.1, and 6.10.1, the value of `yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-applications` is set to false in yarn-site.xml to resolve this issue.
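For reference, the resulting entry in yarn-site.xml looks like the sketch below. On releases that don't set this for you, you could apply the same override yourself, for example through the yarn-site configuration classification when creating the cluster:

```xml
<!-- yarn-site.xml: let the ResourceManager decommission a node once its
     containers finish, instead of holding it until the application completes -->
<property>
  <name>yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-applications</name>
  <value>false</value>
</property>
```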
`yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-applications`
- If true (the default), the resource manager waits for all containers, as well as all applications associated with those containers, to finish before gracefully decommissioning a node.
- If false, the resource manager waits only for containers, not applications, to finish. For map-only jobs, or other jobs in which mappers don't need to serve shuffle data, this allows nodes to be decommissioned as soon as their containers finish rather than when the job is done.
These releases also add the property `yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-app-masters`:
- If false, during graceful decommission, when the resource manager waits for all containers on a node to finish, it will not wait for app master containers to finish. Defaults to true. This property should only be set to false if app master failure is recoverable.
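If your application masters can tolerate being killed and recovered, the corresponding yarn-site.xml entry would look like this sketch (leave it at the default of true otherwise):

```xml
<!-- yarn-site.xml: during graceful decommission, don't wait for app master
     containers; only safe if app master failure is recoverable -->
<property>
  <name>yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-app-masters</name>
  <value>false</value>
</property>
```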
References:
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-6101-release.html
Excellent, thank you so much for this information!