
How can I turn off Safemode for the NameNode service on my Amazon EMR cluster?

The NameNode service goes into Safemode when I try to run an Apache Hadoop or Apache Spark job on an Amazon EMR cluster. I tried turning Safemode off, but it comes back on immediately. I want to get the NameNode out of Safemode.

Short description

When running an Apache Hadoop or Apache Spark job on an Amazon EMR cluster, you might receive one of the following error messages:

"Cannot create file/user/test.txt._COPYING_. Name node is in safe mode."

"org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /user/hadoop/.sparkStaging/application_15xxxxxxxx_0001. Name node is in safe mode. It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off. NamenodeHostName:ip-xxx-xx-xx-xx.ec2.internal"

Safemode for the NameNode is a read-only mode for the Hadoop Distributed File System (HDFS) cluster. In Safemode, you can't make any modifications to the file system or blocks. After the DataNodes report that most file system blocks are available, the NameNode automatically leaves Safemode. However, the NameNode might enter Safemode for the following reasons:

  • Available space is less than the amount of space that's required for the NameNode storage directory. The required amount is defined in the dfs.namenode.resource.du.reserved parameter.
  • The NameNode can't load the FsImage and EditLog into memory.
  • The NameNode didn't receive the block report from the DataNode.
  • Some nodes in the cluster might be down, so the blocks on those nodes aren't available.
  • Some blocks might be corrupted.

Check for the root cause of the issue in the NameNode log location /var/log/hadoop-hdfs/.

Resolution

Before leaving Safemode, confirm that you know and understand why the NameNode is stuck in Safemode. Review the status of all DataNodes and the NameNode logs.
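
For example, you can check the current Safemode status, get a summary of DataNode health, and find the most recent NameNode logs with commands similar to the following. Run them on the primary node; the log directory shown is the default Amazon EMR location:

# Check whether the NameNode is in Safemode
hdfs dfsadmin -safemode get
# Review DataNode status (live, dead, and decommissioning nodes)
hdfs dfsadmin -report
# Find the most recently updated NameNode logs
ls -lt /var/log/hadoop-hdfs/ | head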

Important: In some cases, manually turning off Safemode can lead to data loss.

To manually turn off Safemode, run the following command:

sudo -u hdfs hdfs dfsadmin -safemode leave

Depending on the root cause of the error, complete one or more of the following troubleshooting steps to turn off Safemode.

Switch to a cluster with multiple primary nodes

Checkpointing isn't automatic on clusters with a single primary node. This means that HDFS edit logs aren't merged into a new snapshot (FsImage) and then removed. HDFS uses edit logs to record file system changes between snapshots. On a cluster with a single primary node, it's a best practice to remove the edit logs manually. If you don't, the logs might use all the disk space in /mnt. To resolve this issue, launch a cluster with multiple primary nodes. Clusters with multiple primary nodes support high availability for the HDFS NameNode, which resolves the checkpointing issue.

For more information, see Plan and configure primary nodes.
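
The following AWS CLI command is a minimal sketch of launching a cluster with three primary nodes. The release label, application list, instance types, key name, and subnet ID are placeholders; adjust them for your environment:

aws emr create-cluster \
  --name "ha-cluster" \
  --release-label emr-6.15.0 \
  --applications Name=Hadoop Name=Spark \
  --instance-groups \
    InstanceGroupType=MASTER,InstanceCount=3,InstanceType=m5.xlarge \
    InstanceGroupType=CORE,InstanceCount=3,InstanceType=m5.xlarge \
  --ec2-attributes KeyName=myKey,SubnetId=subnet-xxxxxxxx \
  --use-default-roles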

Remove unnecessary files from /mnt

The minimum available disk space for /mnt is specified by the dfs.namenode.resource.du.reserved parameter. When the available disk space for /mnt drops below the value that's set in dfs.namenode.resource.du.reserved, the NameNode enters Safemode. The default value for dfs.namenode.resource.du.reserved is 100 MB. When the NameNode is in Safemode, no file system or block modifications are allowed. Therefore, removing unnecessary files from /mnt might help resolve the issue.
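
To compare the configured threshold with the space that's currently available, you can run commands similar to the following on the primary node. Note that hdfs getconf reports the value in bytes:

# Show the effective value of dfs.namenode.resource.du.reserved (in bytes)
hdfs getconf -confKey dfs.namenode.resource.du.reserved
# Show the available disk space for /mnt
df -h /mnt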

To delete the files that you no longer need, complete the following steps:

1.    Connect to the primary node using SSH.

2.    Check the NameNode logs to verify that the NameNode is in Safemode because of insufficient disk space. These logs are located in /var/log/hadoop-hdfs. If the available disk space drops below the configured reserved amount, then the logs contain a warning similar to the following:

2020-08-28 19:14:43,540 WARN org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5baaae4c): Space available on volume '/dev/xvdb2' is 76546048, which is below the configured reserved amount 104857600

If the NameNode is already in Safemode because of low disk space, then the logs also contain a message similar to the following:

2020-09-28 19:14:43,540 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem (org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5baaae4c): NameNode low on available disk space. Already in safe mode.
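
To search the NameNode log directly for these messages, you can run a command similar to the following. The log file name pattern shown is typical for Amazon EMR but can vary by release and host name:

sudo grep -iE "safe mode|NameNodeResourceChecker" /var/log/hadoop-hdfs/hadoop-hdfs-namenode-*.log* | tail -n 20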

3.    Confirm that the NameNode is still in Safemode by running the following command:

[root@ip-xxx-xx-xx-xxx mnt]# hdfs dfsadmin -safemode get
Safe mode is ON

4.    Delete unnecessary files from /mnt.

If the /mnt/namenode/current directory is using a large amount of space on a cluster with one primary node, then create a new snapshot (FsImage), and then remove the old edit logs.

For example, you can run a script that performs the following actions:

  • Generates a new snapshot.
  • Backs up old edit logs to an Amazon Simple Storage Service (Amazon S3) bucket.
  • Removes the edit logs.

Example script:

#!/bin/bash
# Enter Safemode so that the namespace can be saved consistently
hdfs dfsadmin -safemode enter
# Merge the current edit logs into a new FsImage snapshot
hdfs dfsadmin -saveNamespace
# Back up the old edit log files to Amazon S3
sudo su - root -c "hdfs dfs -put /mnt/namenode/current/*edits_[0-9]* s3://doc-example-bucket/backup-hdfs/"
# Remove the old edit log files and the seen_txid file
sudo su - root -c "rm -f /mnt/namenode/current/*edits_[0-9]*"
sudo su - root -c "rm -f /mnt/namenode/current/seen*"
# Leave Safemode so that HDFS accepts writes again
hdfs dfsadmin -safemode leave

Note: The preceding script doesn't remove logs for in-progress edits.
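
Before and after you run the script, you can check how much space the NameNode metadata directory uses and how much space remains on /mnt:

sudo du -sh /mnt/namenode/current
df -h /mnt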

5.    Verify the amount of available disk space in /mnt. If the available space is more than 100 MB, then check the status of Safemode again. Then, turn off Safemode:

[hadoop@ip-xxx-xx-xx-xxx ~]$ hdfs dfsadmin -safemode get
Safe mode is ON
[hadoop@ip-xxx-xx-xx-xxx ~]$ hdfs dfsadmin -safemode leave
Safe mode is OFF

If /mnt still has less than 100 MB of available space, then perform one or more of the following actions:

Remove more files

1.    Connect to the primary node using SSH.

2.    Navigate to the /mnt directory:

cd /mnt

3.    Determine which folders are using the most disk space:

sudo du -hsx * | sort -rh | head -10

4.    Keep investigating until you find the source of the disk space issue. For example, if the var folder is using a large amount of disk space, then check the largest subfolders in var:

cd var
sudo du -hsx * | sort -rh | head -10

5.    After you determine which files and folders are taking up the disk space, delete only the files that you no longer need. The compressed log files in /mnt/var/log/hadoop-hdfs/ and /mnt/var/log/hadoop-yarn/ are already backed up to the Amazon S3 logging bucket, so they are good candidates for deletion.
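
For example, to remove compressed log files that are older than two days, you can run commands similar to the following. The .gz file pattern and the two-day cutoff are assumptions; confirm that the files are already archived to your Amazon S3 logging bucket before you delete them:

# Preview the compressed log files that are older than two days
sudo find /mnt/var/log/hadoop-hdfs /mnt/var/log/hadoop-yarn -type f -name "*.gz" -mtime +2
# Delete the files after you confirm that they're archived in Amazon S3
sudo find /mnt/var/log/hadoop-hdfs /mnt/var/log/hadoop-yarn -type f -name "*.gz" -mtime +2 -delete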

6.    After you delete the unnecessary files, check the status of Safemode again. Then, turn off Safemode:

[hadoop@ip-xxx-xx-xx-xxx ~]$ hdfs dfsadmin -safemode get
Safe mode is ON
[hadoop@ip-xxx-xx-xx-xxx ~]$ hdfs dfsadmin -safemode leave
Safe mode is OFF

Check for corrupt or missing blocks/files

1.    Run the following command to see a report on the health of the cluster. The report also shows the percentage of under-replicated blocks and the count of missing replicas.

hdfs fsck /
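
If the report shows corrupt or missing blocks, you can also list the affected file paths directly:

hdfs fsck / -list-corruptfileblocks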

2.    For each file in the list, run the following command to locate the DataNode for each block of the file:

hdfs fsck example_file_name -locations -blocks -files

Note: Replace example_file_name with your file name.

The messages that you see are similar to the following messages:

0. BP-762523015-192.168.0.2-1480061879099:blk_1073741830_1006 len=134217728 MISSING!
1. BP-762523015-192.168.0.2-1480061879099:blk_1073741831_1007 len=134217728 MISSING!
2. BP-762523015-192.168.0.2-1480061879099:blk_1073741832_1008 len=70846464 MISSING!

From the preceding messages, you can find the DataNode that stored the block (for example, 192.168.0.2). You can then review the logs on that DataNode to search for errors that are related to the block ID (blk_xxx). Missing blocks often occur because the nodes that stored them were terminated.
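
To search a DataNode's logs for a specific block, connect to that DataNode and run a command similar to the following. The block ID comes from the preceding example output, and the log file name pattern might differ on your cluster:

sudo grep "blk_1073741830" /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log*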

3.    To delete the corrupted files, exit Safemode. Then, run the following command:

hdfs dfs -rm example_file_name

Note: Replace example_file_name with your file name.

Use CloudWatch metrics to monitor the health of HDFS

The following Amazon CloudWatch metrics can help monitor the potential causes of a NameNode entering Safemode:

  • HDFSUtilization: The percentage of HDFS storage being used.
  • MissingBlocks: The number of blocks where HDFS has no replicas. These might be corrupt blocks.
  • UnderReplicatedBlocks: The number of blocks that must be replicated one or more times.
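
You can retrieve these metrics with the AWS CLI. The following command is a minimal sketch; replace the cluster ID (JobFlowId) and the time range with your own values:

aws cloudwatch get-metric-statistics \
  --namespace AWS/ElasticMapReduce \
  --metric-name MissingBlocks \
  --dimensions Name=JobFlowId,Value=j-XXXXXXXXXXXXX \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-02T00:00:00Z \
  --period 300 \
  --statistics Maximum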

Related information

HDFS Users Guide (from the Apache Hadoop website)
