-
Use the Amazon Machine Image (AMI) of the instance with issues to launch a rescue instance in your virtual private cloud (VPC).
Note: Make sure that the new instance is in the same Availability Zone as the instance with issues. You can also use an existing instance that's in the same Availability Zone as the instance with issues.
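For example, you can launch the rescue instance with the AWS CLI. The AMI ID, instance type, key pair, and subnet ID below are placeholders — substitute your own values. The subnet that you choose determines the Availability Zone, so pick a subnet in the same Availability Zone as the instance with issues:

```shell
# Launch a rescue instance from the AMI of the impaired instance.
# ami-0abcdef1234567890, my-key, and subnet-0abcdef1234567890 are placeholders.
# The subnet must be in the same Availability Zone as the impaired instance.
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.micro \
  --key-name my-key \
  --subnet-id subnet-0abcdef1234567890
```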
-
Detach the Amazon Elastic Block Store (Amazon EBS) root volume from the instance with issues. Note the device name, such as /dev/xvda or /dev/sda1.
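With the AWS CLI, this step looks similar to the following sketch. The instance and volume IDs are placeholders; you must stop the instance before you can detach its root volume:

```shell
# Stop the impaired instance before detaching its root volume.
# i-0abcdef1234567890 and vol-0abcdef1234567890 are placeholder IDs.
aws ec2 stop-instances --instance-ids i-0abcdef1234567890
aws ec2 wait instance-stopped --instance-ids i-0abcdef1234567890

# List the block device mappings to find the root volume ID and
# note the device name (for example, /dev/xvda or /dev/sda1).
aws ec2 describe-instances --instance-ids i-0abcdef1234567890 \
  --query 'Reservations[].Instances[].BlockDeviceMappings'

# Detach the root volume.
aws ec2 detach-volume --volume-id vol-0abcdef1234567890
```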
-
Attach the Amazon EBS volume to the rescue instance as a secondary device, such as /dev/sdf.
Note: If your instance's root device is a volume backed by Amazon EBS, then stop and restart the instance.
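An AWS CLI sketch of the attach step, with placeholder IDs for the detached volume and the rescue instance:

```shell
# Attach the detached root volume to the rescue instance as /dev/sdf.
# vol-0abcdef1234567890 and i-0fedcba0987654321 are placeholder IDs.
aws ec2 attach-volume \
  --volume-id vol-0abcdef1234567890 \
  --instance-id i-0fedcba0987654321 \
  --device /dev/sdf
```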
-
Use SSH to connect to your rescue instance.
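For example, a connection might look like the following. The key file path, user name, and host name are placeholders; the default user name depends on the AMI (for example, ec2-user on Amazon Linux):

```shell
# Connect to the rescue instance over SSH.
# Replace the key path, user name, and host with your own values.
ssh -i /path/my-key.pem ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com
```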
-
As the root user, run the following commands to identify the correct device name:
$ sudo -i
# lsblk
# rescuedev=/dev/xvdf1
Note: When you run lsblk, note the device name in the output. Replace xvdf1 with the device name of the volume that's attached to your rescue instance.
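The lsblk output resembles the following illustrative example; your device names and sizes will differ. Here the attached volume appears as xvdf, and its root partition is xvdf1:

```shell
# Illustrative lsblk output on the rescue instance (example only):
# NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
# xvda    202:0    0   8G  0 disk
# └─xvda1 202:1    0   8G  0 part /
# xvdf    202:80   0   8G  0 disk
# └─xvdf1 202:81   0   8G  0 part
```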
-
To select an existing temporary mount point that's not already in use, run the following commands:
# rescuemnt=/mnt
# mkdir -p $rescuemnt
Note: It's a best practice to use /mnt as a mount point.
-
To mount the root file system from the attached volume, run the following command:
# mount $rescuedev $rescuemnt
If the volume mount fails, then run the following command:
# dmesg | tail
If the logs show a universally unique identifier (UUID) conflict, then rerun the mount command with the -o nouuid option. Example:
# mount -o nouuid $rescuedev $rescuemnt
-
To mount special file systems and change the root directory to the new file system, run the following commands:
# for i in proc sys dev run; do mount --bind /$i $rescuemnt/$i ; done
# chroot $rescuemnt
-
In the chroot environment, download and install the EC2Rescue tool for Linux so that it's installed on the offline Linux root volume.
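A minimal sketch of the download step, run inside the chroot. The URL is the standard EC2Rescue for Linux download location, and the extracted directory name includes the tool version:

```shell
# Download and unpack EC2Rescue for Linux inside the chroot so that
# it lands on the offline root volume.
curl -O https://s3.amazonaws.com/ec2rescuelinux/ec2rl.tgz
tar -xzvf ec2rl.tgz
# Change into the extracted directory (the name includes the version).
cd ec2rl*
```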
-
Run EC2Rescue for Linux with no options to run all modules.
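Running the tool with no module options runs all applicable diagnostic modules:

```shell
# Run all EC2Rescue for Linux diagnostic modules (no remediation):
./ec2rl run
```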
-
Based on the results, run the following command to activate remediation for the supported modules:
# ./ec2rl run --remediate
-
To exit chroot and unmount the secondary device, run the following commands:
# exit
# umount $rescuemnt/{proc,sys,dev,run,}
Note: If the unmount operation fails, then stop or reboot the rescue instance before you unmount the secondary device.
-
Detach the secondary volume from the rescue EC2 instance.
-
Attach the secondary volume (/dev/sdf) to the original instance as the root volume, using the device name that you noted earlier (/dev/xvda or /dev/sda1).
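The reattach step as an AWS CLI sketch, with placeholder IDs and /dev/xvda assumed as the device name that you noted earlier:

```shell
# Reattach the repaired volume to the original instance as its root device.
# vol-0abcdef1234567890 and i-0abcdef1234567890 are placeholder IDs.
# Use the device name that you noted earlier (/dev/xvda or /dev/sda1).
aws ec2 attach-volume \
  --volume-id vol-0abcdef1234567890 \
  --instance-id i-0abcdef1234567890 \
  --device /dev/xvda
```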
-
Start the instance, and then verify that the instance works as expected.
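With the AWS CLI, you can start the instance and wait for it to pass both status checks before you verify it. The instance ID is a placeholder:

```shell
# Start the original instance and wait for it to pass status checks.
aws ec2 start-instances --instance-ids i-0abcdef1234567890
aws ec2 wait instance-status-ok --instance-ids i-0abcdef1234567890
```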