Hi,
Lightsail currently does NOT support swapping an attached disk as the root disk of an instance.
Depending on how the sshd_config file was edited, you could try launching a new instance from a snapshot of the locked-out instance and supplying the sshd_config fix commands as user data during creation, via the console's "Add a launch script" option.
Ref doc - https://lightsail.aws.amazon.com/ls/docs/en_us/articles/lightsail-how-to-configure-server-additional-data-shell-script
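As a sketch, the launch script could apply the reverse of whatever edit caused the lockout. The `PermitRootLogin` line below is an assumption, not your actual edit; substitute the change that locked you out. In the real launch script the target would be /etc/ssh/sshd_config followed by an sshd restart; this demo edits a temporary copy so it can be tried safely:

```shell
#!/bin/sh
# Demo of the kind of sed fix a launch script could apply.
# Assumption: the lockout came from a bad PermitRootLogin edit.
# Real launch script would target /etc/ssh/sshd_config and then
# restart sshd; here we use a temp copy instead.
conf=$(mktemp)
printf 'Port 22\nPermitRootLogin without-password\n' > "$conf"
# Revert the hypothetical bad edit:
sed -i 's/^PermitRootLogin .*/PermitRootLogin no/' "$conf"
cat "$conf"
```

In the actual script you would finish with something like `sshd -t && systemctl restart sshd` to validate and reload the config.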
If the new instance comes up correctly and you can SSH into it, you could point your static IP and DNS records at the replacement instance and, once successfully migrated over, delete the now-unused locked-out instance.
Thanks.
Here are the steps to swap out the original system disk with the recovery disk you created:
- Launch a new Lightsail instance in the same Availability Zone as the original instance. This will be your rescue instance.
- After the rescue instance launches, go to the Lightsail Storage page and select the detached root volume (system disk) from the original instance.
- Attach the root volume to the rescue instance, selecting the correct device path, such as /dev/xvdf1.
- Connect to the rescue instance via SSH using your key pair.
- Mount the attached volume to a directory such as /mnt: sudo mount /dev/xvdf1 /mnt
- Chroot into the mounted volume directory: sudo chroot /mnt
Once complete, detach the volume, terminate the rescue instance, and attach the volume back to the original instance. This will replace the original problem disk.
I just attempted this strategy with the rescue instance:
sudo mount /dev/xvdf1 /mnt
sudo chroot /mnt
The commands only gave me a superuser shell rooted in the recovered disk at /mnt; no other changes occurred. Did I miss an argument or option with the commands above?
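For what it's worth, chroot by itself repairs nothing: it only re-roots your shell inside the mounted disk. The actual fix is editing the broken file on the mounted volume, which you can do without chroot at all. A minimal sketch, where MNT stands in for /mnt and the Port line is a hypothetical bad edit:

```shell
#!/bin/sh
# MNT stands in for the real mount point /mnt; the Port 2525 line
# is a hypothetical bad sshd_config edit, not your actual one.
MNT=$(mktemp -d)
mkdir -p "$MNT/etc/ssh"
printf 'Port 2525\n' > "$MNT/etc/ssh/sshd_config"
# Edit the config on the mounted volume directly; no chroot needed:
sed -i 's/^Port .*/Port 22/' "$MNT/etc/ssh/sshd_config"
cat "$MNT/etc/ssh/sshd_config"
```

On the real rescue instance that would be `sudo sed -i ... /mnt/etc/ssh/sshd_config`, followed by `sudo umount /mnt` before detaching the disk.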
@Giovanni Lauria -
Once complete, detach the volume, terminate the rescue instance, and attach the volume back to the original instance. This will replace the original problem disk.
Above isn't accurate...the attached disk will not become the root disk on the original instance and hence will not replace it.
This one sounds like it could work. Anyone's guess as to whether my port 25 adjustment would carry over along with my static IP and DNS setup.