sshd_config file corrupted


I recently locked myself out of my Lightsail instance by incorrectly disabling public key authentication in the sshd_config file. I was able to take a snapshot of the main disk and mount it on a temporary instance to restore sshd_config to its original state. I then attached that disk back to my original instance, which is still locked out of SSH access.

Is there a way for AWS to swap my original "System_Disk" with the recovery disk? Disk paths: /dev/sda1 (system disk), /dev/xvdf (recovery disk).

All the Lightsail documentation I have found assumes the user can SSH into the instance, which to the best of my knowledge is impossible for me at the moment, unless I can find a way around OpenSSH. I am hoping AWS has a system administrator role that could do this at the infrastructure level, since the instance is purely a virtual machine and likely one of many on a bare-metal server.
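For reference, the kind of sshd_config edit described above typically looks like this (the exact lines are an assumption on my part; I don't have the original file):

```
# /etc/ssh/sshd_config -- illustrative fragment, not the actual file.
# The lockout likely came from a line like:
#   PubkeyAuthentication no
# Restoring access means setting it back to:
PubkeyAuthentication yes
```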

2 Answers
Accepted Answer

Hi,

Lightsail currently does NOT support swapping an attached disk as the root disk of an instance.

Depending on how the sshd_config file was edited, you could try launching a new instance from an instance snapshot of the locked-out instance and entering the sshd_config fix commands as user data during the new instance creation, via the console's "Add a launch script" option. Ref doc - https://lightsail.aws.amazon.com/ls/docs/en_us/articles/lightsail-how-to-configure-server-additional-data-shell-script
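As a sketch, a launch script along these lines could undo an explicit "PubkeyAuthentication no" at first boot. The function name and the exact sed pattern are illustrative assumptions, not taken from the Lightsail docs, and they assume the lockout was caused by that one directive:

```shell
#!/bin/bash
# Hypothetical launch-script body: re-enable public key authentication
# if it was explicitly disabled in sshd_config.
fix_sshd_config() {
  local cfg="${1:-/etc/ssh/sshd_config}"
  # Flip an explicit "PubkeyAuthentication no" back to "yes".
  sed -i 's/^[[:space:]]*PubkeyAuthentication[[:space:]]\{1,\}no[[:space:]]*$/PubkeyAuthentication yes/' "$cfg"
}

# In the real launch script (which runs as root) you would then run:
#   fix_sshd_config /etc/ssh/sshd_config
#   systemctl restart sshd    # or: service sshd restart
```

If the original edit was different (for example, a deleted line or a typo elsewhere in the file), the sed pattern would need to match whatever was actually changed.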

If this new instance comes up correctly and you can SSH into it, you can move your IP and DNS setup over to the replacement instance and, once the migration is complete, delete the now-unused locked-out instance.

Thanks.

AWS-SUM (AWS, EXPERT) — answered 3 months ago; reviewed 3 months ago
  • This one sounds like it could work. It's anyone's guess whether my port 25 adjustment would carry over along with my static IP and DNS setup.


Here are the steps to swap out the original system disk with the recovery disk you created:

- Launch a new Lightsail instance in the same Availability Zone as the original instance. This will be your rescue instance.
- After the rescue instance launches, go to the Lightsail Storage page and select the detached root volume (system disk) from the original instance.
- Attach the root volume to the rescue instance, selecting the correct device path, such as /dev/xvdf1.
- Connect to the rescue instance via SSH using your key pair.
- Mount the attached volume to a directory such as /mnt using the command: sudo mount /dev/xvdf1 /mnt
- Chroot into the mounted volume directory: sudo chroot /mnt

Once complete, detach the volume, terminate the rescue instance, and attach the volume back to the original instance. This will replace the original problem disk.
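For what it's worth, a chroot is not required just to edit a file on the mounted volume; the config can be changed directly under the mount point. The helper below is a hypothetical sketch (the function name and sed pattern are my assumptions), and it assumes the lockout came from an explicit "PubkeyAuthentication no" line:

```shell
#!/bin/bash
# Hypothetical helper: re-enable public key authentication in the
# sshd_config that lives on a recovered disk mounted at "$1".
fix_mounted_sshd() {
  local mountpoint="$1"
  local cfg="$mountpoint/etc/ssh/sshd_config"
  # Flip an explicit "PubkeyAuthentication no" back to "yes".
  sed -i 's/^[[:space:]]*PubkeyAuthentication[[:space:]]\{1,\}no[[:space:]]*$/PubkeyAuthentication yes/' "$cfg"
}

# Real usage on the rescue instance (device name assumed):
#   sudo mount /dev/xvdf1 /mnt
#   fix_mounted_sshd /mnt
#   sudo umount /mnt
```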

EXPERT — answered 3 months ago; reviewed by kentrad (AWS, EXPERT) 3 months ago
  • I just attempted this strategy with the rescue instance: sudo mount /dev/xvdf1 /mnt followed by sudo chroot /mnt

    The commands only gave me a root shell rooted in the recovered disk at /mnt; no other changes occurred. Did I miss an argument or option with the commands above?

  • @Giovanni Lauria -

    Once complete, detach the volume, terminate the rescue instance and attach the volume back to the original instance. This will replace the original problem disk

    The above isn't accurate: the attached disk will not become the root disk on the original instance, and hence will not replace it.
