sshd_config file corrupted


I recently locked myself out of my Lightsail instance by incorrectly disabling public key authentication in the sshd_config file. I was able to take a snapshot of the main disk and mount it to a temporary instance in order to change the sshd_config file back to its original state. I then attached it back to my original instance, which is still locked out of SSH access.

Is there a way for AWS to swap my original "System_Disk" with the recovery disk? Disk path: /dev/sda1 for /dev/xvdf.

All the documentation regarding Lightsail indicates the user must SSH into the instance, which to the best of my knowledge is impossible for me at the moment, unless I can find a way around OpenSSH. I am hoping AWS has a system admin role which could do this at the infrastructure level, since it is purely a virtual machine and likely one of many on a bare-metal server.

Ron
asked 3 months ago · 265 views
2 Answers
Accepted Answer

Hi,

Lightsail currently does NOT support swapping an attached disk as the root disk of an instance.

Depending on how the sshd_config file was edited, you could try launching a new instance from an instance snapshot of the locked-out instance and supplying the sshd_config fixes as user data commands during instance creation, via the console's "Add a launch script" option. Ref doc - https://lightsail.aws.amazon.com/ls/docs/en_us/articles/lightsail-how-to-configure-server-additional-data-shell-script
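
For example, if the lockout came from setting PubkeyAuthentication no, the launch script might look like this (a minimal sketch; the sed pattern is an assumption, so match it to whatever edit actually broke SSH):

#!/bin/bash
# Runs as root on first boot of the new instance.
# Restore public key authentication in sshd_config, then restart sshd.
sed -i 's/^PubkeyAuthentication no/PubkeyAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd 2>/dev/null || systemctl restart ssh   # service name varies by distro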

If this new instance comes up correctly and you can SSH into it, you could move your IP and DNS setup over to this replacement instance and, once successfully migrated, delete the then-unused locked-out instance.

Thanks.

AWS
EXPERT
AWS-SUM
answered 3 months ago
AWS
EXPERT
reviewed 3 months ago
  • This one sounds like it could work. It's anyone's guess whether my port 25 adjustment would migrate over with my static IP and DNS setup.


Here are the steps to swap out the original system disk with the recovery disk you created:

- Launch a new Lightsail instance in the same Availability Zone as the original instance. This will be your rescue instance.
- After the rescue instance launches, go to the Lightsail Storage page and select the detached root volume (system disk) from the original instance.
- Attach the root volume to the rescue instance, selecting the correct device path, such as /dev/xvdf1.
- Connect to the rescue instance via SSH using your key pair.
- Mount the attached volume to a directory such as /mnt: sudo mount /dev/xvdf1 /mnt
- Chroot into the mounted volume directory: sudo chroot /mnt (a full command sketch follows below)
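
As a sketch of the repair itself (assuming the volume shows up as /dev/xvdf1 and the lockout came from a bad PubkeyAuthentication line; both are assumptions to adjust for your case):

sudo mount /dev/xvdf1 /mnt
# Edit the config on the attached volume, not the rescue instance's own /etc/ssh
sudo sed -i 's/^PubkeyAuthentication no/PubkeyAuthentication yes/' /mnt/etc/ssh/sshd_config
sudo umount /mnt

Note that chroot by itself does not change anything on the disk; it only gives you a shell rooted in the mounted volume. The edit to /mnt/etc/ssh/sshd_config is what actually undoes the lockout.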

Once complete, detach the volume, terminate the rescue instance, and attach the volume back to the original instance. This will replace the original problem disk.

Giovanni Lauria
EXPERT
answered 3 months ago
AWS
EXPERT
kentrad
reviewed 3 months ago
  • I just attempted this strategy with the rescue instance:

    sudo mount /dev/xvdf1 /mnt
    sudo chroot /mnt

    The commands only activated superuser and dropped me into the recovered disk at /mnt. No other changes occurred. Did I miss an argument or option with the commands above?

  • @Giovanni Lauria -

    Once complete, detach the volume, terminate the rescue instance and attach the volume back to the original instance. This will replace the original problem disk

    The above isn't accurate: the attached disk will not become the root disk on the original instance and hence will not replace it.
