Lots to unpack here. To answer your last question first: you can set up automated snapshots to back up your EBS volumes on whatever schedule you need (weekly, daily, hourly) using Amazon Data Lifecycle Manager https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
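If you prefer the CLI, a minimal sketch of a daily-snapshot lifecycle policy might look like the following; the account ID, role name, target tag, and schedule are all placeholders to adapt to your environment:

# Create a DLM policy that snapshots every volume tagged Backup=Daily
# once a day at 03:00 UTC and keeps the last 7 snapshots
aws dlm create-lifecycle-policy \
  --description "Daily EBS snapshots, 7-day retention" \
  --state ENABLED \
  --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
  --policy-details '{
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "Daily"}],
    "Schedules": [{
      "Name": "DailySnapshots",
      "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
      "RetainRule": {"Count": 7}
    }]
  }'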
Migrating from t2 to t3 isn't as straightforward as it appears at face value. They use different virtualisation platforms (t2 uses Xen, t3 uses Nitro), so the new instance needs the correct ENA and NVMe drivers: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resize-limitations.html (it's a bit late now, but aws ec2 modify-instance-attribute --instance-id INSTANCE_ID --ena-support might have helped you here).
So that would explain why your t3.xlarge wouldn't boot.
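For anyone attempting the same migration, a rough pre-flight sketch (the instance ID is a placeholder, and the instance must be stopped before the ENA flag can be changed):

# On the running t2 instance: confirm the ENA and NVMe drivers exist in the kernel
modinfo ena
modinfo nvme

# Check whether the ENA attribute is already set on the instance
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].EnaSupport'

# Stop the instance, enable ENA, then change the instance type to t3
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --ena-support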
As for the items around mounting disks: for the first one you tried, you say "connection wasn't possible". Was the (old) volume in the same Availability Zone (AZ) as the (new) instance? It's amazing how easy it is to overlook that.
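A quick way to check both, with placeholder IDs:

# A volume can only be attached to an instance in the same AZ
aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 \
  --query 'Volumes[].AvailabilityZone'
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].Placement.AvailabilityZone'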
Then you could mount the old root volume as an additional disk on the new instance, but not the data volume, which gave you a bad superblock when you tried to xfs_repair it. Two things come to mind here: first, is it actually an XFS filesystem, and second, could it have been part of a logical volume group on the original host?
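You can answer both questions without mounting anything. In the sketch below /dev/xvdf is a placeholder; on a Nitro instance the device will appear as /dev/nvme1n1 or similar:

# What does the disk actually contain? An LVM member shows up as "LVM2 PV"
sudo file -s /dev/xvdf
# Filesystem type and UUID for every block device and partition
lsblk -f
# If the LVM tools are installed, list any physical volumes and volume groups
sudo pvs
sudo vgs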
Knowing the difference between the t2 and t3 generations, I tried attaching the previous volume to the t2-type instance again, but SSH access is still not possible. The newly created volume and instance are in the same region. The root volume is ext4 and the data volume is xfs. The root volume mounts fine, but SSH access is not available when I create an instance from the root volume.
It could also be a UUID problem: https://repost.aws/en/knowledge-center/ebs-mount-volume-issues I tried the method in that article, but it still doesn't work.
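For reference, the two usual workarounds from that article, with a placeholder device name:

# Temporarily skip the duplicate-UUID check when mounting an XFS volume
sudo mount -t xfs -o nouuid /dev/xvdf /mnt
# Or permanently give the attached volume a fresh UUID (volume must be unmounted)
sudo xfs_admin -U generate /dev/xvdf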
When you're back on the t2.micro and it's not booting, can you get the system log? Console -> EC2 -> select instance -> Action -> Monitor & troubleshoot -> Get system log.
And/or you might be able to get a screenshot: Console -> EC2 -> select instance -> Action -> Monitor & troubleshoot -> Get instance screenshot, depending on the region.
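The same information is available from the CLI, if that's easier (placeholder instance ID):

# Serial console output captured during boot
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text
# Point-in-time screenshot of the console (not supported in every region)
aws ec2 get-console-screenshot --instance-id i-0123456789abcdef0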
Is the data volume all on one partition on that disk, and not an LV that spans multiple disks?
The root volume and the data volume are separate EBS volumes, each with its own partitions. The system log mostly shows starts and stops, and there is no instance screenshot available.
The root volume mounts without any problem, but the biggest problem is that the xfs data volume will not mount.
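When an XFS mount fails, the kernel log usually says exactly why (bad superblock, duplicate UUID, log replay needed, and so on), so it is worth capturing that. Device and mount point below are placeholders:

sudo mkdir -p /mnt/data
sudo mount -t xfs /dev/xvdf /mnt/data
# Whatever the mount attempt complained about will be at the end of the kernel log
sudo dmesg | tail -n 20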
So did you change the UUID of the old root filesystem (like in the link in your previous comment)? If you did, but the /etc/fstab entry still has the old UUID, then that may be why the instance won't boot from the old disk.
If xfs_repair fails to fix the old data volume, what about xfs_check and xfs_info?
And what is the output of file -s /dev/[data_volume_device]?
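Putting those together, a non-destructive diagnostic pass might look like this. The device name is a placeholder, and note that xfs_check is no longer shipped with recent xfsprogs releases, where xfs_repair -n is the equivalent read-only check:

# Identify what is actually on the device
sudo file -s /dev/xvdf
# Compare the filesystem UUID against what /etc/fstab expects
sudo blkid /dev/xvdf
grep UUID /etc/fstab
# Read-only consistency check: reports problems without writing anything
sudo xfs_repair -n /dev/xvdf
# Geometry and metadata summary (recent xfs_info can read an unmounted device)
sudo xfs_info /dev/xvdf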