First, for EC2 scaling, you can use Auto Scaling.
Auto Scaling lets you scale based on EC2 load metrics and also configure scheduled scaling.
You can keep costs down by responding flexibly to demand, for example running a single EC2 instance during low-traffic periods and scaling out to two instances when traffic increases.
https://docs.aws.amazon.com/ja_jp/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
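The scheduled-scaling idea above can be sketched in CloudFormation. This is only an illustration: the group name, schedule times, and capacities are placeholder assumptions, not values from this thread.

```yaml
# Sketch only: WebServerGroup, the cron schedules, and the capacities are assumptions.
Resources:
  ScaleDownAtNight:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref WebServerGroup   # assumed existing Auto Scaling group
      MinSize: 1
      MaxSize: 1
      DesiredCapacity: 1
      Recurrence: "0 22 * * *"   # run one instance from 22:00 UTC
  ScaleUpInMorning:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref WebServerGroup
      MinSize: 1
      MaxSize: 2
      DesiredCapacity: 2
      Recurrence: "0 7 * * *"    # scale back to two instances at 07:00 UTC
```

Load-based scaling (target tracking or step policies) can be layered on top of the same group, so the scheduled actions only set the floor and ceiling for each period.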
Applications can be deployed to multiple regions by using CI/CD with GitHub and CodePipeline.
https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-cross-region.html
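As a hedged sketch of the cross-region setup that doc describes, a pipeline declares one artifact store per region and a deploy action can set a `region` field. Every name, bucket, and region below is a placeholder assumption:

```json
{
  "pipeline": {
    "name": "my-multi-region-pipeline",
    "artifactStores": {
      "us-east-1": { "type": "S3", "location": "artifact-bucket-us-east-1" },
      "eu-west-1": { "type": "S3", "location": "artifact-bucket-eu-west-1" }
    },
    "stages": [
      {
        "name": "DeployEU",
        "actions": [
          {
            "name": "DeployToEU",
            "region": "eu-west-1",
            "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1" },
            "inputArtifacts": [ { "name": "BuildOutput" } ],
            "configuration": { "ApplicationName": "my-app", "DeploymentGroupName": "eu-group" },
            "runOrder": 1
          }
        ]
      }
    ]
  }
}
```

CodePipeline copies the input artifact into the artifact store of the action's region before running the action, which is why a bucket is needed in each target region.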
Another option is to run EFS with a replication configuration.
However, EFS is somewhat expensive and may be cost-prohibitive.
https://docs.aws.amazon.com/ja_jp/efs/latest/ug/replication-use-cases.html
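For the EFS replication option, a minimal CloudFormation sketch looks like the following; the destination region is an assumption for illustration:

```yaml
# Sketch only: the destination region is an assumption.
Resources:
  SharedFileSystem:
    Type: AWS::EFS::FileSystem
    Properties:
      Encrypted: true
      ReplicationConfiguration:
        Destinations:
          - Region: eu-west-1   # EFS creates and maintains the replica in this region
```

Note that the replica is read-only while replication is active, so it serves as a failover copy rather than a second writable file system.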
Hi riku,
Thank you for your answer. We read the Auto Scaling doc and are starting to get a grasp of it.
We can set this up manually or via CloudFormation; for now the easy route is CloudFormation, using the Multi-AZ stack template. So that part is clear. The issue is that we already have a build set up in Lightsail, with the configuration done on the server, etc. We would like to use https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-cloudformation-stackssnapshot to move that into the Multi-AZ stack; we have also separated out the database. I found docs stating that this all needs to be set up manually and cannot be done in an automated way: https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-creating-ec2-instances-from-exported-snapshots#aws-cloud-formation-stack
There should, of course, be a way to automate this process and to reuse a robust solution that AWS has already proposed/templated.
We have never used CodePipeline and have no experience with it. We would not use Git; instead, we have a staging site on the instance where we test everything, and when it is all OK we push it to the live build.
Here is the catch: the site will probably launch with about 7-10k products (we separate the database from the WP instance). In the months to come this would grow to roughly 25-30k products, with an end goal of around 75k. Our customers/vendors will be connected to our site from their own dashboards and will update their own shops.
This means they will spend a lot of time in the back end adding pictures and product entries, and these should be reflected in "real time" on the site as well. I hope the question about data replication and pushing changes to all instances is a bit clearer now. Since the WP CMS is connected to this separated database, I "think" updating one instance is not the issue; the only issue I can see is replicating that data to the running instances. For example, if there are always two instances running, the data should be identical on both of them. The staging site is probably only a solution for the front-end part; ideally it would be a running instance that we push onto the existing instance to update.
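One common way to keep uploaded pictures identical across two running instances, since the database is already shared, is to mount a single shared EFS file system at the uploads path on every instance. This is only a sketch of that approach, not something confirmed in this thread; the file-system ID and paths below are placeholder assumptions:

```shell
# Sketch only: the file-system ID and mount path are assumptions.
# Run on each instance (e.g. from Auto Scaling user data) so all instances
# see the same wp-content/uploads directory.
sudo yum install -y amazon-efs-utils
sudo mkdir -p /var/www/html/wp-content/uploads
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /var/www/html/wp-content/uploads

# Persist the mount across reboots.
echo "fs-0123456789abcdef0:/ /var/www/html/wp-content/uploads efs _netdev,tls 0 0" | sudo tee -a /etc/fstab
```

With the database external and the uploads directory on a shared mount, both instances behind the load balancer serve identical content without any per-instance replication step.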