Questions tagged with Migration & Transfer
Content language: English
Sort by most recent
Does AWS DataSync count and charge S3 requests when a task finds no changes to transfer and only verifies the directory / where N files are located?
Hello, I created a DataSync task that transferred 8 files within a directory and its subdirectories. Reviewing the logs of the DataSync task, I see that different requests are generated (created, transferred, verified). When I executed a new DataSync task, nothing was transferred because no new changes were detected, but reviewing the task log again, I observed that a request was still made on the root (verified directory /).

My question: when DataSync finds no changes to make, does it still make N requests across the total number of files and directories that are already up to date? For example, if I have 15,000 files already synced to a bucket and I run the task again with no changes, will AWS still count and charge for listing all the files that have already been transferred?

I attach screenshots: I have only 22 objects in total (directories, files and subdirectories) and 43 task executions, of which 40 transferred no files (only "verified directory /"), yet my cost manager shows about 1,840 requests (PUT, COPY, POST or LIST) to Amazon S3.
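To reason about what those no-change runs could cost at scale, here is a rough back-of-the-envelope estimator. It assumes each verification run issues roughly one LIST page per 1,000 objects plus one HEAD per object — an approximation of a verify-only pass, not DataSync's exact request pattern — and the per-request rates are illustrative example values (check current S3 pricing for your Region):

```python
# Rough estimate of S3 request charges from DataSync verification-only runs.
# Assumed example rates (illustrative, us-east-1 style; verify current pricing):
#   LIST (counts with PUT/COPY/POST): $0.005 per 1,000 requests
#   HEAD (counts with GET):           $0.0004 per 1,000 requests

def verification_request_cost(objects: int, runs: int,
                              list_rate: float = 0.005,
                              head_rate: float = 0.0004) -> float:
    """Estimate cost when each run LISTs the bucket and HEADs every object.

    Assumes about one LIST page per 1,000 objects and one HEAD per object
    per run -- an approximation, not DataSync's documented behavior.
    """
    list_pages = objects // 1000 + (1 if objects % 1000 else 0)
    list_requests = runs * max(1, list_pages)
    head_requests = runs * objects
    return (list_requests / 1000) * list_rate + (head_requests / 1000) * head_rate

# 15,000 already-synced objects, one no-change verification run per day for 30 days
print(f"${verification_request_cost(15_000, 30):.4f}")
```

Even under these assumptions the verification traffic is cheap in absolute terms, but the request count itself grows linearly with both object count and run frequency, which matches what you are seeing in your cost manager.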
How to do an offline export of Application Discovery Agent data?
We have a customer with security concerns who does not want the Application Discovery Service agent to be able to access the internet. Can we export the collected utilization data from each server as an Excel or CSV file from the server itself, without using the AWS console? The FAQ says that this is possible: *The Discovery Agent can be operated in an offline test mode that writes data to a local file so customers can review collected data before enabling online mode.* Link: https://aws.amazon.com/application-discovery/faqs/ Can anyone confirm that this can be done offline, without outside access to the server? If so, how? Thanks in advance.
Trying to copy a Postgres database from a local Postgres server to an RDS instance via DMS
Greetings, thanks for looking at my question.

I've followed [this AWS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html): set up a VPC, created a replication instance in DMS, and configured the database server as the source endpoint (it's on an EC2 instance currently, but ultimately we will be using a local server running Postgres, with the production database from our rack as the source) and the RDS instance as the target endpoint. I'm trying to set up streaming replication, however, so the information on logical replication and CDC isn't relevant to my setup.

***The goal is to have our production database backed up and replicating to an RDS instance; down the road we hope to spin up EC2s and connect them to this RDS instance to scale up as needed. For now, though, we are just trying to get it set up as a proof of concept.***

So I've done everything advised in the docs, but I need to copy the database to the RDS instance and start the replication. I created a database migration task in DMS (full load, ongoing replication), but it fails and doesn't copy over the database or tables. This is the error message on the AWS DMS dashboard for the migration task:

```
Last Error Task 'FB24Y2N4*****************************HLB4R3Q' was suspended after 9 successive recovery failures
Stop Reason FATAL_ERROR
Error Level FATAL
```

Grateful for any advice from anyone who has successfully set up a similar architecture. Thanks.

***

*Edit to clarify: the problem isn't the error per se; it's that I need to find out why the DMS migration task I created isn't initially copying the database from my EC2 Postgres source to my RDS target endpoint and then beginning and maintaining replication. How do I get the database I'm trying to replicate copied over to RDS?*
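The dashboard message above hides the underlying cause; the DMS `DescribeReplicationTasks` API exposes `Status`, `StopReason` and `LastFailureMessage` per task, which is usually the first place to look before digging into CloudWatch task logs. Here is a minimal sketch that pulls those fields out of the JSON the API (or `aws dms describe-replication-tasks`) returns — the task identifier and messages embedded below are made-up placeholders for illustration:

```python
import json

# Illustrative sample of a DescribeReplicationTasks response; the
# identifier and messages are placeholders, not real values.
sample = json.loads("""
{
  "ReplicationTasks": [
    {
      "ReplicationTaskIdentifier": "pg-to-rds-full-load-cdc",
      "Status": "failed",
      "StopReason": "FATAL_ERROR",
      "LastFailureMessage": "Task suspended after 9 successive recovery failures"
    }
  ]
}
""")

def summarize_failures(response: dict) -> list:
    """Return one human-readable line per failed replication task."""
    lines = []
    for task in response.get("ReplicationTasks", []):
        if task.get("Status") == "failed":
            lines.append(
                f"{task['ReplicationTaskIdentifier']}: "
                f"{task.get('StopReason', '?')} - "
                f"{task.get('LastFailureMessage', 'no message')}"
            )
    return lines

for line in summarize_failures(sample):
    print(line)
```

If `LastFailureMessage` is empty, enabling CloudWatch logging on the task (a checkbox when creating or modifying it) and re-running usually surfaces the actual connection or permissions error behind the `FATAL_ERROR`.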
Master and Slave Instances Architecture
Hi, I have a question about how to build a master EC2 instance that can communicate with other EC2 instances as needed and send tasks to them based on a certain code / group of tasks. The master EC2 instance would take data from an RDS database and send data with a task to each EC2 instance, based on scheduling and acknowledgement from each one. I appreciate your help! Thanks, Basem
Can DataSync scheduled tasks be configured with an interval of less than 60 minutes?
DataSync doesn't seem to allow configuring scheduled tasks with an interval of less than 60 minutes. Is there any alternative if I want my data transfer task to run every 5 minutes?
Migration of EC2, EBS-Backed instances to VPC - Issues
Hi - I followed AWS's advice on how to migrate EC2-Classic instances (Linux) to a VPC. Steps taken:

* Create a Linux AMI from the EC2 instance (worked)
* Using the EC2 launch instance wizard, choose the new AMI and launch into the newly created VPC (says it worked)
* Associate an Elastic IP with the running instance

The issue is that I cannot see any instances, running or otherwise, in the VPC - they all appear in EC2-Classic! Also, EC2-Classic is still running fine after the August 15 cutoff date? Help!
Error with EC2 Classic migration to VPC - paravirtual
Hi - I am trying to migrate from EC2-Classic to a VPC (as per AWS guidance), however I am getting the following error:

```
Step fails when it is Execute/Cancelling action. Property value 'paravirtual' from the API output is not in the desired values. Desired values: 'hvm'.
```

Also, the AWS advice states that the migration should be possible with minimal downtime, but I am not finding that to be the case. Any resources to assist in these areas, please?
AuroraDB serverless Jakarta region
Hi fellow AWS users and AWS employees, I would like to migrate our current development database from PostgreSQL on RDS to Aurora. We are also planning to migrate our infrastructure to the Jakarta Region (ap-southeast-3). Unfortunately, we can't find an option for Aurora Serverless in the Jakarta Region. Is there any plan to provide Aurora Serverless in the Jakarta Region in the near future? Alternatively, is there any AWS solution that handles an auto-scaling relational database? I hope someone can share some information about this. Thank you in advance.
Load S3 Bucket data into Aurora PostgreSQL tables
I need to load data from an S3 bucket into Aurora PostgreSQL tables. I read this documentation, https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_PostgreSQL.S3Import.html#USER_PostgreSQL.S3Import.Reference, and I will try it. But is there any other way to handle this case? In particular, is there a LOAD command on Aurora PostgreSQL? Thanks!
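There is no `LOAD` keyword in Aurora PostgreSQL; the documented path is the `aws_s3.table_import_from_s3` function from the linked guide, after running `CREATE EXTENSION aws_s3 CASCADE;` on the cluster. Since the function takes several quoted arguments, a small helper that assembles the statement can make scripted imports less error-prone — the bucket, key, and Region below are placeholders:

```python
# Build the aws_s3.table_import_from_s3 call documented for Aurora
# PostgreSQL. Bucket, key, and Region values here are placeholders.

def s3_import_sql(table: str, columns: str, bucket: str,
                  key: str, region: str,
                  options: str = "(format csv)") -> str:
    """Return the SELECT statement that imports an S3 object into a table.

    `columns` may be '' to import into all columns in order; `options`
    is passed through to PostgreSQL COPY (e.g. '(format csv, header true)').
    """
    return (
        f"SELECT aws_s3.table_import_from_s3("
        f"'{table}', '{columns}', '{options}', "
        f"aws_commons.create_s3_uri('{bucket}', '{key}', '{region}'));"
    )

print(s3_import_sql("my_table", "", "my-bucket", "data/my_file.csv", "us-east-1"))
```

Run the generated statement through psql or your driver of choice; the cluster also needs an IAM role with `s3:GetObject` on the bucket attached for the import feature, as described in the same documentation page.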