Questions tagged with Amazon S3 Glacier

I'm trying to copy an S3 bucket with more than 200 million files. I've tried using rclone, but every time, after about 20 hours, rclone stops with memory errors (my bare-metal server has 96 GB of RAM). I've also tried copying with the AWS CLI, but after running for a day nothing happened. Does anybody have a solution? Regards
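A common workaround for listings this large is to split the copy by key prefix so that no single process has to track the whole bucket; rclone's `--fast-list` option in particular holds the entire listing in RAM and is best avoided here. A minimal sketch, assuming hypothetical bucket names and keys grouped under single-character top-level prefixes (adjust the prefix list to your actual key layout):
```
# One sync per top-level prefix keeps each process's listing small.
# source-bucket, dest-bucket, and the 0-f prefixes are hypothetical.
for p in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
  aws s3 sync "s3://source-bucket/${p}" "s3://dest-bucket/${p}" --only-show-errors &
done
wait
```
For buckets of this size, S3 Batch Operations driven by an S3 Inventory manifest is also worth considering, since it avoids client-side listing entirely.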
1 answer · 0 votes · 17 views
Mr Ost
asked 3 days ago
**Hello, I have a big request for you, could you please advise me on the following:**

1. What services do I need to create a domain, an email address for that domain, and a website (a WordPress site running both front end and back end)? I would also need storage that grows automatically.
2. How much could this cost me? I read about queries and the like, but I don't understand it at all and don't know who to turn to. The site could be 3-8 GB at most to start.

**Thank you :)**
1 answer · 0 votes · 21 views
asked 9 days ago
Hi, I am new to AWS. I planned to use Glacier for archiving data and decided on the Standard (3-5 hours) retrieval option.

1. As part of the plan, I went into the Glacier management console and created a vault. I then looked for the storage class but could not find it anywhere; can anyone help me with that?
2. Assuming it is a single storage class, how can I check the data retrieval information?

FYI: I used the FastGlacier client to upload the files. Please let me know the best practice for using Glacier as an archive service, and whether the approach I am taking makes sense.
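For the vault-based Glacier service there is no per-object storage class to find in the console; retrieval behavior is governed by the account's data retrieval policy, which can be inspected from the CLI. A minimal sketch, assuming a hypothetical vault name:
```
# Describe the vault (the vault name is hypothetical)
aws glacier describe-vault --account-id - --vault-name my-archive-vault

# Show the account-level data retrieval policy
# (strategy is one of FreeTier, BytesPerHour, or None)
aws glacier get-data-retrieval-policy --account-id -
```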
1 answer · 0 votes · 23 views
asked 15 days ago
In the past, a DMS migration task successfully created the transaction files in the cdc-files folder when the target endpoint was configured as an S3 bucket. Since a redeploy, the tasks have stopped creating the CDC transaction files, even though the Updates and Updates Applied metrics correctly show updates coming through CDC. Nothing has changed as far as I can see, and there is nothing in the CloudWatch logs indicating that files are being deposited. ![Updates Applied](/media/postImages/original/IMK8R92EUoTe-o6LLtu7izUQ) Target bucket, actual: ![No Files](/media/postImages/original/IMEbKCTAeSTz-SkEdtafCKGg) Target bucket, expected: ![Expected](/media/postImages/original/IMfoJMj0fdSa-Fb2jK71DxCg) Edit: For some unknown reason, the cdc-files are now being deposited in the full load directory. No changes can be found, but I am looking into the target endpoint settings at the moment. Any guidance, AWS? ![Here is where](/media/postImages/original/IM4_HF4WCsTai0nuHcVz-mTQ)
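CDC files landing in the full-load directory is consistent with the S3 target endpoint losing its `CdcPath` setting during the redeploy. A sketch of how to check and re-apply it, assuming a hypothetical endpoint ARN and bucket name:
```
# Inspect the S3 settings currently on the target endpoint (ARN is hypothetical)
aws dms describe-endpoints \
  --filters Name=endpoint-arn,Values=arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE

# Re-apply an explicit CDC folder if it is missing
aws dms modify-endpoint \
  --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE \
  --s3-settings '{"BucketName":"my-target-bucket","CdcPath":"cdc-files"}'
```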
1 answer · 0 votes · 57 views
asked 16 days ago
I configured a custom lifecycle rule: files older than 1 day must be migrated to Glacier. I uploaded a file yesterday (about 5 pm). As of 6 pm today, the file is still in the S3 Standard storage class. My lifecycle rule uses prefixes. I can't tell whether the configuration is wrong or whether I just have to wait for the job to run. Please help.
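Lifecycle rules are evaluated in a daily background job, so a transition can happen up to a day or more after an object becomes eligible; the rule also only matches when its prefix matches the key exactly from the start of the key (prefixes are case-sensitive). A quick way to double-check the rule as stored, assuming a hypothetical bucket name:
```
# Dump the lifecycle rules to verify the prefix and transition settings
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket
```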
Accepted Answer · Amazon S3 Glacier
3 answers · 0 votes · 37 views
luk3tt0
asked 16 days ago
We moved our files (~150 TB) from S3 Std to Glacier a few months back, but we noticed we were still being billed for 21 TB of S3 Std storage. I checked the metrics on the bucket and they agree with the billing: we still have 21 TB in S3 Std storage. I ran an inventory report and, other than about 200 MB of small chk files, everything is in Glacier. The bucket doesn't have versioning enabled currently, and I'm not aware that it ever did; I set this up many years ago. How do I find these apparently hidden files?
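One common source of Standard storage that never shows up in object listings or inventory reports is incomplete multipart uploads, which are still billed. A minimal check, assuming a hypothetical bucket name:
```
# Incomplete multipart uploads are billed as storage but are invisible
# to normal listings and inventory reports
aws s3api list-multipart-uploads --bucket my-bucket
```
If any show up, a lifecycle rule with an AbortIncompleteMultipartUpload action can clean them up automatically.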
1 answer · 0 votes · 30 views
asked 18 days ago
I am attempting to delete a Glacier vault that I am no longer uploading to or connecting to from the source device (a Synology NAS). I understand you can only delete a vault via the CLI. When entering the following command in CloudShell, I am presented with the message "Unknown Options: 074597366642": `aws glacier delete-vault --vault-name [vaultnamehere] --account-id [numberhere]` Any help would be greatly appreciated! Thank you, Gavin
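"Unknown Options" usually means the shell passed the account ID as a stray extra argument, which happens if the square brackets from the documentation are kept or a line break sneaks into the pasted command. A sketch of the expected form, using the account ID from the error message and a hypothetical vault name:
```
# Brackets in documentation are placeholders and must not be typed.
# A plain "-" can be used for --account-id to mean the current account.
aws glacier delete-vault --vault-name my-vault --account-id 074597366642
```
Note that delete-vault only succeeds once the vault contains no archives and Glacier's inventory has been refreshed to reflect that, which can take about a day after the last archive is deleted.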
1 answer · 0 votes · 23 views
asked 20 days ago
Hi, when trying to delete my hosted zones, I get this error: "Error occurred Bad request. (InvalidKeySigningKeyStatus 400: Key Signing Key with name datalabsai cannot be deleted because current status is not INACTIVE. You can use DeactivateKeySigningKey to deactivate the Key Signing Key before you delete it.)" I followed each step in the documentation but I am still not able to delete the hosted zone. Any solutions?
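The error says the key signing key (KSK) must be INACTIVE before it can be deleted, and a KSK can typically only be deactivated after DNSSEC signing is disabled for the zone. A sketch of the order of operations, assuming a hypothetical hosted zone ID (the KSK name comes from the error message):
```
# Disable DNSSEC signing first if this is the only active KSK
aws route53 disable-hosted-zone-dnssec --hosted-zone-id Z0EXAMPLE
# Then deactivate and delete the KSK
aws route53 deactivate-key-signing-key --hosted-zone-id Z0EXAMPLE --name datalabsai
aws route53 delete-key-signing-key --hosted-zone-id Z0EXAMPLE --name datalabsai
```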
1 answer · 0 votes · 20 views
asked 20 days ago
Good afternoon, and thank you in advance. I have a "folder" in a bucket (let's say bigdata/) with files stored in S3 Standard, dating from 2 years ago up until yesterday. I have applied a lifecycle policy to that folder to move the files to One Zone-IA storage.

1. The transition from S3 Standard to One Zone-IA requires objects to have been stored for a minimum of 30 days. That's fine, but the policy didn't seem to activate even for the files stored over a month ago. Does the counter start from the day the policy was applied?
2. If I manually converted bigdata/ to One Zone-IA from the AWS S3 console, would I be charged any "early-conversion" fees for the files that are less than a month old?

Thanks for your time. Derek
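On the first point, lifecycle transitions are evaluated against each object's creation date, not the date the rule was created, and the daily evaluation can lag a day or two; objects already older than 30 days should transition without the counter restarting. For reference, a minimal sketch of such a rule, with a hypothetical bucket name:
```
# Move everything under bigdata/ to One Zone-IA 30 days after creation
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "bigdata-to-onezone",
      "Status": "Enabled",
      "Filter": {"Prefix": "bigdata/"},
      "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}]
    }]
  }'
```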
Accepted Answer · Amazon S3 Glacier
1 answer · 0 votes · 25 views
Derek J
asked 22 days ago
I am trying to delete Glacier archives so that I can delete my vaults. Upon running an inventory using the CLI, it seems there are over 16,000 archives in one of my vaults (I have no idea how this happened). I'm seeking assistance, as it seems archives can only be deleted one at a time and that is not really feasible in this case! My goal is to delete my vaults so I am no longer paying for them. I have already dug fairly deep into the CLI and the AWS documentation and learned a lot in the process (I am not a coder), but now that I see the scope of the issue I can see manual deletion is not in the cards. Thank you in advance for any ideas!
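Deletion is indeed one archive per API call, but the calls can be scripted against the inventory output. A minimal sketch, assuming a completed inventory-retrieval job, a hypothetical vault name and job ID, and jq installed:
```
# Download the completed inventory job's output to a local file
aws glacier get-job-output --account-id - --vault-name my-vault \
  --job-id MY_INVENTORY_JOB_ID inventory.json

# Delete every archive listed in the inventory, one call per archive.
# The --archive-id= form guards against IDs that begin with a dash.
jq -r '.ArchiveList[].ArchiveId' inventory.json | while read -r id; do
  aws glacier delete-archive --account-id - --vault-name my-vault --archive-id="$id"
done
```
Once the vault's next inventory shows it empty, delete-vault should succeed.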
1 answer · 0 votes · 31 views
asked 24 days ago
Hello everyone. I have two AWS accounts, account A and account B. Account A has 30 TB of data archived in S3 Glacier Flexible Retrieval (5 buckets). I need to move this data to account B in S3 Standard storage. I'm trying this:

1. First, restore the objects from Glacier with this command (one bucket at a time):
```
aws s3 ls s3://bucket1 --recursive | awk '{print $4}' | xargs -L 1 aws s3api restore-object --restore-request '{"Days":10,"GlacierJobParameters":{"Tier":"Standard"}}' --bucket bucket1 --key
```
2. Sync the restored bucket between the accounts with this command. I've created the restored-bucket1 bucket in account B and applied all the required policies to it:
```
aws s3 sync s3://bucket1 s3://restored-bucket1
```
Even though the S3 console shows that the data has been restored:
```
Object: Amazon S3 > Buckets > Bucket1 > file1.png
Restoration status: Completed
Restoration expiry date: January 14, 2023, 21:00:00 (UTC-03:00)
```
I still get this error:
```
warning: Skipping file s3://bucket1/file1.png. Object is of storage class GLACIER. Unable to perform download operations on GLACIER objects. You must restore the object to be able to perform the operation. See aws s3 download help for additional parameter options to ignore or force these transfers.
```
All the policies that allow file transfer between the accounts are set up and working; I can sync other data that is not in Glacier with no issues. Can anyone help me?
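Restoring from Glacier Flexible Retrieval produces a temporary readable copy, but the object's storage class remains GLACIER, and `aws s3 sync` skips GLACIER objects by default even when they are restored. The CLI has a flag for exactly this case:
```
# Copies restored GLACIER objects instead of skipping them; the copy of
# any object whose restore has not completed yet will fail
aws s3 sync s3://bucket1 s3://restored-bucket1 --force-glacier-transfer
```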
3 answers · 1 vote · 38 views
asked 24 days ago
Same problem as stated in this thread: https://repost.aws/questions/QUDpuRoadAQHK9g9kpnzcuGg/cant-upload-to-elastic-beanstalk. Unfortunately, turning off CSP did not work for me as it did in that thread. Any other suggestions?
0 answers · 0 votes · 19 views
asked a month ago