Greetings, Based on the AWS documentation, it appears that Snowball Edge cannot write to buckets if you have turned on S3 Object Lock, regardless of whether or not the data on the Snowball is unique. This is likely because S3 Object Lock is intended to provide strict control over the retention and deletion of objects in S3, which could conflict with the data transfer process of Snowball.
If you do have object locked buckets, one option could be to create a new bucket for the import and then move the data into the correct bucket once it has been uploaded. However, it would be best to consult with AWS support to determine the best approach for your specific use case.
It is worth noting that the Snowball documentation does state that "If your security policies prevent Snowball Edge from accessing your bucket, you must create a new bucket that allows access for the duration of the job." Therefore, creating a new bucket for the transfer is likely the recommended approach if the original bucket is subject to S3 Object Lock or to IAM policies that prevent writing to it. Let me know if this answered your question.
Given that you can't upload your Snowball files to the Object-Locked S3 bucket, and assuming you want everything to end up in those buckets, you may want to set up a new bucket as the Snowball ingestion target, then use a Lambda function to move (copy, then delete) the objects into the Object-Locked S3 bucket. Obviously, you'll need the appropriate IAM permissions.
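A minimal sketch of that move step, assuming the staging bucket is configured to trigger the Lambda on object creation and the event follows the standard S3 notification format. The destination bucket name here is hypothetical, the Lambda's role needs read/delete on the staging bucket and write on the destination, and objects over 5 GB would need multipart copy or S3 Batch Operations instead of a single `copy_object` call:

```python
import urllib.parse

def records_from_event(event):
    """Extract (bucket, key) pairs from an S3 put-notification payload."""
    pairs = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # S3 notification keys are URL-encoded (spaces arrive as '+')
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        pairs.append((bucket, key))
    return pairs

def lambda_handler(event, context):
    import boto3  # deferred so the parsing helper above is testable without AWS deps

    s3 = boto3.client("s3")
    for bucket, key in records_from_event(event):
        # copy_object handles objects up to 5 GB; larger ones need multipart copy
        s3.copy_object(
            Bucket="my-object-locked-bucket",  # hypothetical destination bucket
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
        )
        s3.delete_object(Bucket=bucket, Key=key)
```

This copies each new object into the locked bucket (where the bucket's default retention settings apply to the new copy) and then removes it from the staging bucket.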
Thanks for your input, I appreciate it. To be fair, I assumed that would be the required course of action, and I appreciate the confirmation that it is indeed the most likely route to take. Regarding the documentation, I see what you're saying; my point was more that a pointer to where that warning is stated would have been helpful, since otherwise it's a bit confusing how to move forward once you hit it.