1 Answer
Use presigned URLs, generated either from S3 directly (https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html) or through CloudFront (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html).
For listing, if you are using Python you can try paginators: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html; otherwise, use your SDK's response paging, e.g. https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/the-response-object.html#response-paged-data
Thanks Antonio, I know about presigned URLs. I'm sure I'll use that solution, or CloudFront because it's more secure. My question is how to optimize the S3 LIST, because with 200,000 files every LIST is expensive: it scans through the whole listing to filter by file name. The output isn't large, around 10 results, but everything is scanned to return them. The ideal scenario would be a supporting inventory that the application can query cheaply. There's S3 Inventory, but it seems it can only update once a day, and I need it to update six times a day.