How to convert MediaStore lifecycle policy to s3 lifecycle rule


We're starting to test s3 as a live stream origin for our regular latency live workflows. Currently, we have a number of MediaStore containers with finely tuned lifecycle policies. I'm trying to convert some of these policies to s3 lifecycle rules. The logic seems similar which isn't surprising since MediaStore is built on top of s3. However, I couldn't find any examples in the s3 Lifecycle config doc which addresses the unique needs of live streaming workflows. Basically I'm trying to expire different media fragments and manifest files at varying intervals for different use cases. I'll continue messing around with the LifeCycle Configuration XML but it would be nice to have a quickstart guide. Seems like a good topic for an AWS Media Blog post from Media Service folks looking for extra credit ;-)
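For reference, here is a rough side-by-side of the two policy shapes as I understand them from the docs — the bucket, prefix, and interval values are invented for illustration, and the MediaStore schema should be double-checked against the container's actual policy JSON:

```python
# Hypothetical comparison of MediaStore vs S3 lifecycle shapes.
# All names, prefixes, and intervals below are made up.

# MediaStore object lifecycle policies are JSON; "transient data" rules
# can expire objects seconds after creation (exact schema per the
# MediaStore docs -- verify against your container's policy).
mediastore_policy = {
    "rules": [
        {
            "definition": {
                "path": [{"wildcard": "stream/*.ts"}],
                "seconds_since_create": [{"numeric": [">", 300]}],
            },
            "action": "EXPIRE",
        }
    ]
}

# The closest S3 equivalent: a lifecycle rule filtered by prefix.
# Note that S3's Expiration action only accepts whole Days.
s3_lifecycle = {
    "Rules": [
        {
            "ID": "expire-segments",
            "Filter": {"Prefix": "stream/"},
            "Status": "Enabled",
            "Expiration": {"Days": 1},
        }
    ]
}

# With boto3 this would be applied roughly as (bucket name is made up):
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-live-origin", LifecycleConfiguration=s3_lifecycle)
```

The key mismatch is visible in the two dicts: MediaStore expresses age in seconds, while S3 only takes whole days.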

3 Answers

It seems you can use Elemental Live, which will rotate files for HLS and DASH and by default keeps only the last 21 segments on S3. Have you checked that with your Elemental Live support? It is configurable when you create a Live event: https://docs.aws.amazon.com/elemental-live/latest/ug/elemental-live-ug.pdf

While using S3 you will be charged according to https://aws.amazon.com/s3/pricing/. DELETE and CANCEL requests are free, but LIST requests (to get a list of objects) are not, and their price varies slightly by region.

DeleteObject: deletes one object (you need to know the exact key).
DeleteObjects: deletes up to 1,000 objects per request (you need to know the exact keys).

So, if you can wait, emptying a bucket with S3 Lifecycle rules is the cheapest and most efficient way.
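If waiting for a lifecycle run isn't an option, the DeleteObjects batching mentioned above can be sketched like this. This is pure Python that only builds the request payloads; the actual boto3 call is left as a comment since the bucket and key names here are invented:

```python
def delete_batches(keys, batch_size=1000):
    """Group object keys into DeleteObjects request payloads.

    S3's DeleteObjects API accepts at most 1,000 keys per request,
    so longer key lists must be split across multiple calls.
    """
    batches = []
    for i in range(0, len(keys), batch_size):
        chunk = keys[i:i + batch_size]
        batches.append({
            "Objects": [{"Key": k} for k in chunk],
            "Quiet": True,  # suppress per-key success entries in the response
        })
    return batches

# Each payload would then be sent with (bucket name is made up):
#   s3.delete_objects(Bucket="my-live-origin", Delete=payload)
batches = delete_batches([f"stream/seg{i}.ts" for i in range(2500)])
```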

By the way, I have run some tests and S3 really performs better than MediaStore for some use cases. I got 8s latency while streaming from OBS -> MediaLive -> S3 -> CloudFront.

In the meantime if you have any further information to help with the investigation feel free to reach out to me.

AWS
answered 2 years ago
  • I did some testing with our Elemental rep and can confirm that the keep_segments setting in Elemental Live does indeed work with S3. I lowered the parameter to 10 segments and saw that the DELETE or lifecycle setting works as expected; there were never more than 10 media segments in the S3 bucket.

    For anyone else who may be reading this, I'd recommend setting use_subdirectories = true in Elemental Live. This way the media segments are saved in subdirectories while the manifest and .init files (for DASH packages) are saved in the root directory. This should make things easier if you plan to do any fancy lifecycle rules using prefixes.

    For the time being, it's good enough that we can configure the lifecycle of the media segments. I'm not sure how performance will be affected for our 24x7 linear streams if the manifest isn't expired at a more regular interval (we're currently expiring the manifest after 10 seconds in MediaStore).
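The prefix-based approach described in the comment above could look roughly like this with use_subdirectories = true, i.e. segments landing under a subdirectory while manifests stay in the root. The rule IDs, prefix, and day counts are made up for the sketch:

```python
# Illustrative S3 lifecycle configuration: expire media segments under
# their subdirectory while leaving root-level manifests alone.
# All IDs, prefixes, and day counts are invented.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-media-segments",
            "Filter": {"Prefix": "segments/"},  # assumes use_subdirectories = true
            "Status": "Enabled",
            "Expiration": {"Days": 1},
        },
        {
            # Housekeeping: clean up stale multipart uploads bucket-wide.
            # An empty prefix matches everything, so scope carefully.
            "ID": "abort-stale-multipart",
            "Filter": {"Prefix": ""},
            "Status": "Enabled",
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
        },
    ]
}
```

Manifests in the root are simply left out of the segment rule; nothing in this configuration touches them.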


Hi, it looks like you're trying to convert/import MediaStore lifecycle policies into an S3 lifecycle configuration. You are right: we don't have this feature yet. It sounds like a good feature request, and I'll check whether it could become a blog post. S3 lifecycle rules are similar but serve different purposes, as AWS MediaStore is designed for media content delivery. MediaStore is a great choice for storing fragmented video files when you need strong consistency, low-latency reads and writes, and the ability to handle high volumes of concurrent requests. If you don't deliver live streaming video, consider using Amazon Simple Storage Service (Amazon S3) instead.

Can I ask one quick question? How are you uploading content to your S3 bucket? Is it MediaLive or another encoder?

Let me know if I've understood your question properly and I'll be happy to provide any other clarifications.

Jorge

AWS
answered 2 years ago

Hi Jorge. We will be using Elemental Live to output the video files to S3. To be fair, the use of S3 as a live origin probably won't be commonplace (if ever?) until maybe 2023-Q3, when the planned upgrades to MediaPackage are generally available and more folks start to re-evaluate their MediaStore workflows. I'm definitely jumping the gun here, mainly to see if using S3 can solve some other transit issues we've been experiencing with MediaStore. While I wait for our Direct Connect service to be set up, I figured I'd try some alternative workflows to see if there are any improvements.

Btw, the main difference I've noticed between MediaStore and S3 lifecycle policies, aside from the configuration format, is the granularity of the rules. With MediaStore we have rules that expire content by the second, but with S3 it seems content can only be expired in whole days. I'm not sure how this will affect our storage costs since we have a number of 24x7 linear streams, but it doesn't look promising.
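One possible workaround for the whole-days limitation is a small scheduled cleanup job (cron, or a scheduled Lambda) that lists objects and deletes anything older than N seconds. A minimal sketch of the age-filtering piece, assuming a list of (key, last_modified) pairs shaped like the Key/LastModified fields that S3's ListObjectsV2 returns:

```python
from datetime import datetime, timedelta, timezone

def expired_keys(objects, max_age, now=None):
    """Return the keys of objects older than max_age (a timedelta).

    `objects` is a list of (key, last_modified) pairs. Running a filter
    like this from a scheduled job is one way to approximate MediaStore's
    seconds-level expiration, since S3 lifecycle rules only take whole days.
    """
    now = now or datetime.now(timezone.utc)
    return [key for key, last_modified in objects
            if now - last_modified > max_age]

# Example with invented keys and timestamps:
fixed_now = datetime(2023, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
objs = [
    ("stream/seg1.ts", fixed_now - timedelta(seconds=45)),
    ("stream/seg2.ts", fixed_now - timedelta(seconds=5)),
]
stale = expired_keys(objs, timedelta(seconds=30), now=fixed_now)
```

The selected keys would then be passed to DeleteObjects in batches. Whether the extra LIST/DELETE request costs beat simply letting objects live out a one-day lifecycle rule depends on segment volume, so it is worth pricing out first.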

answered 2 years ago
