IVS Real-Time Broadcast SDK


I'm using IVS real-time streaming and I would like server-side code (Lambda or EC2) to join a stage as a participant so that it can stream pre-recorded audio and process individual participants' streams server-side.

Is there a way to do this?

The IVS real-time streaming Web broadcast SDK is browser only and doesn't work in a Node.js environment.

I've read about Server-Side Composition, but this appears to mix audio and video from all stage participants and then send the mixed output to an IVS channel. I'd like to process participants' streams individually server-side. Server-Side Composition also appears to send video to an IVS channel, and my participant streams will be audio only.


Asked 3 months ago · 116 views
1 Answer
  1. Use an IVS Channel as a Transport Medium: While the IVS Real-Time Streaming Web broadcast SDK isn't usable in a Node.js environment, you can still use IVS channels as a transport medium for your server-side processing.

    1. Set Up a Node.js Server: Create a Node.js server that acts as an intermediary between your pre-recorded audio source and the IVS channel. This server is responsible for joining the IVS channel as a participant, receiving pre-recorded audio and sending it to the channel as if it were a participant's audio stream, and processing individual participant streams as required.

    2. Stream Pre-recorded Audio: Your Node.js server can read pre-recorded audio files and stream them to the IVS channel. This allows you to simulate a participant's audio stream from the server side.

    3. Process Individual Participant Streams: While Server-Side Composition mixes audio and video from all participants, you can still access individual participant streams by subscribing to the IVS channel's low-latency HLS stream. Your Node.js server can consume this HLS stream, extract the individual audio streams, and process them as needed, using tools such as ffmpeg or GStreamer for audio-processing tasks like noise reduction, transcription, or analysis.

  2. Custom Audio Processing: Implement custom logic in your Node.js server to process individual participant audio streams. This could include tasks such as real-time transcription, sentiment analysis, or any other audio processing you require.

  3. Integration with IVS Channel: Ensure that your Node.js server integrates seamlessly with the IVS channel by handling stream ingestion, participant management, and any other necessary interactions with the IVS service.
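As a rough sketch of the ingestion step above, a Node.js server can shell out to ffmpeg to push a pre-recorded audio file to the channel's RTMPS ingest endpoint. The endpoint, stream key, and file name below are placeholders, and note that some channel configurations also expect a video track (a blank lavfi source can be added to the ffmpeg command if so):

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Build the ffmpeg argument list for pushing a pre-recorded audio file to an
// IVS channel's RTMPS ingest endpoint. IVS low-latency ingest takes FLV over
// RTMPS on port 443, with AAC audio.
function buildPushArgs(audioFile: string, ingestEndpoint: string, streamKey: string): string[] {
  return [
    "-re",                 // pace reading at the file's native rate
    "-stream_loop", "-1",  // loop the file indefinitely
    "-i", audioFile,
    "-c:a", "aac",         // transcode audio to AAC for IVS
    "-b:a", "128k",
    "-f", "flv",
    `rtmps://${ingestEndpoint}:443/app/${streamKey}`,
  ];
}

// Launch ffmpeg as a child process; callers should watch "exit"/"error"
// events and restart the push if the connection drops.
function startPush(audioFile: string, ingestEndpoint: string, streamKey: string): ChildProcess {
  return spawn("ffmpeg", buildPushArgs(audioFile, ingestEndpoint, streamKey), { stdio: "inherit" });
}
```

Looping with `-stream_loop -1` keeps the "participant" live for as long as the server process runs; drop that flag to play the file once.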

By following this approach, you can achieve server-side processing of individual participant streams in an IVS environment, even without direct support for the IVS Real-Time Streaming Web broadcast SDK in Node.js. This allows you to leverage the capabilities of IVS while still having full control over audio processing and other server-side functionalities.
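For the consumption side, a similar sketch, assuming the stream you want to process is reachable at an HLS playback URL, uses ffmpeg to decode the audio to raw PCM that your Node.js code can feed into whatever processing you need (the function names and the 16 kHz default are illustrative):

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Build ffmpeg args that pull an HLS playback URL, drop any video, and decode
// the audio to raw 16-bit mono PCM on stdout for downstream processing.
function buildPullArgs(playbackUrl: string, sampleRate = 16000): string[] {
  return [
    "-i", playbackUrl,
    "-vn",                      // ignore video tracks
    "-ac", "1",                 // downmix to mono
    "-ar", String(sampleRate),  // resample (16 kHz suits speech processing)
    "-f", "s16le",              // raw signed 16-bit little-endian PCM
    "pipe:1",                   // write to stdout
  ];
}

// Feed decoded PCM chunks to a callback (e.g. a transcription or analysis client).
function startPull(playbackUrl: string, onChunk: (pcm: Buffer) => void): ChildProcess {
  const ff = spawn("ffmpeg", buildPullArgs(playbackUrl), {
    stdio: ["ignore", "pipe", "ignore"],
  });
  ff.stdout!.on("data", onChunk);
  return ff;
}
```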

Answered 3 months ago
