SageMaker Batch Transform for a TensorFlow model


I have trained a model on our own data that takes two embedding vectors as input and returns a probability score as output. So far I have been hosting the model as a real-time endpoint and querying it periodically from Lambda functions. However, the dataset has grown rapidly (around 2.2 million rows now), and I need to run the model as a batch transform job instead. I can't find any good examples or details about how to do this for my particular case. My input data has four columns, user_id, user_embedding, post_id, post_embedding, in .parquet or .json format. The model takes the user_embedding and post_embedding as input and outputs the probability score. Can someone point me in the right direction, or tell me if there's a better solution?

The model is a TensorFlow deep learning model whose artefacts are saved in an S3 bucket. The input data is also stored in an S3 bucket.
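
For reference, a single row of the input data looks roughly like this (the IDs and values below are made up and the embeddings are truncated; the real vectors are much longer):

```python
# Illustrative record only -- real IDs and embedding lengths differ.
example_row = {
    "user_id": "u_00123",
    "user_embedding": [0.12, -0.87, 0.33, 0.05],
    "post_id": "p_00456",
    "post_embedding": [0.91, 0.04, -0.22, 0.48],
}
# The model scores one (user_embedding, post_embedding) pair at a time
# and returns a single probability, e.g. 0.73.
```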

Sarath
Asked 10 months ago · 341 views
1 Answer
Accepted Answer

Hi Sarath,

  1. Create the model in the SageMaker console or with the CreateModel API, specifying the inference container image that matches the model framework (the TensorFlow Serving container in your case) along with the S3 location that holds the model artefacts, including any inference code.
  2. Create a batch transform job in the SageMaker console or with the CreateTransformJob API; parallelise the predictions across multiple instances and use the MultiRecord batch strategy to speed up batch inference, sized according to the dataset volume.
  3. Start the transform job. A sketch of these steps with the SageMaker Python SDK is shown below.
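
Here is a minimal sketch of those steps using the SageMaker Python SDK. The S3 paths, IAM role, framework version, instance type, and instance count are placeholders rather than values from your setup, so adjust them to your environment:

```python
import sagemaker
from sagemaker.tensorflow import TensorFlowModel

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder IAM role

# Step 1: register the model from the TensorFlow SavedModel artefacts in S3.
model = TensorFlowModel(
    model_data="s3://my-bucket/model/model.tar.gz",  # placeholder artefact path
    role=role,
    framework_version="2.11",  # match the version the model was trained with
    sagemaker_session=session,
)

# Step 2: create a transformer that spreads the work over several instances
# and batches multiple records per request (MultiRecord strategy).
transformer = model.transformer(
    instance_count=2,                       # scale out based on data volume
    instance_type="ml.m5.xlarge",
    strategy="MultiRecord",
    max_payload=6,                          # MB per mini-batch request
    output_path="s3://my-bucket/batch-output/",  # placeholder output path
    assemble_with="Line",
    accept="application/jsonlines",
)

# Step 3: start the transform job on JSON Lines input stored in S3,
# splitting the input so each line is one record.
transformer.transform(
    data="s3://my-bucket/batch-input/",     # placeholder input path
    content_type="application/jsonlines",
    split_type="Line",
)
transformer.wait()
```

Note that the stock TensorFlow Serving container does not read Parquet directly, so you would either convert the data to JSON Lines (or CSV) containing just the fields the model expects, or pass an entry_point script to TensorFlowModel with custom input/output handlers that map your columns to the model inputs.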

Check an example here.

AWS
Answered 10 months ago
Expert reviewed 4 months ago
