SageMaker Feature Store Spark connector is duplicating data


Hi!

I'm using the Feature Store Spark connector to ingest data into the SageMaker Feature Store, and when we try to ingest data into a Feature Group with the online store enabled, the data is duplicated. In the image below, "customer_id" is the ID feature and "date_ref" is the event-time column. All the features are equal for the same ID and EventTime column, except "api_invocation_time".

[Image: duplicated records in the offline store]
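For context, this is roughly how we spot the duplicates when we read the offline store back into Spark (the S3 path is a placeholder for the feature group's offline store location, and "spark" is the active SparkSession):

from pyspark.sql import functions as F

# Placeholder path: the feature group's offline store prefix in S3.
df = spark.read.parquet("s3://<offline-store-bucket>/<prefix>/")

# Rows sharing the same ID feature and event-time column but appearing more
# than once are the duplicates shown above.
duplicates = (
    df.groupBy("customer_id", "date_ref")
      .count()
      .filter(F.col("count") > 1)
)
duplicates.show()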

If the feature group doesn't have the online store enabled, we ingest the data directly into the offline store without issues. But when we use the connector's default ingestion behavior (not specifying "target_stores", so the connector uses the PutRecord API), the ingested data is duplicated:

params = {
    "input_data_frame": dataframe,
    "feature_group_arn": feature_group_arn,
}

# Without "target_stores", the connector ingests through the PutRecord API;
# when the online store is disabled we target the offline store directly.
if not online_store_enabled:
    params["target_stores"] = ["OfflineStore"]
    logger.info("Ingesting data to the offline store")

pyspark_connector.ingest_data(**params)
logger.info("Finished the ingestion!")

failed_records = pyspark_connector.get_failed_stream_ingestion_data_frame()

How can I solve this issue using the connector?

EDIT:

Apparently, the problem is in the get_failed_stream_ingestion_data_frame method. Instead of just returning a DataFrame, this method ingests the data again before returning the failed records. Removing the call from the ingestion pipeline resolves the issue, although we lose a form of validation.
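In practice the workaround looks like this: only call the method when we explicitly opt into the validation (and accept the extra ingestion). Here "run_validation" is a flag of our own pipeline, not a connector option:

pyspark_connector.ingest_data(**params)
logger.info("Finished the ingestion!")

# Workaround: get_failed_stream_ingestion_data_frame() re-runs the ingestion
# in this connector version, so we only call it when validation is explicitly
# requested ("run_validation" is our own flag, not part of the connector API).
if run_validation:
    failed_records = pyspark_connector.get_failed_stream_ingestion_data_frame()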

Asked 1 year ago · Viewed 362 times
1 Answer
Accepted Answer

This issue should be patched in 1.1.1. If you upgrade from 1.1.0, get_failed_stream_ingestion_data_frame should no longer trigger any re-computation.
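(For reference, a quick way to check which connector version is installed; the distribution name below assumes the Spark 3.1 build of the connector, so adjust the suffix to your Spark version.)

# Upgrade with: pip install --upgrade sagemaker-feature-store-pyspark-3.1
# (the exact package name depends on your Spark version).
import importlib.metadata
print(importlib.metadata.version("sagemaker-feature-store-pyspark-3.1"))  # expect >= 1.1.1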

AWS
Can Sun
Answered 1 year ago
