Elegant way to transfer data from IoT Core things to ECS container service

0

I am looking for an elegant, best-practice way to integrate data from AWS IoT Core into an existing ECS cluster workload. AWS provides a variety of options, but I am not confident enough in any of them to commit to a deep dive.

I have IoT Things (speaking MQTT) in the IoT Core registry and successfully use the ‘Shadow’ feature to manage them; one of my containers in the ECS cluster interfaces with the Shadow service, and I am relatively happy with it.

I am now adding features to my devices, so I need more frequent data transfers: extra telemetry, plus events triggered by a device that should notify my ECS task, and so on. The ‘Shadow’ service does not fit this very well. My current idea is to publish from the device via ‘Basic Ingest’ and use an IoT rule to redirect the data somewhere. But here the confusion starts: there is no elegant, transparent rule action that targets an ECS destination directly. Is there another way I am missing?

Generally speaking, I need advice on this subject. I can already see that one of my future tasks may be to implement and manage my own ‘Shadow’-like feature inside my ECS containers, bypassing the AWS-managed IoT Shadow service entirely. That obviously requires an elegant and transparent way to move data back and forth between IoT Core (the MQTT broker) and an ECS container. I understand this is a very general topic, but I hope experienced practitioners will share their thoughts and ideas.


Note: for reference, here is what I am considering:

(Option 1) Use an HTTPS destination. I already use API Gateway to talk to my backend on ECS, so technically an IoT rule could push to the existing HTTP API Gateway URL. But that feels odd, as the HTTPS rule action is basically meant for egress scenarios, isn’t it? After the IoT rule fires, my data is already inside ‘my AWS space’, so this would re-send it to a URL, and I would need extra effort to ensure the URL can only be reached by my IoT rule (which is itself a question and extra hassle). I sense it is not optimal, but it is potentially the most direct option.

(Option 2) Use an intermediate buffering service such as Kinesis Data Firehose or SQS. But their usage patterns and pricing models need consideration. Some data must reach my ECS container in near real time so it can act on it; other data is event-like and arrives at random intervals, maybe once an hour or once a day (unknown time range). I would prefer to avoid adding extra services in between as much as possible, unless the trade-off is clearly worth it.

(Option 3) Use Lambda or write directly to DynamoDB (??). Most of what happens in my backend ends up as DynamoDB entries anyway (almost always). I have full control of my device firmware and can do some preparation there: I could potentially shape ‘Basic Ingest’ publishes so they are ready to be put into DynamoDB (or via Lambda), but that obviously makes the whole project tightly coupled, i.e. firmware-dependent. I like how the IoT Core ‘Shadow’ feature separates my IoT fleet from my compute service, so I am looking for the same separation, but more advanced.

(Option 4) Kafka and related? A huge investment and the most costly path; I am not big enough for it and I am sure it would be overkill. I do see AWS supports it as a rule target, up to a self-managed deployment inside my own VPC (which is sort of an option alongside my ECS cluster, am I right?).

Am I on the wrong track here?

3 Answers
4

Hello,

Use an AWS IoT rule to send the data to an Amazon SQS queue, then have your ECS service poll and process messages from that queue. This approach decouples the two systems, which keeps things simple and lets each side scale independently.

EXPERT
answered 4 months ago
2

Hi Ize_hedgehog,

Please go through the steps below; they should help resolve your issue.

Using SQS with ECS Integration

Benefits:

  • Decoupling: SQS decouples the IoT data ingestion from the processing logic in ECS, allowing both systems to scale independently.
  • Reliability: SQS provides reliable message delivery, ensuring that no data is lost during the transfer from IoT Core to ECS.
  • Flexibility: This setup can handle varying data loads efficiently, and ECS tasks can process messages as they arrive or in batches.
  • Simplicity: Compared to Kafka or direct HTTP integrations, using SQS is relatively straightforward to set up and manage.

Implementation Steps:

Create an SQS Queue:

  • Go to the AWS Management Console and navigate to Amazon SQS.
  • Create a new queue (standard or FIFO, based on your needs).

Set Up an IoT Rule to Route Messages to SQS:

  • In AWS IoT Core, create a rule to capture the telemetry data or events from your IoT devices.
  • Configure the rule to send the data to the SQS queue you created.

Example IoT Rule SQL:

SELECT * FROM 'iot/topic'

Set the action to "Send a message to an Amazon SQS queue."
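If you create the rule from the CLI or infrastructure-as-code instead of the console, the rule payload could look roughly like this sketch (the queue URL and role ARN are placeholders; the role must grant `sqs:SendMessage` on the queue):

```json
{
  "sql": "SELECT * FROM 'iot/topic'",
  "awsIotSqlVersion": "2016-03-23",
  "actions": [
    {
      "sqs": {
        "queueUrl": "https://sqs.<region>.amazonaws.com/<account_id>/<queue_name>",
        "roleArn": "arn:aws:iam::<account_id>:role/<iot-rule-role>",
        "useBase64": false
      }
    }
  ]
}
```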

Implement an SQS Poller in ECS:

  • Develop a service within your ECS tasks to poll the SQS queue for new messages.
  • The service can use the AWS SDK to read messages from the queue and process them.

Example code snippet for polling SQS (Python using Boto3):

import boto3

sqs = boto3.client('sqs')

def poll_sqs(queue_url):
    while True:
        # Long polling (WaitTimeSeconds) avoids busy-looping and empty receives
        response = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,
        )
        for message in response.get('Messages', []):
            process_message(message)
            # Delete only after successful processing, otherwise the message
            # becomes visible again after the visibility timeout
            sqs.delete_message(
                QueueUrl=queue_url,
                ReceiptHandle=message['ReceiptHandle'],
            )

def process_message(message):
    # Your processing logic here
    print("Received message:", message['Body'])

queue_url = 'https://sqs.<region>.amazonaws.com/<account_id>/<queue_name>'
poll_sqs(queue_url)
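Inside process_message you will typically parse the JSON payload the device published, since the rule forwards it verbatim as the SQS message body. A minimal sketch, assuming a hypothetical payload shape with device_id and temperature fields (use whatever schema your firmware actually publishes):

```python
import json

def parse_telemetry(sqs_body):
    """Parse an IoT telemetry payload forwarded by a rule as an SQS message body.

    The field names (device_id, temperature) are illustrative assumptions,
    not part of any AWS schema.
    """
    payload = json.loads(sqs_body)
    return {
        'device_id': payload['device_id'],
        'temperature': float(payload['temperature']),
    }

# Example: a body as it might arrive from the rule action
body = '{"device_id": "sensor-42", "temperature": "21.5"}'
print(parse_telemetry(body))
```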

Security and Monitoring:

  • Ensure that your ECS tasks have the necessary IAM permissions to interact with the SQS queue.
  • Set up CloudWatch alarms and logs for monitoring the SQS queue and ECS tasks to ensure they are operating correctly and to detect any issues promptly.
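For reference, the ECS task role for the poller only needs the receive/delete pair plus queue attributes; a minimal policy sketch (the queue ARN is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:<region>:<account_id>:<queue_name>"
    }
  ]
}
```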

Optional - Use Lambda for Event Handling:

  • For specific real-time event handling, you can also trigger AWS Lambda functions directly from IoT rules.
  • Lambda functions can process the data immediately or forward it to other services like ECS or DynamoDB.
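If you go the Lambda route, the rule invokes the function with the selected message payload as the event. A minimal handler sketch; the field names here are assumptions (with `SELECT * FROM 'iot/topic'`, the event is simply the JSON the device published):

```python
def lambda_handler(event, context):
    """Handle an event delivered by an AWS IoT rule.

    The device_id and event_type fields are illustrative; real routing
    logic would forward to ECS (e.g. via an internal API) or DynamoDB.
    """
    device_id = event.get('device_id', 'unknown')
    event_type = event.get('event_type', 'telemetry')
    return {'device_id': device_id, 'event_type': event_type}

# Local example invocation
print(lambda_handler({'device_id': 'sensor-42', 'event_type': 'alarm'}, None))
```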
EXPERT
answered 4 months ago
0

Hi. Some links for your consideration:

For your ECS consumption, the two answers already here (recommending SQS for decoupling) are good answers. However, since SQS is priced per request and IoT messages are generally small, I think Kinesis Data Streams will probably be cheaper. There's an existing rule action you can use: https://docs.aws.amazon.com/iot/latest/developerguide/kinesis-rule-action.html.
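To illustrate the pricing point with back-of-envelope numbers: SQS charges per request, while Kinesis charges per shard-hour plus per 25 KB PUT payload unit, so a steady stream of small messages can favor Kinesis. The rates below are illustrative only; check current AWS pricing for your region.

```python
# Back-of-envelope comparison for ~100 msg/s of small (<1 KB) messages.
# Rates are illustrative assumptions, not authoritative pricing.
SQS_PER_MILLION_REQUESTS = 0.40        # USD, standard queue
KINESIS_SHARD_HOUR = 0.015             # USD per shard-hour
KINESIS_PER_MILLION_PUT_UNITS = 0.014  # USD per million 25 KB PUT payload units

msgs_per_month = 100 * 3600 * 24 * 30  # ~259M messages

# SQS: each send + receive + delete is a billable request (batching aside)
sqs_cost = 3 * msgs_per_month / 1e6 * SQS_PER_MILLION_REQUESTS

# Kinesis: one shard ingests up to 1000 records/s, so one shard suffices;
# each small message consumes one PUT payload unit
kinesis_cost = (720 * KINESIS_SHARD_HOUR
                + msgs_per_month / 1e6 * KINESIS_PER_MILLION_PUT_UNITS)

print(f"SQS ~ ${sqs_cost:.2f}/month, Kinesis ~ ${kinesis_cost:.2f}/month")
```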

I could potentially shape ‘Basic Ingest’ publishes so they are ready to be put into DynamoDB (or via Lambda), but that obviously makes the whole project tightly coupled, i.e. firmware-dependent.

Basic Ingest will increase the coupling (and reduce your flexibility), but it can be a very large cost saving.
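For context on the coupling and the saving: with Basic Ingest the device publishes straight to the rule's reserved topic, skipping the broker's per-message charge, but the firmware now needs to know the rule name. A small sketch of the topic format (the rule name below is a hypothetical example):

```python
def basic_ingest_topic(rule_name, suffix=""):
    """Build the reserved Basic Ingest topic for an AWS IoT rule.

    Publishing to $aws/rules/<rule_name> delivers the message directly to
    the rule, bypassing the message broker (and its per-message charge).
    An optional suffix is still available to the rule's SQL via topic().
    """
    topic = f"$aws/rules/{rule_name}"
    return f"{topic}/{suffix}" if suffix else topic

print(basic_ingest_topic("telemetry_to_sqs", "sensor-42"))  # hypothetical rule name
```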

I like how the IoT Core ‘Shadow’ feature separates my IoT fleet from my compute service, so I am looking for the same separation, but more advanced.

For fast/frequently changing data, shadows can be expensive. They're best suited for slow/rarely changing status, configuration and command/control.

AWS
EXPERT
Greg_B
answered 4 months ago
