For future reference, and for the poor souls Googling to the ends of the earth like me: trying to trust the built-in IoT Core to DynamoDB rule action will not bear fruit. Do it with Lambda instead, using the Boto3 Python library and its documentation. It's far more reliable, and CloudWatch actually tells you if something is wrong.
Prerequisites: make sure the Lambda function has permissions to write to your DynamoDB table! You can reuse the same rule as before, from the wrong/outdated/borked system documentation.
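As a sketch of the permissions involved, the Lambda's execution role needs a policy along these lines (region, account ID, and table name below are placeholders, not values from this thread):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/YOUR_DYNDB_TABLE_NAME"
    }
  ]
}
```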
Here's some example code:

```python
import json
import boto3
import logging
from decimal import Decimal

logger = logging.getLogger()
logger.setLevel(logging.INFO)

table_name = 'YOUR_DYNDB_TABLE_NAME'

def lambda_handler(event, context):
    # Initialize the DynamoDB table resource
    db = boto3.resource('dynamodb')
    table = db.Table(table_name)

    # Extract the MQTT JSON message fields from the event
    primary_key = event.get('PRIMARY_KEY')
    secondary_key = event.get('SECONDARY_KEY')
    # Other data can be extracted here however you may need...
    other_data = event.get('OTHER_DATA')
    logger.info('PRIMARY_KEY: %s', primary_key)
    logger.info('SECONDARY_KEY: %s', secondary_key)
    logger.info('OTHER_DATA: %s', other_data)

    # The JSON has to be dumped and loaded again: if the received data
    # contains a float, it has to be converted into Decimal for DynamoDB.
    item = json.loads(json.dumps({
        'primary_key': primary_key,
        'secondary_key': secondary_key,
        'other_data': other_data,
        # More data here...
    }), parse_float=Decimal)

    # DynamoDB insert
    table.put_item(Item=item)

    return {
        'statusCode': 200,
        'body': json.dumps('Item written to DynamoDB')
    }
```
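The float-to-Decimal round trip is the part that most often trips people up, so here is that conversion in isolation as a minimal, standalone sketch (field names are made up for illustration):

```python
import json
from decimal import Decimal

# DynamoDB rejects Python floats, so round-trip the payload through JSON,
# parsing every float back as a Decimal on the way in.
payload = {'sample_time': 1234567890, 'temperature': 22.5}
item = json.loads(json.dumps(payload), parse_float=Decimal)

print(type(item['temperature']).__name__)  # Decimal
print(item['sample_time'])                 # 1234567890 (ints are unaffected)
```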
Hi. I'm sorry to see that it was such a frustrating experience. I believe I know what the problem is. Step 1 of the tutorial instructs you to create a table with a partition key and sort key, both of type Number. Step 2 instructs you to create the rule and states:
In Table name, choose the name of the DynamoDB table you created in a previous step: wx_data. The Partition key type and Sort key type fields are filled with the values from your DynamoDB table.
However, the GUI is failing to fill the key types with NUMBER and instead leaves them at the default of STRING.
I'm not sure if this is a regression in the front end code, or a deliberate change that requires a change in the tutorial. I've raised an internal issue to pursue it.
In your question, you show the StartingRuleExecution event type. Per the documentation, this only proves that the rule execution commenced, not that the rule thinks it succeeded. If you search your CloudWatch logs for event type RuleExecution, I'm reasonably sure you'll find an error such as:
One or more parameter values were invalid: Type mismatch for key sample_time expected: N actual: S
If you correct the definition of the rule action, the problem is fixed, and you'll see that the DynamoDB rule action works just fine.
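To make the N-versus-S mismatch concrete, here is a toy sketch (not AWS's actual serializer) of how Python values map to DynamoDB attribute type tags. A key sent as a string serializes as S, which a table expecting a Number (N) key rejects:

```python
from decimal import Decimal

def dynamodb_type(value):
    """Toy mapping from a Python value to a DynamoDB attribute type tag."""
    if isinstance(value, bool):
        return 'BOOL'
    if isinstance(value, (int, float, Decimal)):
        return 'N'
    if isinstance(value, str):
        return 'S'
    raise TypeError(f'unsupported type: {type(value).__name__}')

# A rule whose Sort key type is left at STRING sends the key as S...
print(dynamodb_type('1684182100'))  # S
# ...but a table whose sort key is of type Number expects N.
print(dynamodb_type(1684182100))    # N
```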
Nonetheless, I apologize for the inconvenience. There is a mismatch between the console behaviour and the tutorial instructions. I will get that fixed. Thank you for bringing it to our attention.
I had swapped the partition key types around before by trial and error. Today, with the keys set as STRING, CloudWatch showed the expected error you mention ("Type mismatch for key sample_time expected: N actual: S"). But after changing them back to NUMBER, nothing else changed: the type-mismatch errors went away as expected, yet still no database entries either way. Would setting up CloudTrail on the DynamoDB table be useful here, to debug what the underlying issue is and where the data gets lost in the process?
You say there's no error in CloudWatch. Do you have event type RuleExecution for your DynamoDB action, with a status of Success?
If everything is set up correctly according to the guide and the data is being published successfully (as indicated by your success log event), but you're still facing issues on the DynamoDB side, there could be several potential causes:
- DynamoDB Table Configuration: Ensure that the DynamoDB table is correctly configured to match the expected data structure. The table's primary key (and sort key, if used) must match the fields you're expecting to insert/update with your rule.
- Rule Query Statement: Double-check the SQL statement in your AWS IoT rule. Ensure that it's correctly formatted and matches the data structure you're publishing. Pay special attention to any potential issues with the SQL syntax, such as missing commas or incorrect field references.
- Rule Action Configuration: In the rule action that writes to DynamoDB, make sure that the action is properly configured with the correct table name and role ARN. The role must have the necessary permissions to write to the DynamoDB table.
- IAM Role Permissions: Since you've mentioned modifying the IAM role, ensure that the role used by AWS IoT to write to DynamoDB has the dynamodb:PutItem permission for the target table. Also, make sure there are no explicit deny statements that could be overriding the allow permissions.
- Data Types and Key Constraints: DynamoDB requires that the data types in the payload match the data types defined in the table schema. Additionally, if you're using any sort keys, ensure that the data you're sending includes these keys and that they conform to the expected data types.
- Network Configuration: If you're using VPC endpoints or any other network configuration that could affect connectivity between AWS IoT and DynamoDB, verify that these settings are correctly configured and do not block the necessary traffic.
- CloudWatch Logs: Check the CloudWatch Logs for both AWS IoT and DynamoDB. These logs might provide more detailed error messages that can help identify the issue. Look for any errors related to rule execution or failed write attempts to DynamoDB.
- Testing with Static Data: As a troubleshooting step, try configuring the rule action to write a static piece of data to the DynamoDB table. This can help identify if the issue is with the dynamic data coming from the devices or with the rule action itself.
Given the complexity of AWS configurations and the multitude of places where things might go awry, these suggestions are starting points for deeper investigation.
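As an illustration of the rule query statement mentioned above, a minimal statement might look like the following (the topic filter 'wx/data' is a placeholder; use the topic your device actually publishes to):

```sql
-- Forward every field of the published JSON payload to the rule actions.
SELECT * FROM 'wx/data'
```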
I did the exact same things, and redid them again as before, the only difference being that I double-checked the network configuration. There's nothing but successes in my CloudWatch logs; the data just disappears once it goes past the rule, like it never existed. Still no entries using the built-in action. With a "SELECT *" statement, with word-for-word statements from the tutorial, publishing raw values: nothing works, period, even after giving all possible related permissions to both the IoT rule and DynamoDB.
I got it to work with IoT Core to Lambda to DynamoDB instead. Something is incredibly broken in the backend, and I cannot see what Amazon doesn't want people to see.
For future reference, and for the poor souls trying to trust the built-in IoT Core to DynamoDB action: either go with S3, or try doing it with Lambda instead, using the Boto3 Python library and documentation. It's far more reliable, and CloudWatch actually tells you if something is wrong.
Thanks for the answer, but alas, it's seemingly borked, or the documentation is incredibly outdated on Amazon's end.
As per my answer, the rule action works fine. Your Lambda code is correct, but I would not advise using a Lambda for an action that the rules engine can perform, because a Lambda will generally cost more.